II. A portrait in space and time of the expanding radio jet from S255 NIRS3. Based on observations carried out with the VLA and ALMA. INAF, Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze, [email protected] INAF, Osservatorio Astronomico di Capodimonte, via Moiariello 16, I-80131 Napoli, Italy Dublin Institute for Advanced Studies, School of Cosmic Physics, Astronomy & Astrophysics Section, 31 Fitzwilliam Place, Dublin 2, Ireland Thüringer Landessternwarte Tautenburg, Sternwarte 5, D-07778 Tautenburg, Germany Instituto de Astrofísica de Andalucía, CSIC, Glorieta de la Astronomía s/n, E-18008 Granada, Spain Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, F-38406 Saint Martin d’Hères, France INAF, Osservatorio Astronomico di Cagliari, Via della Scienza 5, I-09047 Selargius (CA), Italy. R. Cesaroni, [email protected] Growing observational evidence indicates that the accretion process leading to star formation may occur in an episodic way, through accretion outbursts revealed in various tracers. This phenomenon has also now been detected in association with a few young massive (proto)stars (>8 M_⊙), where an increase in the emission has been observed from the IR to the centimetre domain. In particular, the recent outburst at radio wavelengths of S255 NIRS3 has been interpreted as due to the expansion of a thermal jet, fed by part of the infalling material, a fraction of which has been converted into an outflow. We wish to follow up on our previous study of the centimetre and millimetre continuum emission from the outbursting massive (proto)star S255 NIRS3 and confirm our interpretation of the radio outburst, based on an expanding thermal jet. The source was monitored for more than 1 yr in six bands from 1.5 GHz to 45.5 GHz with the Karl G. Jansky Very Large Array, and, after an interval of ∼1.5 yr, it was imaged with the Atacama Large Millimeter/submillimeter Array at two epochs, which made it possible to detect the proper motions of the jet lobes. The prediction of our previous study is confirmed by the new results. The radio jet is found to expand, while the flux, after an initial exponential increase, appears to stabilise and eventually decline, albeit very slowly. The radio flux measured during our monitoring is attributed to a single lobe, expanding towards the NE. However, starting from 2019, a second lobe has been emerging in the opposite direction, probably powered by the same accretion outburst as the NE lobe, although with a delay of at least a couple of years. Flux densities measured at frequencies higher than 6 GHz were satisfactorily fitted with a jet model, whereas those below 6 GHz are clearly underestimated by the model. This indicates that non-thermal emission becomes dominant at long wavelengths. Our results suggest that thermal jets can be a direct consequence of accretion events, when yearly flux variations are detected. The formation of a jet lobe and its early expansion appear to have been triggered by the accretion event that started in 2015. The end of the accretion outburst is also mirrored in the radio jet. In fact, ∼1 yr after the onset of the radio outburst, the inner radius of the jet began to increase, at the same time the jet mass stopped growing, as expected if the powering mechanism of the jet is quenched. We conclude that our findings strongly support a tight connection between accretion and ejection in massive stars, consistent with a formation process involving a disk–jet system similar to that of low-mass stars. Radio outburst from a massive (proto)star R. 
Cesaroni1 L. Moscadelli1 A. Caratti o Garatti2,3 J. Eislöffel4 R. Fedriani5 R. Neri6 T. Ray3 A. Sanna7 B. Stecklum4Received date; accepted date ====================================================================================================================================================================================================§ INTRODUCTIONIn recent years, observations have provided us with increasing evidence of circumstellar rotating structures around B-type (proto)stars, especially since the advent of the Atacama Large Millimeter/submillimeter Array (ALMA; see e.g. the review by Beltrán & de Wit <cit.>). This strongly suggests that disk-mediated accretion could be a viable mechanism to feed even the most massive stars. However, we still do not understand the physical properties of these rotating structures, nor how such an accretion proceeds, whether through a smooth, continuous flow or episodically with parcels of material falling onto the star. The recent detection of outbursts (Stecklum <cit.>; Caratti o Garatti et al. <cit.>; Hunter et al. <cit.>; Burns et al. <cit.>; Chen et al. <cit.>) in a few luminous young stellar objects (YSOs) >8  provides us with the intriguing possibility that these phenomena could be the consequence of episodic accretion events, akin to those commonly observed in low-mass YSOs as FU Orionis and EX Orionis events (Audard et al. <cit.>; Fischer et al. <cit.>).In this study we focus on the outburst from the massive (proto)star , located at a distance of 1.78^+0.12_-0.11 kpc (Burns et al. <cit.>). This object is unique because the outburst has been observed not only in the emission of some maser species (Fujisawa et al. <cit.>; Moscadelli et al. <cit.>; Hirota et al. <cit.>) and in lines and continua at IR and (sub-)millimetre wavelengths (Caratti o Garatti et al. <cit.>; Uchiyama et al. <cit.>; Liu et al. <cit.>), but also in the centimetre domain (Cesaroni et al. <cit.>; hereafter Paper I). A time delay of ∼1 yr was found between the onset of the IR and radio outbursts, consistent with the different mechanisms at the origin of the two: in fact, the IR outburst is based on radiative processes propagating at velocities comparable to the speed of light, whereas the radio outburst is due to shocks expanding approximately at the speed of sound. In Paper I we present compelling evidence of an exponential increase in the radio emission from the thermal jet associated with this source. Given the existence of a disk-jet system in(Boley et al. <cit.>; Wang et al. <cit.>; Zinchenko et al. <cit.>; Liu et al. <cit.>), we believe that we are witnessing an episodic accretion event mediated by the disk, where part of the infalling material has been diverted into the associated jet (Fedriani et al. <cit.>). In Paper I we show that a simple model of an expanding jet can satisfactorily reproduce the increase in the radio flux observed in four bands. Although the emission was basically unresolved, we predicted that in a few years the jet expansion should make it possible to resolve its structure. With this in mind, we performed both monitoring of the radio emission and sub-arcsecond millimetre imaging at two epochs, several years after the beginning of the outburst. In this article we report on the results of these observations.§ OBSERVATIONSThe radio emission was monitored with the Karl G. Jansky Very Large Array (VLA) and, at a later time, with ALMA. In the following we describe the observational setup and data reduction separately for the two datasets. 
In both cases, for the phase centre we chose the position α(J2000)=06^ h 12^ m 5402, δ(J2000)=17 59 231. §.§ Very Large Array was observed at 11 epochs from January 2017 to January 2018 (project codes: 16B-427 and 17B-045). The observing dates and array configurations are listed in Table <ref>, where for the sake of completeness we have also included the 2016 data already presented in Paper I. We note that the table contains as yet unpublished data in the L band obtained in the same observing run (project 16A-424) described in Paper I.The signal was recorded with the Wideband Interferometric Digital ARchitecture (WIDAR) correlator in six bands, centred approximately at 1.5 (L band), 3 (S), 6 (C), 10 (X), 22.2 (K), and 45.5 GHz (Q), in dual polarisation mode. The total observing bandwidth (per polarisation) was 1 GHz in the L band, 2 GHz in the S band, 4 GHz in the C and X bands, and 8 GHz in the K and Q bands. The primary flux calibrator was 3C48 and the phase-calibrators were J0632+1022 in the L and S bands, J0559+2353 in the C and X bands, and J0539+1433 in the K and Q bands.We made use of the calibrated dataset provided by the NRAO pipeline and subsequent inspection of the data and imaging were performed with the CASA[The Common Astronomy Software Applications software can be downloaded at http://casa.nrao.edu] package, version 5.6.2-2. The continuum images were constructed using natural weighting to maximise flux recovery. Typical values of the 1σ RMS noise level and synthesised beam are given in Table <ref> for each band and array configuration.The flux density of the compact, variable source centred onhas been estimated inside a polygonal shape encompassing the compact, unresolved radio source and is given in Table <ref> for each band and epoch. As explained in Paper I, in our VLA observations the continuum emission fromappears as an unresolved component (the variable source of interest for our study) plus two large-scale lobes separated by ∼8, or ∼15000 au, in projection (see Fig. 1 of Paper I) that in all likelihood are originating from a previous outburst that occurred many decades ago. At the longest wavelengths it was possible to resolve the central source from the lobes only in the most extended array configuration. In the other cases, we could only measure the total flux density from all components and the corresponding values are reported in Table <ref> as upper limits. In a few cases, as indicated in the footnotes of the table, we attempted a correction to the flux, under the assumption that the flux of the lobes is constant in time and thus equal to the value measured (at a different epoch) with the A array. With this approach a caveat is in order, because a compact configuration may be sensitive to lobe structures that are resolved out in a more extended configuration. This implies that the corrected flux densities could be still an upper limit.In order to estimate the uncertainty on the flux density measurements of , we took advantage of the presence of a compact, marginally resolved continuum source located approximately at α(J2000)=06^ h 12^ m 5361, δ(J2000)=18 00 264, which happens to fall in the primary beam in all bands except band Q. 
Under the reasonable assumption that this object is not variable, by comparing the flux density measurements obtained at different epochs at the same frequency, we estimated a relative error of 10% in all bands.§.§ Atacama Large Millimeter/submillimeter Array was observed with ALMA in band 3 on June 6, 2019, and September 3, 2021 (project 2018.1.00864.S, P.I. R. Cesaroni). The main characteristics of the observations are summarised in Table <ref>. The correlator was configured with 4 units of 2 GHz, in double polarisation, centred at 85.2, 87.2, 97.2, and 99.2 GHz. The spectral resolution is 0.49 MHz (corresponding to ∼1.5–1.7 , depending on the frequency), sufficient to identify line-free channels and obtain a measurement of the continuum emission.The data were calibrated through the ALMA data reduction pipeline. For each 2 GHz correlator unit, we created a data cube using task tclean of CASA, adopting natural weighting and a circular beam of 0097 for the first epoch and 0087 for the second one. To create a continuum map for each 2 GHz band, we used the STATCONT software[https://hera.ph1.uni-koeln.de/∼sanchez/statcont] developed by Sánchez-Monge et al. (<cit.>). In this way we also obtained cubes of the continuum-subtracted line emission. Finally, the four continuum maps were averaged together to increase the signal-to-noise ratio.The measured fluxes at the two epochs are reported in Table <ref>.We note that the derivation of the continuum images described above also provides us with continuum-subtracted channel maps. Although a study of the line emission in this region goes beyond the purposes of this article, in the following we briefly consider the maps of two molecular transitions, SO(2_2–1_1) at 86093.983 MHz and H^13CN(1–0) at 86338.7 MHz. § RESULTS §.§ Continuum emission at λ=3 mmThe structure of the radio jet fromat the two epochs observed with ALMA is shown in Fig. <ref>. The most striking result is the clear expansion of the jet, which becomes significantly more elongated to both the NE and SW. At first look, the figure seems to outline a bipolar structure, where the star might be located close to the geometrical centre, between the two jet lobes. However, this is not the case. Careful comparison between Fig. <ref>a and <ref>b reveals that, while the NE peak is moving away from the centre, the SW peak stays still, consistent with this being the location of the (proto)star. This result is confirmed by the coincidence of the SW peak with the peak of the sub-millimetre emission (white cross) measured by Liu et al. (<cit.>), as expected if the (proto)star lies inside the parental molecular, dusty core. Figure <ref> clearly confirms this scenario by showing an overlay of the molecular emission maps of the SO(2_2–1_1) and H^13CN(1–0) lines observed by us, with an image of the 3 mm continuum emission. It is worth noting that in both transitions a dip is seen towards the continuum peak, which suggests that part of the line emission is likely absorbed against the bright continuum. This is consistent with the high brightness temperature of both 3 mm continuum peaks, of the order of 500 K (obtained from the maps cleaned with uniform weighting).With all the above in mind, in the following we assume that the (proto)star powering the radio jet is located at the SW peak of the 3 mm continuum emission, namely at α(J2000)=06^ h 12^ m 54012, δ(J2000)=17 59 2304. We prefer to use the peak position of our maps instead of that of the sub-millimetre maps of Liu et al. 
(<cit.>), because of the higher angular resolution (0087 instead of 014). It is also interesting to compare the jet lobes observed in our ALMA maps with those seen on a much larger scale (see Fig. 1 of Paper I). The comparison is presented in Fig. <ref>, where we show the maps of the 3.6 cm and 3 mm continuum emission. Clearly, the directions of the symmetry axes of the two pairs of lobes are remarkably different, with position angle (PA) of ∼70, on the large scale, and ∼48, on the small scale. The fact that the two axes have different orientations and do not intersect at the position of the star (i.e. at the SW peak of the 3 mm continuum) can be interpreted in two ways: either we are dealing with two jets originating from two different YSOs, or the jet is precessing and the star is moving on the plane of the sky at a different speed with respect to the large-scale lobes. The latter scenario implies that either the star or the ejected material is experiencing deceleration or acceleration, such that one of the two is lagging behind the other.Although it is impossible to rule out one of the two hypotheses, in the following we assume that all lobes belong to the same jet, because there is only one core lying along the jet axis and evidence for precession has been found in(Wang et al. <cit.>; Fedriani et al. <cit.>), as well as other similar objects (see e.g. Shepherd et al. <cit.>, Cesaroni et al. <cit.>, Sánchez-Monge et al. <cit.>, Beltrán et al. <cit.>). Further support for this hypothesis is given by the progressive change of the position angle of the jet from the large to the small scale, as shown in Table <ref>. This trend is indeed consistent with a jet outflow undergoing precession. §.§ Continuum emission at λ≥7 mmIn Fig. <ref> we present the continuum spectra obtained from the flux densities in Table <ref>.Our VLA observations ofspan an interval of time prior to the ALMA observations (see Table <ref>) and the radio emission of the variable source inis basically unresolved in all of our VLA observations. However, we can study the spatial evolution of the jet by determining the position of the peak at different times. For this purpose we fitted a 2D Gaussian to the K-band maps with sub-arcsecond resolution. We prefer the 1.3 cm data to the 7 mm data, which would provide us with better resolution, because in the K band the S/N is higher and in the Q band contamination by dust thermal emission might be present. In Fig. <ref> we plot the distribution of the peak positions thus obtained. To give an idea of the uncertainty on these positions, we also draw ellipses corresponding to one-fifth of the synthesised beams. For our analysis we also included the NE peak of our ALMA data and that of Obonyo et al. (<cit.>; hereafter OLHKP). Despite the large uncertainties, it is clear that the distance of the peak from the star is increasing with time, as expected if the jet is expanding. This expansion appears to slow down with time, because the mean velocity (projected on the plane of the sky) estimated from the ratio between the separation of the peaks at the last two epochs (ALMA data) and the corresponding time interval is ∼40 au/820 days≃84 , much less than the mean velocity from the beginning of the radio burst, namely ∼472 au/1881 days≃436 , where ∼472 au is the distance of the NE peak from the star at the last epoch. Using the same approach, Fedriani et al. (<cit.>) estimated an expansion speed of 450±50  from their IR data. We further analyse the expansion of the jet in Sect. 
<ref>.The flux density ofis changing with time in all bands, as shown in Fig. <ref>. One can identify two phases: the first when the flux increases exponentially for ∼200 days; the second when the flux remains basically constant, or slightly declines towards the end of our monitoring. This behaviour is the same at all frequencies ≥6 GHz, but is not so obvious at the two longest wavelengths, where a precise estimate of the flux density of the compact variable source is not possible for the reason explained in Sect. <ref>. Moreover, in these bands our monitoring is more limited in time. It is hence quite possible that the flux density below 6 GHz has a different behaviour than that at higher frequencies. Therefore, the radio emission at 1.5 and 3 GHz, and probably also part of the 6 GHz flux, might not be due to free-free radiation but to another mechanism, such as synchrotron emission, which has been detected towards a number of extended radio jets from YSOs (e.g. Carrasco-Gonzalez et al. <cit.>, Moscadelli et al. <cit.>, Brogan et al. <cit.>, Sanna et al. <cit.> and references therein). The evident change of slope in some of the VLA spectra seems to support this possibility. In this respect, the most representative is the spectrum acquired on 2016/10/15 (see Fig. <ref>), where the 1.5 GHz point lies well above any plausible extrapolation of the other fluxes. For all these reasons, in the following we focus our study on the emission above 6 GHz, with the caveat that even the 6 GHz flux might be partly contaminated by a non-thermal contribution, as suggested by OLHKP. It is worth noting that the existence of synchrotron emission from is supported by the recent possible detection of high-energy gamma-ray emission from this source (see Wilhelmi et al. <cit.>).§ ANALYSIS AND DISCUSSION§.§ Nature of the 3 mm continuum emissionFor the reasons presented in Sect. <ref>, we have concluded that the (proto)star should lie at the SW peak of the 3 mm continuum map. If this is the case, one expects this peak to have a significant contribution from thermal dust emission, whereas the NE peak, being part of a jet lobe, should be dominated by free-free emission. To investigate these assumptions, we computed the spectral index of the 3 mm continuum over the maps in Fig. <ref>. For this purpose, we used the maps obtained from the four correlator units centred at 85.2, 87.2, 97.2, and 99.2 GHz (see Sect. <ref>), created with the same clean beam. For each pixel of the maps, the spectral index was computed from a least-square fit to the four fluxes in a log S_ν–logν plot. The fit was performed only in those pixels where all of the four fluxes were above the 5σ level. The result is shown in Fig. <ref>, where the formal error obtained from the fit ranges from 0.03 towards the emission peaks to 0.3 towards the borders.At 3 mm it is reasonable to assume that dust emission is optically thin and the flux density is ∝ν^γ, with γ=2–4, because the dust absorption coefficient is believed to vary as ν^β with β=0–2 (see e.g. D'Alessio et al. <cit.>, Sadavoy et al. <cit.>), where β=0 corresponds to the case of large grains (`pebbles') in disks (Testi et al. <cit.>). The same assumption also holds for the free-free emission: the spectra in Fig. <ref> flatten beyond 45 GHz, and hence the flux density is ∝ν^-0.1. Therefore, more positive spectral indices can be associated with dust emission and, conversely, more negative ones with free-free emission. 
At both epochs dust emission arises from the region around the SW peak, as expected for a deeply embedded star, while the NE lobe of the jet is characterised by free-free emission. Noticeably, some free-free emission is also detected towards the most south-western tip.When estimating the flux density at wavelengths of 3 mm or shorter,it is thus necessary to distinguish between the NE lobe and the rest of the source. In Table <ref> we give all these fluxes for the two epochs of the ALMA observations. It is worth pointing out that the spectral index over the SW region is mostly 2, whereas the typical index expected for pure dust emission, should lie approximately between 2 and 4. This suggests the presence of non-negligible free-free emission around the SW peak as well. It is possible to estimate what fraction of the total flux is due to free-free as follows. The total flux can be written as S_ν=S_ν^ d+S_ν^ ff, where S_ν^ ff∝ν^-0.1 is the optically thin free-free flux and S_ν^ d∝ν^γ, with γ=2–4, is the dust flux. As previously explained, the spectral index between ν_1=85.2 GHz and ν_2=99.2 GHz was estimated assuming S_ν∝ν^α, and hence we haveS_ν_2/S_ν_1=(ν_2/ν_1)^α S_ν_2/S_ν_1=S^ d_ν_2 + S^ ff_ν_2/S^ d_ν_1 + S^ ff_ν_1 = S^ d_ν_1(ν_2/ν_1)^γ + S^ ff_ν_1(ν_2/ν_1)^-0.1/S^ d_ν_1 + S^ ff_ν_1.After some algebra, we obtain the ratioR_ S≡S^ ff_ν_1/S^ d_ν_1= (ν_2/ν_1)^γ - (ν_2/ν_1)^α/(ν_2/ν_1)^α - (ν_2/ν_1)^-0.1and, from this, the fraction of the total flux due to free-free emission, R_ S/(1+R_ S).Using the values of α in Fig. <ref>, we find that such a fraction on average ranges from 65%, for γ=2, to 85%, for γ=4. This implies that 15–20 mJy out of 23 mJy emitted by the SW component (see Table <ref>), are contributed by free-free emission, also consistent with the brightness temperature of ∼500 K measured at 3 mm towards the SW peak, probably too large to be due only to dust emission. We note that part of this free-free emission might be due to ionisation by the embedded star of ∼20  (Zinchenko et al. <cit.>).As already mentioned, some free-free emission is also detected towards the tip of the SW region and becomes more prominent and more extended with time, as one can see by comparing Fig. <ref>a to Fig. <ref>b. This hints at the existence of another jet lobe emerging from the dusty core and expanding towards the SW.To shed light on the nature of this putative lobe and set a constraint on its age, in Fig. <ref> we compare the jet structure observed by OLHKP at 1.3 cm on May 5, 2018, with our ALMA image at 3 mm.It seems that during the period of our monitoring, up to the observation of OLHKP (664 days after the onset of the radio outburst), no significant free-free emission was seen to the SW of the (proto)star at any of the wavelengths observed with the VLA. We thus conclude that in all likelihood the SW jet lobe appeared only recently, between May 2018 and June 2019. Therefore, the radio flux variations monitored by us until January 2018 are to be attributed only to the NE lobe. §.§ Origin of the SW lobe One may wonder if the emerging SW lobe corresponds to a new radio burst or is somehow related to the accretion outburst observed by Caratti o Garatti et al. (<cit.>). Only follow-up observations of the jet structure will allow us to establish if the new lobe is as prominent and long-lasting as the NE one. However, we point out that the IR monitoring performed by Uchiyama et al. (<cit.>) and Fedriani et al. 
(<cit.>) between November 2015 and February 2022 as well as the methanol maser observations[Available at http://vlbi.sci.ibaraki.ac.jp/iMet/data/192.6-00] of the Maser Monitoring Organization (M2O)[The M2O is a global cooperative of maser monitoring programmes; see https://MaserMonitoring.org] have not revealed any other burst after that of Caratti o Garatti et al. (<cit.>) and before the ejection of the SW lobe. We conclude that both jet lobes could arise from the same accretion event, although with a time lag between them of 22–35 months.This hypothesis is also supported by a noticeable feature of the jet system in , namely that the extension of the NE lobe is about twice as much as that of the SW lobe. This is true not only for the large-scale and small-scale lobes in Fig. <ref>, but also for themasers distribution, as shown in Fig. <ref>. The existence of the same asymmetry on different scales and tracers cannot be a coincidence and is suggestive of a mechanism that causes a delay of the ejection of the SW lobe with respect to the NE lobe. In our opinion, two explanations are possible: either the accretion (and hence ejection) event is intrinsically stronger on the NE side of the disk, or the expansion towards the SW is hindered by the presence of denser material. We favour the latter hypothesis since the near-IR emission is much fainter from the SW lobe than from the NE lobe (see Caratti o Garatti et al. <cit.> and Fedriani et al. <cit.>). This finding is surprising, because the SW lobe corresponds to the blue-shifted emission of the jet outflow (see Wang et al. <cit.>), which means that the jet is pointing towards the observer on that side and the extinction should be lower than on the NE side. The weakness of the IR emission to the SW is thus indicative of an asymmetry in the density distribution along the jet axis, a fact that could naturally also explain the delay of the expansion of the SW lobe with respect to the NE lobe. §.§ Evolution of the radio emission In this section we present a model fit to the observed spectra, following the approach adopted in Paper I, with some modifications. As done in Paper I, we adopted the `standard spherical' model from Reynolds (<cit.>) to describe the radio continuum emission from the jet. In practice, this means that we assume a jet where at a given time the opening angle, electron temperature, ionisation degree, and expansion speed do not depend on the distance from the star, r.§.§.§ Expansion law of the jetIn Paper I the jet was assumed to undergo expansion at constant velocity so that the maximum radius could be described by the simple expression (t)=(0)+ t, with =900 . While this assumption could hold for the first few months after the onset of the radio outburst, it is inconsistent with the most recent data. In fact, in Sect. <ref> we show that the jet expansion is slowing down with time. We thus need to adopt a more realistic law for (t).For this purpose, we assume that the jet is expanding in a medium with density ∝ r^-2, with r the distance from the star. It is possible to demonstrate (see Appendix <ref>) that applying momentum conservation one obtains(t) =+ 2Tcosψ(√(1+t/T)-1) ,where we have multiplied both terms of Eq. (<ref>) by cosψ, with ψ the angle between the jet axis and the plane of the sky. For consistency with Reynolds (<cit.>), we have indicated with y the projection of r on the plane of the sky. 
Here,T is a suitable timescale, andandare, respectively, the expansion velocity and the value ofat the onset of the radio outburst (i.e. on July 10, 2016). As in Paper I, we chose =900 , while the two parameters T andcan be determined from the values ofestimated from the two ALMA maps at t=1061 days and t=1881 days.The problem is thatcannot be trivially obtained from the position of the NE peak, which corresponds to the maximum brightness temperature. This temperature is attained either at the inner radius, if the whole jet is optically thin, or at the border between the optically thin and the optically thick parts of the jet (denoted by y_1 in Reynolds' notation). Beyond this point, the emission is optically thin and the brightness temperature scales with the opacity τ∝ r^-3, as from Reynolds' Eq. (4). For this reason, the jet can extend much beyond the position of the observed peak.In order to obtain a reliable estimate of , we have fitted the NE lobe in the two ALMA maps of Fig. <ref> assuming that the 3 mm emission is optically thin all over the jet surface. Under this hypothesis the brightness temperature can be expressed asT_ B = T_0 τ = T_0 (y/)^-3 ,where y=r cosψ is the projection of r on the plane of the sky and = cosψ withthe inner radius of the jet. For a given set of , , and(the opening angle of the jet) a map was generated, convolved with the instrumental beam, and the model brightness temperature was computed for each pixel of the observed map. The best fit was obtained by minimising the expressionχ^2=∑_i (T_ B^i( model)-T_ B^i( data))^2 ,with i a generic pixel, after varying the three parameters over suitable ranges. The best-fit models are compared to the observed maps in Fig. <ref> and correspond to =9, =0206=367 au, and =029=516 au, for the first map, and =24, =0207=368 au, and =0356=634 au, for the second map.From Eq. (<ref>) written for t=1061 days and t=1881 days, one obtains a system of two equations in the two unknowns T and , the solutions to which are T=124 days and =251 au. Hence, one has( au) = 251 + 127 (√(1+t( days)/124)-1).The value ofcan be computed from Eq. (<ref>) assuming ψ=10(see Paper I, where the complementary anglewas used). Figure <ref> compares our solution, , as a function of time (blue curve)with (i) the positions of the maximum brightness temperature (the same as in Fig. <ref>) and (ii) the value of y_1 obtained from the model fits described later in Sect. <ref>. The latter corresponds to the border between the optically thin and optically thick parts of the jet. Clearly, the brightness appears to peak much closer to y_1 than to , as previously mentioned.§.§.§ Modelling the jet variabilityWe wish to reproduce the spectral variation shown in Fig. <ref> and thus derive the values of the jet parameters as a function of time. For this purpose, we introduce some modifications with respect to the original model by Reynolds (<cit.>), which was used in Paper I. Reynolds' equations were derived under the approximation of small , the opening angle of the jet[We stress that our definition ofcorresponds to one-half of thedefined by Reynolds (<cit.>).]. However, in Paper I we find that ≃20–50is needed to fit the spectra. In order to overcome this limitation we re-wrote Reynolds' equations under suitable assumptions, as detailed in Appendix <ref>, so that they are now valid for any <90.All observed spectra have been fitted with Eq. (<ref>) using only the measurements with ν≥6 GHz, for the reasons discussed in Sect. <ref>. 
We stress that, unlike Paper I, here we assume the jet to be mono-polar, because there is no hint of the existence of a SW lobe during the whole period of our VLA monitoring (see Sect. <ref>). The input parameters of the model are the angle between the jet axis and the plane of the sky, ψ, the ionised gas temperature, T_0, the inner radius, , the projection of the outer radius on the plane of the sky, , the opening angle, , and the parameter =x_0 Ṁ/_0, where x_0 is the fraction of ionised gas, _0 the expansion velocity, and Ṁ the total mass loss rate (neutral plus ionised). The quantities T_0, _0, and x_0 are assumed to be constant along the jet, while T_0 is also constant in time.To simplify the fitting procedure as much as possible, we fixed T_0=10^4 K and ψ=10 (see Paper I), and computedfrom Eq. (<ref>). Unlike in Paper I, we decided to leavefree, because a priori the inner radius could change while the jet is expanding. So, we are left with three free parameters: , , and . The best fit to each spectrum has been obtained by minimising the χ^2 given in Eq. (10) of Paper I, after varying the parameters over the ranges =3–50, =50–300 au, and =10^-10–10^-7  yr^-1/(). The best-fit spectra are represented by the solid curves in Fig. <ref> and the best-fit parameters are given in Table <ref>. The errors on the parameters have been computed using the criterion of Lampton et al. (<cit.>), as done in Paper I.While most of the fits look to be in agreement with the data within the uncertainties, the 6 GHz fluxes appear to be underestimated by the model at the last five to six epochs. In our opinion, such a discrepancy could indicate contamination from non-thermal emission at this frequency, as already suggested by OLHKP. Indeed, as discussed in Sect. <ref>, non-thermal emission is very prominent at longer wavelengths in the same spectra (red points in Fig. <ref>) and it is hence not surprising that this type of emission can contribute significantly to the flux up to 6 GHz.In Fig. <ref> we plot the best-fit parameters as a function of time. One sees that the opening angle rapidly increases up to the maximum value allowed by us. It may seem that an opening angle of 50 is too large for a jet, but it is similar to that predicted by theory for a jet powered by a ∼20  YSO (Zinchenko et al. <cit.>), namely ∼52, obtained by interpolating the values in Table 2 of Staff et al. (<cit.>). Moreover, we can obtain a direct estimate offrom the observed peak brightness temperature in the synthesised beam, , assuming optically thick emission. Approximating the jet as a Gaussian source, one has=Θ_ S^2/Θ_ S^2+Θ_ B^2 ,whereis the intrinsic brightness temperature of the source and Θ_ S and Θ_B are, respectively, the full widths at half power of the source and synthesised beam. Consequently,Θ_ S = Θ_ B√(/-).This expression can be used to calculate a lower limit on the source diameter, Θ_ S^ min, assuming that the free-free emission is optically thick, namely for ==10^4 K. Correspondingly, a lower limit on the opening angle is obtained from^ min = arcsin(Θ_ S^ min/2Δ) = arcsin(Θ_ B/2Δ√(/-)),where Δ is the separation between the peak ofand the position of the (proto)star. We estimated ^ min for all of our maps, and for each epoch we computed the mean ^ min obtained from the four bands. This is plotted in Fig. <ref> as a function of time. For the sake of comparison we also plotobtained from our model fits (dashed line). We conclude that, despite the crude approximations adopted to derive Eq. 
(<ref>), values ofof a few times 10seem plausible and consistent with our model fit results. It is worth noting thatcomputed from the map of OLHKP and from our ALMA maps (Fig. <ref>) using Eq. (<ref>) turns out to be much less, namely 7, 17, and 13, respectively 664, 1061, and 1881 days after the radio outburst. We speculate that the jet could be re-collimating on a timescale of a couple of years.Another interesting result from Fig. <ref> is the behaviour of , which remains basically constant for ∼250 days after the radio outburst, thereafter showing a systematic increase until the end of our monitoring – and even beyond that, as we estimate values of ∼370 au at the time of our first ALMA observations (see Sect. <ref>). A straightforward interpretation is that the inner radius expands when the mechanism feeding the jet is switched off.Noticeably,appears to reach a maximum just whenstarts increasing, supporting the idea that no more material is added to the jet. This hypothesis is confirmed by Fig. <ref>, which shows how the ionised jet mass, computed from Eq. (<ref>), varies with time. In the same figure we also plotfor the sake of comparison. It is quite clear that both quantities have a bi-modal temporal behaviour, before and after the time interval marked by the grey area. From the onset of the radio outburst up to ∼250 days,remains approximately constant, whereas the jet mass increases. After that, the reverse occurs: as soon as the inner radius starts increasing, the jet stops growing in mass, with the grey area marking the transition between these two phases. This is consistent with mass conservation in an expanding jet that is not fed anymore by the outburst.It is also worth noting that the ratio, R_ e, between the mass ejected, M_ jet, and that accreted during the outburst, M_ acc, can be computed from the final value of M_ jet in Fig. <ref> and that quoted by Caratti o Garatti et al. (<cit.>) (M_ acc≃3.4×10^-3 ). One obtains R_ e≃7.5×10^-5/(3.4×10^-3 )≃2.2×10^-2/. Since by definition ≤1, we conclude that R_ e must be greater than a few percent. Vice versa, assuming that at least 10% of the infalling material will be redirected into outflow, one has R_ e>0.1, which sets an upper limit of ∼0.2 on the ionisation fraction, consistent with the estimate obtained in Paper I and the values estimated for similar sources (see Fedriani et al. <cit.>).§ SUMMARY AND CONCLUSIONSAs a follow-up to Paper I, we monitored the radio continuum emission of the outburst from the massive YSOover ∼13 months at six wavelengths with the VLA. We also imaged the radio jet at 3 mm with ALMA at two epochs separated by ∼27 months, with an angular resolution 01. Our results indicate that after an exponential increase in the radio flux in all observed bands, the intensity becomes constant or slightly decreasing. A comparison of the two ALMA maps shows that the radio jet is expanding both to the NE and SW, although only the NE lobe was present during our VLA monitoring. The SW lobe appeared between May 2018 and June 2019, namely at a much later time than the NE lobe. We believe that this ejection event is related to the same accretion outburst, which occurred in 2015. 
We speculate that the delay between the two lobes might be due to a greater density of the medium facing the SW lobe, which could curb the expansion of the jet on that side.From the analysis of the continuum spectra, we infer that two mechanisms are needed to explain the observed fluxes: free-free emission at short wavelengths and non-thermal (probably synchrotron) emission at long wavelengths. We believe that the latter should become dominant at frequencies 6 GHz. For this reason, we fitted only the data with ν≥6 GHz using a slightly modified version of the jet model adopted in Paper I, which works for any opening angle of the jet <90. We conclude that the spectra can be satisfactorily reproduced with an expanding, decelerating, mono-polar thermal jet that was actively powered by the outburst until mid-2017. After this date, no more mass is injected into the lobes and the inner jet radius expands, which, over the long term, is bound to give rise to one of the knots that characterise thermal jets from YSOs. A.C.G. acknowledges from PRIN-MUR 2022 20228JPA3A “The path to star and planet formation in the JWST era (PATH)” and by INAF-GoG 2022 “NIR-dark Accretion Outbursts in Massive Young stellar objects (NAOMY)” and Large Grant INAF 2022 “YSOs Outflows, Disks and Accretion: towards a global framework for the evolution of planet forming systems (YODA)”. R.F. acknowledges support from the grants Juan de la Cierva JC2021-046802-I, PID2020-114461GB-I00 and CEX2021-001131-S funded by MCIN/AEI/ 10.13039/501100011033 and by “European Union NextGenerationEU/PRTR”. T.P.R acknowledges support from ERC grant 743029 EASY. This study is based on observations made under project 16A-424, 16B-427, and 17B-045 of the VLA of NRAO. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.. This paper makes also use of the following ALMA data: ADS/JAO.ALMA#2018.1.00864.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.[2014]audAudard, M., Ábrahám, P., Dunham, M. M., et al. 2014, inProtostars and Planets VI, ed. T. Henning,C. P. Dullemond, R. S. Klessen and H. Beuther(Tucson: Univ. of Arizona Press), 387 [2016]beldewBeltrán, M. T. & de Wit, W. J. 2016, A&ARv, 24, 6 [2016]bel16Beltrán, M. T., Cesaroni, R., Moscadelli, L., et al. 2016, A&A, 593, A49 [2013]bole13Boley, P.A., Linz, H., van Boekel, R., et al. 2013, A&A, 558, A24 [2018]brog18Brogan, C. L., Hunter, T. R., Cyganowski, C. J., et al. 2018, ApJ, 866, 87 [2016]burns16Burns, R. A., Handa, T., Nagayama, T., Sunada, K., & Omodaka, T. 2016, MNRAS, 460, 283 [2020]burns20Burns, R. A., Sugyiama, K., Hirota, T., et al. 2020, Nature Astronomy, 4, 506 [2023]burns23Burns, R. A., Uno, Y., Sakai, N., et al. 2023, Nature Astronomy, 7, 557 [2017]caganaCaratti o Garatti, A., Stecklum, B., Garcia Lopez, R., et al. 2017, Nature Physics, 13, 276 [2010]cargon10Carraco-González, C., Rodríguez, L.F., Anglada, G. et al. 2010, Science, 330, 1209 [2005]cesa05Cesaroni, R., Neri, R., Olmi, L., et al. 2005, A&A, 434, 1039 [2018]cesa18Cesaroni, R., Moscadelli, L., Neri, R., et al. 2018, A&A, 612, A103 (Paper I) [2021]chen21Chen, Z., Sun, W., Chini, R., et al. 2021, ApJ, 922, 90 [2001]dale01D'Alessio, P., Calvet, N., & Hartmann, L. 
2001, ApJ, 553, 321 [2019]fedr19 Fedriani, R., Caratti o Garatti, A., Purser, S. J. D., et al. 2019, Nature Communications, 10, 3630 [2023]fedr23 Fedriani, R., Caratti o Garatti, A., Cesaroni, R., et al. 2023, A&A, submitted [2023]fischppvii Fischer, W. J., Hillenbrand, L. A., Herczeg, G. J., et al. in Protostars and Planets VII, in press [2015]fujiFujisawa, K., Yonekura, Y., Sugiyama, K., et al. 2015, The Astronomer's Telegram, 8286, 1 [2007]goddi07Goddi, C., Moscadelli, L., Sanna, A., Cesaroni, R., & Minier, V. 2007, A&A, 461, 1027 [2021]hiro21Hirota, T., Cesaroni, R., Moscadelli, L., et al. 2021, A&A, 647, A23 [2017]hunt17Hunter, T. R., Brogan, C. L., MacLeod, G., et al. 2017, ApJ, 837, L29 [2021]hunt21Hunter, T. R., Brogan, C. L., De Buizer, J. M., et al. 2021, ApJ, 912, L17 [1976]lampLampton, M., Margon, B., & Bowyer, S. 1976, ApJ, 208, 177 [2018]liu18Liu, S.-Y., Su, Y.-N., Zinchenko, I., Wang, K.-S., Wang, Y. 2018, ApJ, 863, L12 [2020]liu20Liu, S.-Y., Su, Y.-N., Zinchenko, I., et al. 2020, ApJ, 904, 181 [2013]mosca13Moscadelli, L., Cesaroni, R., Sánchez-Monge, Á., et al. 2013, A&A, 558, A145 [2017]mosca17Moscadelli, L., Sanna, A., Goddi, C., et al. 2017, A&A, 600, L8 [2021]obonyoObonyo, W. O., Lumsden, S. L., Hoare, M. G., Kurtz, S. E., & Purser, S. J. D. 2021, MNRAS, 501, 5197 (OLHKP) [1986]reynReynolds, S.P. 1986, ApJ, 304, 713 [2013]sada13Sadavoy, S. I., Di Francesco, J., Johnstone, D., et al. 2013, ApJ, 767, 126 [2014]sanch14Sánchez-Monge, Á., Beltrán, M. T., Cesaroni, R., et al. 2014, A&A, 569, A11 [2018]statcontSánchez-Monge, Á., Schilke, P., Ginzburg, A., Cesaroni, R., & Schmiedeke, A. 2018, A&A, 609, A101 [2019]sanna19Sanna, A., Moscadelli, L., Goddi, C., et al. 2019, A&A, 623, L3 [2000]shep00Shepherd, D.S., Yu, K.C., Bally, J., & Testi, L. 2000, ApJ, 535, 833 [2019]staff19Staff, J. E., Tanaka, K. E. I., & Tan, J. C. 2019, ApJ 882, 123 [2016]steck16Stecklum, B., Caratti o Garatti, A., Cardenas, M. C., et al. 2016, The Astronomer's Telegram, 8732, 1 [2021]steck21Stecklum, B., Wolf V., Linz H., et al. 2021, A&A, 646, A161 [2014]testiTesti, L., Birnstiel, T., Ricci, L., et al. 2014, inProtostars and Planets VI, ed. T. Henning,C. P. Dullemond, R. S. Klessen and H. Beuther(Tucson: Univ. of Arizona Press), 339 [2020]uchiUchiyama, M., Yamashita, T., Sugyiama, K., et al. 2020, PASJ, 72, 4-1 [2023]wilh23Wilhelmi, E. de Oña, López-Coto, R., & Su, Y. 2023, MNRAS, in press [2011]wangWang, Y., Beuther, H., Bik, A., et al. 2011, A&A, 527, A32 [2015]zin15Zinchenko, I., Liu, S.-Y., Su, Y.-N., et al. 2015, ApJ, 810, 10§ JET EXPANSION LAWAs discussed in Sect. <ref>, the jet is not expanding at constant velocity during the period of our monitoring, but it appears to slow down. It is thus necessary to adopt an expression for the maximum radius, , that properly takes this effect into account.A reasonable scenario may be that of a jet confined in a solid angle , with initial mass _0, which expands with initial velocity _0 through a medium with density ρ∝ r^-2, where r is the distance from the jet origin. Because of momentum conservation one can write_0 _0 =(t) /ṭ =[_0 + ∫_^ρ_0(/r)^2r^2ṛ] /ṭ =[_0 + ρ_0 ^2 (-) ] /ṭ ,with ρ_0 density at radius . 
The solution of this differential equation is_0 _0 t =_0 (-)+ ρ_0 ^2 [^2-^2/2 - (-)] ,which after some algebra gives(t) =+ 2 T _0 ( √(1+t/T)),where we have defined T ≡_0/(2ρ_0^2 _0).While this expression provides us with a more realistic description of the jet expansion than the constant velocity assumption adopted in Paper I, we stress that it is not to be taken as the real equation of motion of the jet but as the simplest way to parametrise the observed deceleration of it.§ DESCRIPTION OF THE MODELIn Paper I we adopted the jet model by Reynolds (<cit.>) to describe the integrated flux density of the radio jet from . More specifically, we assumed what is defined as the `standard spherical' case in Reynolds' Table 1, namely a jet where the opening angle (), internal velocity (), ionisation degree (), and temperature () do not depend on the distance r from the star. Conservation of mass along the flow implies that the gas number density can be expressed as n=(r/)^-2.Despite its simplicity, the model was successful in fitting the observed spectra in Paper I. However, Reynolds' equations have been derived under the assumptions of small , an approximation that is not satisfied by the best fit to the radio spectra obtained in Paper I (see Table 3 there), which requires angles as large as ∼50. In order to overcome this limitation, we propose here a slightly modified version of Reynolds' model that works for any <90.To allow for an analytic solution of the equations, we maintain Reynolds' assumption that the opacity depends only on r. This is equivalent to assuming that the jet is not conical but has a pyramidal shape with two faces parallel to the line of sight. We also assumed that the jet is delimited by two cylindrical surfaces, with radiiand , and that its axis is inclined by a small angle, ψ, with respect to the plane of the sky. Figure <ref> schematically illustrates the geometry of the jet, where the star lies at the origin of the axes and the line of sight is parallel to the z-axis.Following Reynolds, the absorption coefficient of the ionised gas and the opacity can be written asκ(r) = κ_0 (r/)^-4 ,whereκ_0 = a_κ^2 ^2 ^-1.35ν^-2.1 ,with a_κ=0.212 in CGS units, andτ(R) =∫_-Rtan(-ψ)^Rtan(+ψ)κ ẓ = κ_0 ^4 ∫_-Rtan(-ψ)^Rtan(+ψ)(R^2+z^2)^-2 ẓ =κ_0^4/2 R^3[ tan(+ψ)/1+tan^2(+ψ) + tan(-ψ)/1+tan^2(-ψ) + 2]=κ_0^4/2 R^3[ sin(2) cos(2ψ) + 2]=τ_0 (/R)^3.Here we have defined the two quantitiesR=√(x^2+y^2)andτ_0 = κ_0 sin(2) cos(2ψ) + 2/2. To ease the comparison with Reynolds' expressions, we indicate withandthe projections ofandon the plane of the sky, namely =cosψ and =cosψ, and with y_1 the value of R at which τ(R)=1. From Eq. (<ref>) one hasy_1 = τ_0^1/3. For the calculation of the total flux density of the jet we follow Reynolds' approach and assume that the emission between R=0 and R=y_1 is optically thick, and that between R=y_1 and R= is optically thin. Under this approximation the flux can be written asS_ν=1/d^2∫_^ B_ν() (1-^-τ) 2 RṚ≃ 2 B_ν()/d^2[ ∫_^y_1 RṚ + ∫_y_1^τ(R) RṚ]=2 B_ν()/d^2[ y_1^2-^2/2 + y_1^3 (1/y_1-1/) ] ,where B_ν is the Planck function andis the projection ofon the plane of the sky.The two angles are related by the expressiontan = tan/cosψ.We note that it is not strictly correct to integrate fromtobecause the projections of the two circles with radiiandon the plane of the sky are ellipses, and the integral should be made not only in the variable R but also in the azimuthal angle. 
However, the approximation adopted by us is acceptable for small ψ.The expression of S_ν in the case of a jet totally thin (y_1<) or thick (y_1>) can be obtained in a similar way, by considering only the relevant approximation in the argument of the integral in Eq. (<ref>). In conclusion, one obtainsS_ν=2 B_ν()/d^2 ×{[ y_1^3 (1/-1/) ⇔y_1≤; [ y_1^2-^2/2 + y_1^3 (1/y_1-1/) ] ⇔ <y_1<; ^2-^2/2 ⇔y_1≥ ]..We note that this expression gives the total flux emitted by a single jet lobe, whereas Eq. (5) of Paper I takes into account both lobes. The different approach is justified by the fact that our new findings have proved that only the NE lobe was present during our monitoring (see Sect. <ref>).The quantity y_1 is a function of , in addition to other parameters. However, following Reynolds, it may be convenient to express it in terms of the mass loss rate of the jet, Ṁ, which is obtained by integrating the flux of mass through the inner surface of the jet. Obviously, Ṁ does not depend on the inclination of the jet with respect to the line of sight, so we can simplify the calculation by assuming ψ=0. Hence, we obtainṀ=∫_-tan^tanμ^2/^2+z^2/√(^2+z^2) 2 ẓ = 4 μ^4 ∫_0^tan(^2+z^2)^-3/2 ẓ = 4 μ^2 tan/√(1+tan^2) ,where μ is the mean particle mass per hydrogen atom (we assume μ=1.67×10^-24 g) and /√(^2+z^2) is the component of the velocity perpendicular to the inner surface of the jet. From this it is trivial to expressas a function of Ṁ, and from Eqs. (<ref>), (<ref>), and (<ref>) one obtainsy_1 = ( a_κ/16μ^2^2 ^-1.35ν^-2.1 1+tan^2/tan^2 sin(2)cos(2ψ)+2/2^2)^1/3 ,where we have defined ≡Ṁ/.Finally, a quantity of interest for our purposes is the ionised mass of the jet, which is given by the expression=∫_-^θ̣∫_^ RṚ∫_-Rtan^Rtanμ^2/R^2+z^2 ẓ = 4 μ^2 ^2 (-)=√(1+tan^2)/tan ,where we have used Eq. (<ref>) to replace .
http://arxiv.org/abs/2310.18002v1
{ "authors": [ "R. Cesaroni", "L. Moscadelli", "A. Caratti o Garatti", "J. Eisloeffel", "R. Fedriani", "R. Neri", "T. Ray", "A. Sanna", "B. Stecklum" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231027091721", "title": "Radio outburst from a massive (proto)star. II. A portrait in space and time of the expanding radio jet from S255 NIRS3" }
Stability and Accuracy analysis of the θ Method and 3-Point Time filter. The research was partially supported by NSF grant DMS-2110379.
Nicholas Hurl, Department of Mathematics, Duquesne University, Pittsburgh, PA-15282 ([email protected]). Farjana Siddiqua, Department of Mathematics, University of Pittsburgh, Pittsburgh, PA-15260 ([email protected]). Shuxian Xu, Department of Mathematics, University of Pittsburgh ([email protected]).
January 14, 2024
======
Functional magnetic resonance imaging or functional MRI (fMRI) is a very popular tool used for differentiating brain regions by measuring brain activity. It is affected by physiological noise, such as head and brain movement in the scanner from breathing, heartbeats, or the subject fidgeting. The purpose of this paper is to propose a novel approach to handling fMRI data for infants with high volatility caused by sudden head movements. Another purpose is to evaluate the volatility modelling performance on multiple dependent fMRI time series data. The models examined in this paper are AR and GARCH, and the modelling performance is evaluated by several statistical performance measures. The conclusions of this paper are that multiple dependent fMRI series can be fitted with an AR + GARCH model if the data contain many sudden head movements. The GARCH model can capture the shared volatility clustering caused by head movements across brain regions. However, for multiple fMRI series without many head movements, the AR + GARCH model fits with varying performance. The conclusions are supported by statistical tests and measures. This paper highlights the difference between the proposed approach and traditional approaches when estimating model parameters and modelling conditional variances on multiple dependent time series. In the future, the proposed approach can be applied to other research fields, such as financial economics and signal processing. Code is available at <https://github.com/13204942/STAT40710>.
§ INTRODUCTION
In the statistical analysis of time series, the ARMA model describes a stationary stochastic process with two polynomials, one for the autoregression (AR) and the other for the moving average (MA) <cit.>. The GARCH model describes the conditional variance of the current error term as a function of the squares of previous innovations. Since such a model is used to model time series that exhibit volatility clustering across time <cit.> and to forecast variances across time, the family of GARCH models has been widely developed <cit.>. In recent years, many researchers have achieved outstanding performance in applications of ARMA and GARCH models in different research fields, e.g., forecasting a stock index by applying ARIMA + GARCH models in 2021 <cit.>; obtaining the corresponding fluctuation characteristics and forecasting transport flow with ARIMA + GARCH models <cit.>; and so on. In neuroscience, researchers are interested in distinguishing the intrinsic timescales of brain regions. Takuya Ito used the autocorrelation function to perform this analysis <cit.>. John Fallon characterised BOLD signal dynamics by comparing over 6,000 neural activity time series. 
Typically, the fMRI data in different brain regions are modelled with an AR(u) or an ARMA(1,1) process <cit.>. However, the BOLD signal variability is sensitive to age differences and cognitive function <cit.>. In analysing infants' brain fMRI data, similar technical approaches are still applied. Certainly, some challenges specific to infant brain fMRI studies cannot be ignored, so novel technical approaches need to be proposed. One obvious challenge is motion. In general, head movement presents a substantial challenge to fMRI research. Infants tend to squirm and find it hard to stay still, so head movement is particularly problematic in infants. As a result, infants' brain fMRI data show sudden timescale volatility. Nevertheless, few studies have directly explored the relationship between timescale volatility in different brain regions of infants. Therefore, this research project focuses on identifying and modelling the dependent volatility clustering across different brain regions by examining the heteroscedasticity arising from head motion.
Many researchers have applied statistical approaches and ARMA + GARCH models for analysing time-varying variances and correlations, primarily in financial time series analysis. This is because economic time series exhibit volatility clustering, which limits the usefulness of pure ARMA models. The novel aspect of this research project is adapting such approaches to model volatility in fMRI data for infants.
This research project is concerned with the statistical analysis of functional magnetic resonance imaging (fMRI) time series data for infants. Functional MRI is widely used to measure brain activity. The technology behind it relies on the observation that blood flow increases in the areas of the human brain that are in use <cit.>. A common way to measure fMRI data is the blood oxygenation level-dependent (BOLD) signal. The reason is that neural activity triggers changes in brain blood volume, flow, and oxygenation; however, this mechanism is not fully understood <cit.>. The fMRI data can be coded as a set of time series. Each time series contains the observed BOLD signal variation for a single voxel (a unit representing the signal in brain scans on a three-dimensional grid) at the same time points acquired in a single session. A single voxel represents a tiny cube of brain tissue that can consist of a million brain cells. Researchers can perform statistical analysis of fMRI data to separate the signals induced by neural activity from noise. The observed BOLD signal is used to infer task-related activations in specific brain regions. Many studies have successfully captured task-evoked neural response patterns associated with cognitive processes in brain regions <cit.>.
Because the BOLD signal in the brain is sensitive to fluctuations in non-neuronal activity, head motion has significant, systematic effects on fMRI network measures. Different levels of head motion cause differences in the BOLD signal that could be mistaken for neuronal activity <cit.>. Typically, the fMRI data can be modelled using the autoregressive model AR(u) or the autoregressive–moving-average model ARMA(u,v). Note that infants tend to move more than adults during brain fMRI scans, resulting in sudden volatility of infants' fMRI data over time. 
However, few studies perform statistical analysis on this type of time series data. This research studies modelling problems for a cluster of associated time series data sets with shared volatility over time. The proposed novel statistical approach fits the model to infants' voxel-level fMRI time series data. The research is challenging as it aims to build multiple models constructed from independent AR(u) models and one shared Generalized Autoregressive Conditional Heteroskedasticity GARCH(p,q) model. Equivalently, the white noise term of each AR(u) model is modelled by the same GARCH(p,q) model. No published studies have demonstrated the ability to identify the correct orders of such AR + GARCH models and to estimate their parameters. Therefore, the main concerns in this research include model identification, model diagnostics and parameter estimation for the multiple associated models ARMA(u,v) + GARCH(p,q).

Many neuroscience researchers will benefit from this research since they have shown high interest in measuring the activity of different brain regions by exploring fMRI data <cit.>. Moreover, the novel approach can be extended to modelling financial time series data with sudden volatility caused by exogenous shocks, e.g., company share values. Researchers can adapt this novel approach to perform statistical analysis of complex nonlinear data with volatility in various research fields.

§ THEORY

§.§ ARMA time series models

If a series is partly autoregressive and partly moving average, it follows a general time series model, named a mixed autoregressive moving average model of orders p and q, ARMA(p,q). It is generally represented by the equation below <cit.>:

Y_t = ϕ_1 Y_{t-1} + ϕ_2 Y_{t-2} + ⋯ + ϕ_p Y_{t-p} + e_t − θ_1 e_{t-1} − θ_2 e_{t-2} − ⋯ − θ_q e_{t-q}

For the general ARMA(p,q) model, stationarity requires the roots of the AR characteristic polynomial to lie outside the unit circle; for AR(1), this reduces to |ϕ_1| < 1 <cit.>. To identify the values of p and q, the sample autocorrelation function (ACF) in Equation <ref> and the partial autocorrelation function (PACF) in Equation <ref> are applied <cit.>:

ρ_k = ∑_{t=1}^{n-k} (Y_t − Y̅)(Y_{t+k} − Y̅) / ∑_{t=1}^{n} (Y_t − Y̅)^2,   k = 1, 2, ⋯

ϕ_{k,k} = ( ρ_k − ∑_{j=1}^{k-1} ϕ_{k-1,j} ρ_{k-j} ) / ( 1 − ∑_{j=1}^{k-1} ϕ_{k-1,j} ρ_j ),   where ϕ_{k,j} = ϕ_{k-1,j} − ϕ_{k,k} ϕ_{k-1,k-j} for j = 1, 2, ⋯, k−1

For the ARMA(p,q) model, to identify p, Quenouille (1949) showed that the following approximation holds <cit.> for a white noise process:

var(ϕ̂_{k,k}) ≈ 1/n

Thus ±2/√n is the critical limit on ϕ̂_{k,k} for testing the null hypothesis that an AR(p) model is correct. If ϕ̂_{k,k} drops to statistical zero for all lags beyond some k, there is evidence for p = k. To identify q, ±2/√n can also be used as the critical limits on ρ̂_k: the null hypothesis is rejected if and only if ρ̂_k exceeds these limits, and a cut-off of the ACF after lag k provides evidence for q = k.

The sample ACF and PACF are practical visual tools for identifying the orders of AR and MA models. But for a mixed ARMA model, the ACF and PACF have infinitely many nonzero values. A summary of ACF and PACF behaviours is shown in Table <ref>. It is therefore challenging to identify p and q <cit.>. Although other graphical tools have been proposed to support the identification of p and q, in this project only an AR(p) model or an MA(q) model is considered instead of a mixed ARMA(p,q) model. This is due to the invertibility characteristic of the AR and MA models, which is explained in Section <ref>.

§.§ Invertibility

An MA(q) model can be invertible, which means it can be inverted into an infinite-order AR model.
For instance, an MA(1) model is considered:

Y_t = ε_t − θ ε_{t-1}

Equation <ref> can be rewritten as:

ε_t = Y_t + θ Y_{t-1} + θ^2 Y_{t-2} + ⋯

or

Y_t = (−θ Y_{t-1} − θ^2 Y_{t-2} − θ^3 Y_{t-3} − ⋯) + ε_t

by continuously replacing t with t − 1 and substituting for ε_{t-1}, provided |θ| < 1 <cit.>. A general MA(q) or ARMA(p,q) model, if it is invertible, can be inverted to an AR(p) model with a large p. In this project, the proposed model will only have an AR(p) component instead of an ARMA(p,q) one. When p is increased to a large value, the model equivalently has an ARMA(p,q) component.

§.§ Time series model of heteroscedasticity

Consider a single time series Y_t with high volatility; the conditional variance of Y_t is given by the past values Y_{t-1}, Y_{t-2}, ⋯. In practice, the one-step-ahead conditional variance varies with the current and past values, so the conditional variance is itself a random process. To study the volatility of a time series, applying the McLeod-Li test <cit.> for the presence of volatility is useful. In 1982, Engle first proposed the autoregressive conditional heteroscedasticity (ARCH) model for modelling the changing variance of a time series <cit.>. The null hypothesis of the McLeod-Li test is that no autoregressive conditional heteroskedasticity (ARCH) is present among the lags considered. The test computes the squared series (or the squared residuals from an ARMA model) and then performs a Ljung-Box test <cit.> on the result. When k out of n p-values are significant at the 0.05 level among the n lags considered, and k/n > 0.05, the null hypothesis is likely to be rejected.

Another popular model to represent the dynamic evolution of volatility in time series is the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model <cit.>:

Y_t = σ_{t|t-1} ε_t
σ_{t|t-1}^2 = α_0 + α_1 Y_{t-1}^2 + ⋯ + α_q Y_{t-q}^2 + β_1 σ_{t-1|t-2}^2 + ⋯ + β_p σ_{t-p|t-p-1}^2
ε_t ∼ N(0,1)

The orders of the GARCH model are p and q. In Equation <ref>, the standardized residuals ε̂_t are computed as Y_t / σ_{t|t-1}. If the GARCH model is correct, the ε̂_t are independent and identically distributed <cit.>. To identify p and q in GARCH(p,q), the method examines the ACF and PACF of the squared series: if Y_t follows a GARCH(p,q) process, then Y^2_t follows an ARMA(max(p,q),p) process <cit.>. However, it is sometimes difficult to identify p and q because of strong fluctuations and high variance in the data. Well-known information criteria, such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), can help choose the correct model from a list of candidate GARCH(p,q) models <cit.>. The lower the AIC or BIC, the better the candidate GARCH model. The AIC value can be calculated from the maximum likelihood estimate of the GARCH model, and it is defined as:

AIC = 2K − 2 ln(L)

In Equation <ref>, K is the number of estimated parameters and L is the maximised likelihood of the candidate GARCH model.

§ METHODS

This research project will implement the proposed statistical analysis approach in the R programming language using RStudio. Initially, a single time series will be simulated from an AR(1) + GARCH(1,1) model with fixed orders and model parameters. The simulated series will be used for the subsequent statistical analysis. The sample Autocorrelation Function (ACF) and the sample Partial Autocorrelation Function (PACF) (Corbyn, 2011) plots provide practical graphical tools for identifying the orders of AR(u) or MA(v) models.
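As an illustration of these identification and diagnostic tools, the following base-R sketch simulates a toy AR(1) series, reads a candidate AR order off the sample PACF using the ±2/√n limit, and applies a McLeod-Li-style check by running a Ljung-Box test on the squared residuals of the fitted model. It is only a schematic example with assumed toy parameters, not the project's actual code (which is available at the repository linked above); in the project, the orders are identified by visual inspection of the ACF and PACF plots.

set.seed(1)
y <- arima.sim(model = list(ar = 0.5), n = 300)    # toy AR(1) series for illustration only
n <- length(y)
limit <- 2 / sqrt(n)                               # approximate critical limit on the sample (P)ACF

pac   <- pacf(y, lag.max = 20, plot = FALSE)$acf
p_hat <- ifelse(any(abs(pac) > limit), max(which(abs(pac) > limit)), 0)  # last significant PACF lag

fit  <- arima(y, order = c(p_hat, 0, 0))           # fit the identified AR(p_hat) model
res2 <- as.numeric(residuals(fit))^2
Box.test(res2, lag = 20, type = "Ljung-Box")       # McLeod-Li-style test for ARCH effects

The cut-off rule used here (the last significant PACF lag) is a rough automatic heuristic; graphical inspection remains the primary identification tool in this project.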
Fit the identified AR(u) model to the simulated data; the squared residuals from this model can then be used to test for the presence of ARCH (or GARCH) effects. The McLeod-Li (McLeod and Li, 1983) test can reveal conditional heteroscedasticity (ARCH) effects by using several lags and plotting the p-values of the statistical tests. The GARCH model is an extension of the ARCH model that incorporates a moving-average component together with the autoregressive part. The model identification techniques for ARMA models can also be applied to the squared residuals. Plotting the ACF and PACF of the squared residuals typically indicates that an ARMA(max(p,q),p) model is suitable for the squared residuals <cit.>. Thus, a GARCH(p,p) model is fitted first, and q can then be estimated by examining the significance of the resulting ARCH coefficient estimates. After this, a model AR(u) + GARCH(p,q) is fitted to the simulated data to estimate the model parameters ϕ, α, β.

Similarly, the technical approach is applied to multiple dependent heteroskedastic time series. It starts with simulating data from various models with shared volatility clustering across time, y_i = AR(u_i) + GARCH(p,q), as below:

y_1 = AR(u_1) + GARCH(p,q)
y_2 = AR(u_2) + GARCH(p,q)
y_3 = AR(u_3) + GARCH(p,q)
⋯⋯

Then the sample ACF and sample PACF are used to identify each AR order u_i. A collection of AR(u_i) models is fitted to each time series separately, and a set of estimated residuals η̂_i returned from each time series {y_i} is obtained. The next step is averaging over all the N series, (1/N)∑_{i=1}^{N} η̂_i, to obtain the average value of all estimated residuals. Note that the estimated residuals are shared components in all N series. The average of the N estimated residuals {η̂_t} is calculated as η̅̂̅_t, and the ACF and PACF of η̅̂̅_t are plotted so that the GARCH orders can be identified. Equivalently, the model identification techniques for ARMA models are used to identify p and max(p,q). When fitting the GARCH(p,q) model to the averaged residuals {η̅̂̅_t}, the estimated coefficients (α_i, β_i) are obtained by maximising the (log-)likelihood function of the GARCH model <cit.>:

L(ω, α, β) = −(n/2) log(2π) − (1/2) ∑_{t=1}^{n} { log(σ̂_{t|t-1}^2) + η̂_t^2 / σ̂_{t|t-1}^2 }

There is no closed-form solution for the maximum likelihood estimators of ω, α and β, but they can be computed numerically. A set of models can thus be built, comprising a shared GARCH model plus multiple independent AR(u) models. Afterwards, this set of models is fitted to the fMRI data and a goodness-of-fit test is performed for the fitted AR + GARCH model.

§.§ Simulate data

The simulation work starts with simple models and fixed parameters. The objective is to simulate multiple time series data sets with given parameters and orders. The simulated data are then used for model fitting and parameter estimation.

The mixed model AR(u) + GARCH(p,q) is proposed. In general, the model is used for modelling multiple dependent series, so it contains multiple AR + GARCH formulas that can be written as:

Y_{1t} = μ + ∑_{i=1}^{u} ϕ_{1i} Y_{1,t-i} + η_{1t}
Y_{2t} = μ + ∑_{i=1}^{u} ϕ_{2i} Y_{2,t-i} + η_{2t}
⋮
Y_{kt} = μ + ∑_{i=1}^{u} ϕ_{ki} Y_{k,t-i} + η_{kt}

The residual terms {η_{kt}} share a common volatility component across the formulas, and it is assumed that the average of the {η_{kt}}, η̅_t, can be fitted by a GARCH(p,q) model. Its formula is thus written as:

η̅_t = σ_{t|t-1} ε_t
σ_{t|t-1}^2 = α_0 + α_1 η̅_{t-1}^2 + ⋯ + α_q η̅_{t-q}^2 + β_1 σ_{t-1|t-2}^2 + ⋯ + β_p σ_{t-p|t-p-1}^2

In the beginning, the orders {u, p, q} are given the fixed values {1, 1, 1}. Therefore, the simple model is AR(1) + GARCH(1,1).
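To make this data-generating process concrete, the following sketch simulates K dependent series driven by a single shared GARCH(1,1) innovation sequence. It is only an illustration of the equations above: the intercept α_0 is an assumed value (the text specifies only α and β), and the remaining settings follow the parameter choices described in the next paragraphs.

set.seed(42)
n  <- 300                        # time points per series
K  <- 20                         # number of dependent series
phi <- 0.05                      # AR(1) coefficient, close to 0.0
alpha0 <- 0.1                    # assumed intercept; not stated in the text
alpha1 <- 0.2; beta1 <- 0.5      # GARCH(1,1) coefficients

# shared GARCH(1,1) innovation sequence eta_t
sigma2 <- numeric(n); eta <- numeric(n)
sigma2[1] <- alpha0 / (1 - alpha1 - beta1)   # unconditional variance as a starting value
eta[1]    <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n) {
  sigma2[t] <- alpha0 + alpha1 * eta[t - 1]^2 + beta1 * sigma2[t - 1]
  eta[t]    <- sqrt(sigma2[t]) * rnorm(1)
}

# each series is an AR(1) driven by the same shared innovation eta_t
Y <- matrix(0, nrow = n, ncol = K)
for (k in 1:K) {
  for (t in 2:n) Y[t, k] <- phi * Y[t - 1, k] + eta[t]
}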
The AR coefficient parameter φ is set to 0.05, close to 0.0. The GARCH coefficient parameters {α, β} are given the fixed values {0.2, 0.5}. In total, data sets of 20, 100 and 400 time series are simulated with this AR(1) + GARCH(1,1) model. These time series are used to assess whether the size of the data set impacts the results of parameter estimation.

Afterwards, another AR(1) + GARCH(1,1) model is created with the same parameters as the previous simulation, except for the parameter φ. The parameter φ of the AR model is set as a random number in the range (0.7, 0.9), close to 1.0. This new model is used for simulating 400 time series, on which parameter estimation is performed. The results are then compared with the previous parameter estimates obtained from the 400 time series with fixed φ. The objective of this simulation work is to assess whether the parameter estimation results behave differently when the parameter φ is a fixed value and when it is a random value.

In the end, 400 time series are simulated with the AR(1) + GARCH(1,1) model, but the first 200 time series have α = 0.2, β = 0.5, and φ as a random number in the range (0.01, 0.05). The remaining 200 time series have the same {α, β} = {0.2, 0.5}; the only difference is that the parameter φ is in the range (0.7, 0.9). These 400 time series together are used for parameter estimation. The objective is to understand the behaviour of parameter estimation when the parameter φ is close to 0.0 and to 1.0. All simulated time series contain 300 time points (N = 300) by default.

§.§ Model identification and parameter estimation

The following estimation work is programmed in the R language using RStudio. When the 20, 100 and 400 time series data sets are simulated, each time series' ACF and PACF values are calculated in a loop function. (1) The ACF and PACF among the first 20 lags are compared to count the number of PACF values that are significantly different from zero (±2/√n is the critical limit) when the ACF decays exponentially fast. The number k of these non-null PACF values is equal to the AR order p, and the estimated model is named AR(p̂). If only the PACF decays fast, the number of ACF values that are significantly different from zero is counted instead; this number k is equal to the MA order q, so the estimated model is named MA(q̂). If both the ACF and PACF decay fast and exponentially, this is evidence that a mixed ARMA model is present, but mixed ARMA models are not considered at this stage of the project. (2) After the estimated AR model is fitted on each series Y_it, a set of residuals {η̂_it} is returned. The average of the {η̂_it} is then computed for the 20, 100 and 400 series. (3) Afterwards, the ACF and PACF of the averaged η̅̂̅_t are calculated for identifying the GARCH orders p and q, using the same identification method as for the AR orders. The estimated GARCH(p,q) model is fitted on the averaged η̅̂̅_t and model diagnostics are performed afterwards. The goal of the diagnostics is to measure the goodness of fit of the estimated GARCH model. (4) The averaged η̅̂̅_t is subtracted from each series Y_it before the ACF and PACF of the series are calculated again. This step is to identify the AR order p again and to estimate the parameter φ_i after each series Y_it is fitted by the estimated AR(p) model.

The previous four analysis steps are also executed on the 400 simulated time series with parameter φ_i in the range (0.7, 0.9). (1) The ACF and PACF of each series Y_it are computed and their behaviour is checked to identify the order of the AR model for Y_it (±2/√N is the critical limit).
(2) Once the AR model is fitted on each series, the returned residuals {η̂_it} from each fitted model are collected. (3) The averaged residual series η̅̂̅_t is computed from the collection of returned residuals {η̂_it}. After this, η̅̂̅_t is removed from each series Y_it, and the ACF and PACF of η̅̂̅_t are calculated for identifying the orders p and q of the GARCH model. The estimated GARCH(p,q) model is fitted on η̅̂̅_t before performing model diagnostics. (4) The ACF and PACF of each series Y_it are calculated to identify the orders u_i of the AR model. The coefficient parameters φ_i are then estimated from the results of fitting the estimated AR(u_i) model.

The methods used for analysing the mixed set of 400 simulated time series with varying parameters differ from the previous methods. The reason is that some drawbacks were found when estimating the parameters; the details of these drawbacks are explained in the following sections. The new analysis method is described as follows: (1) The ACF and PACF of each series Y_it are computed for the first 20 lags and are used for identifying the orders p and q. Then {Y_it} is fitted with the estimated ARMA(p̂,q̂) model to get the model residuals {η̂_it}. The coefficient parameters φ_i are also estimated after model fitting. All estimated φ̂_i are stored in a numerical vector φ̂_1i that is used for the final estimation later. (2) A weight parameter W_i is calculated by using each estimated φ̂_1i, and it can be written as:

w_i = 1/φ̂_1i
W_i = w_i / ∑ w_i

Afterwards, the averaged residuals η̅̂̅_t are calculated by weighting each model's residuals {η̂_it} with W_i:

η̅̂̅_t = ∑_{i=1}^{K} W_i η̂_it

(3) Calculating the ACF and PACF of the averaged residuals η̅̂̅_t is the method used for identifying the orders p and q of the GARCH model. An estimated GARCH model can then be built with p̂ and q̂. The model is fitted on the averaged residuals η̅̂̅_t so that model diagnostics can be performed. (4) The averaged residuals η̅̂̅_t are removed from each series Y_it. The ACF and PACF of each series {Y_it − η̅̂̅_t} are identified to build the estimated ARMA(p̂,q̂). The estimated ARMA models are fitted on each {Y_it − η̅̂̅_t} to get the estimated coefficient parameters φ̂_2. All φ̂_2 are stored in a numerical vector φ̂_2i. (5) The two numerical vectors φ̂_1i and φ̂_2i are averaged to get the final estimation of the parameter φ̂_l.

§.§ Fit model on real-world data

The real-world data set is collected from two real fMRI data sets, subject CC110045 and subject CC110056; subject CC110056 shows higher volatility than subject CC110045. Each subject's data comprise the time courses for different brain areas, called Regions of Interest (ROIs). There are 400 ROIs in each subject, and each ROI time series contains 261 time points. Typically, ROIs beside each other in the brain have high correlations. Both data sets have no missing values and no invalid numerical values (see Figure <ref>); they were verified as valid time series using visualisation tools.

The improved analysis method is applied to the real-world fMRI data. The estimation of the parameters and the model fitting are examined statistically. This includes the estimated AR orders p̂_i, the estimated GARCH orders (p̂,q̂), the estimated AR coefficients φ̂_i, the averaged residuals η̅̂̅_t, and the standardized residuals ε̂_t.
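A compact sketch of the improved estimation steps (1)–(5) described above is given below, assuming Y is an n × K matrix of series with AR order 1. It is only an illustration: the rugarch package is an assumed choice (the text does not state which GARCH fitting routine is used), and the order-identification step is omitted.

fit_ar1 <- function(x) arima(x, order = c(1, 0, 0), include.mean = FALSE)

# step (1): first-pass AR fits, coefficients and residuals
fits1  <- lapply(seq_len(ncol(Y)), function(k) fit_ar1(Y[, k]))
phi1   <- sapply(fits1, function(f) unname(coef(f)["ar1"]))
resid1 <- sapply(fits1, function(f) as.numeric(residuals(f)))

# step (2): weights from the first-pass coefficients, then the weighted average residual series
w <- 1 / phi1
W <- w / sum(w)
eta_bar <- as.numeric(resid1 %*% W)          # shared volatility component

# step (3): fit a GARCH(1,1) model to eta_bar (rugarch is an assumed choice)
library(rugarch)
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                   mean.model     = list(armaOrder = c(0, 0), include.mean = FALSE))
garch_fit <- ugarchfit(spec, data = eta_bar)

# steps (4)-(5): remove eta_bar from every series, refit the AR models, average the two estimates
Y_adj <- Y - matrix(eta_bar, nrow(Y), ncol(Y))
fits2 <- lapply(seq_len(ncol(Y_adj)), function(k) fit_ar1(Y_adj[, k]))
phi2  <- sapply(fits2, function(f) unname(coef(f)["ar1"]))
phi_hat <- (phi1 + phi2) / 2                 # final averaged estimator of each phi_i

Note that the weights are undefined if a first-pass coefficient φ̂_1i is exactly zero; in practice the estimated coefficients are nonzero.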
Especially, to evaluate the new improved method, the coefficients estimator φ̂_i is compared to the old coefficients’ estimator φ̂_i_old that are generated from the old statistical analysis method.§ RESULTS AND DISCUSSIONThe idea of simulating multiple time series in different scenarios is to help us evaluate the performance of the proposed modelling methods and to understand the behaviour of numerous dependent time series data with volatility. In this section, there are some plots and tables to present the results of the modelling.In the modelling, there are four critical parameters in consideration and comparison, η̂_t, σ_t | t-1, ε_t and φ_i. The first parameter η̅̂̅_t is the averaged value of multiple residuals returned from AR models. It is possible to assess the modelling of η̅_t by evaluating the estimation of σ_t | t-1 and returned standardized residuals ε̂_t, because Equation <ref> shows that η̅_t is determined by the product of σ_t | t-1 and ε_t. As our assumptions state, the distribution of ε_t follows a normal distribution. Thus, the distribution of ε̂_t is assessed by a normal quantile-quantile plot (Q-Q plot) <cit.>. Comparing the estimated σ̂_t | t-1 to the actual σ_t | t-1 to assess its accuracy of estimation. This method evaluates the estimation of σ_t | t-1 and examines the fitting performance of GARCH model.Another critical parameter φ̂_i is estimated from the fitted AR model. Since the actual values of φ_i are known, its estimation is assessed by computing the mean squared error (MSE) of φ̂_i. And the scatter plot of φ̂_i and φ_i is a visualisation tool to assess the estimation accuracy. §.§ Simulation study 1 - Different sizes of data setsWhen the parameters are given fixed values and the size of the simulated time series data set increases from 20 to 100, the estimation of σ_t | t-1 can be evaluated by the below plots:The above plots show that the estimator σ̂_t | t-1 are very close to the actual values of σ_t | t-1. And the mean value of the estimator σ̂_t | t-1 is approximately equal to the expected mean of σ_t | t-1. Also, the plots indicate that the values of estimator σ̂_t | t-1 do not change a lot when the size of the data set is increasing. When the size of data set increases to 400, the values of estimator σ̂_t | t-1 are still close to the actual values of σ_t | t-1 without significant variance. Thus, the estimation of σ_t | t-1 is quite good.When the size of data sets increases, the distribution of standard residuals ε̂_t follows a normal distribution. The Q-Q plots of estimator ε̂_t are shown in Figure <ref>: It also means that the assumption of the proposed model parameter ε̂_t is correct and accepted. Equivalently, the value of averaged estimator η̅̂̅_t should be good as expected because it is only determined by values of σ̂_t | t-1 and ε̂_t.Further, the orders of the averaged estimator η̅̂̅_t is consistently identified as (1,1) when the size of data sets increases from 20 to 400. Table <ref> presents that fitting the GARCH(1,1) model on estimator η̅̂̅_t always returns the lowest AIC using MLE. Although GARCH(2,2) is fitted on η̅̂̅_t, the model GARCH(1,1) achieves the lowest AIC. Thus, GARCH(1,1) is considered the best-fit model.After removing η̅̂̅_t from each series, the values of estimator φ̂_l varies differently. 
To compare the actual values with the estimated values, it is necessary to plot them in scatter plots, Figure <ref>. Looking at the first plot of φ_i and the estimator φ̂_l, it is clear that only a few estimated values are larger than the actual value (0.05), and the rest are less than 0.05; the estimator mainly underestimates the parameters φ_i. When the size of the data set increases to 400, all estimated values φ̂_l are less than the actual value (0.05). Referring to these plots, the estimation of φ_i is biased when the size of the data set is large.

§.§ Simulation study 2 - Modelling with dynamic parameters

In this section of study 2, some statistical analysis results are presented to compare the modelling performance when φ_i is fixed and when it is dynamic. To compare the estimator σ̂_t | t-1 under the different scenarios for φ_i, it is necessary to plot the actual and estimated values together. Figure <ref> demonstrates that the values of the two different estimators σ̂_t | t-1 are quite good, because both estimators fit the actual values. However, the figure reveals some estimation errors for large σ̂_t | t-1; the estimators do not perform well when estimating large values.

Similarly, the distribution of the standardized residuals ε_t can be examined by Q-Q plots. The Q-Q plots in Figure <ref> indicate that both estimators ε̂_t follow a normal distribution: the data points follow the central line very closely for both.

Moreover, the orders of the GARCH model are accurately identified as (1,1). The AIC is used to compare GARCH(1,1) with GARCH(2,2) to show that the identification method works correctly. According to the AIC, the best-fit model explains the most significant amount of variation using the fewest possible independent variables. Table <ref> clearly shows that GARCH(1,1) is always better than GARCH(2,2) at fitting the shared volatility clustering η̅̂̅_t, no matter whether φ_i is fixed to 0.05 or dynamic in the range (0.7,0.9). When only the GARCH(1,1) model is considered, it performs better when φ_i is dynamic in the range (0.7,0.9).

In terms of the estimator φ̂_l, the results are presented in a scatter plot. For each φ_i, the scatter plot illustrates the distance between the actual value and the estimated value. The scatter plot in Figure <ref> indicates that the bias of this estimator φ̂_l exists whether the actual value of φ_i is fixed or dynamic. Therefore, the analysis methods require improvement to achieve better estimation.

§.§ Simulation study 3 - Dynamic parameters in different ranges

In this section of study 3, the outcomes of the modelling and parameter estimation illustrate how the improvements to the previous statistical methods impact estimation accuracy.

The first parameter to compare is σ_t | t-1 (σ̂_t | t-1). The data set consists of 200 simulated time series with φ_i close to 0.0 and 200 simulated time series with φ_i close to 1.0. After fitting AR models on each series and computing the averaged residuals η̅̂̅_t with the weights W_i, the estimator σ̂_t | t-1 is still close to σ_t | t-1. Although Figure <ref> displays some bias in the estimator σ̂_t | t-1 at local minimum and maximum values in the left line plot, the boxplot on the right indicates that the mean value of the estimator σ̂_t | t-1 is equal to the mean of σ_t | t-1. Also, the variance of the estimation is approximately equal to 0.00081572. The estimation of the standardized residuals ε_t is as good as expected, since the Q-Q plot of the estimator ε̂_t follows a normal distribution without many differences from the ε̂_t generated by the previous methods.
In other words, the estimator ε̂_t is not impacted by averaging the residuals {η̂_it} with the weights W_i. This supports the assumption that the improvement of the statistical methods reduces the bias of the estimator φ̂_l without influencing the estimator ε̂_t.

The idea of the improved analysis method is to calculate the estimate of φ_i by averaging φ̂_1i and φ̂_2i. For this reason, the new estimator φ̂_l considers the outputs of two estimators, φ̂_1i and φ̂_2i, generated at two different steps during modelling. The first estimator φ̂_1i is generated with bias after fitting the AR model on each series: the top left plot in Figure <ref> reveals that most estimated values are smaller than the actual values. Furthermore, the top right plot shows that most estimated values are larger than the actual values; hence, the second estimator φ̂_2i is also biased. In contrast, averaging φ̂_1i and φ̂_2i reduces the bias of the estimator φ̂_l remarkably, as illustrated by the bottom left plot in Figure <ref>. It is expected to achieve minimum bias of the estimator φ̂_l without significantly influencing the accuracy of the other estimators σ̂_t | t-1 and ε̂_t. Table <ref> summarises the estimation results for φ_i that are achieved when φ_i is fixed and when it is dynamic in the different ranges (0.01,0.05) and (0.7,0.9), with varying sizes of simulated data sets. It demonstrates that the improved method with the proposed weight parameter W_i avoids the apparent bias of the estimator φ̂_l as expected. It is necessary to calculate the estimator φ̂_l twice to give the final estimate of φ_i.

§.§ Practice study – Real-world fMRI data

The following steps are performed to model the real-world fMRI data: (1) Identify the AR orders u̅ of each fMRI series and fit the estimated AR(u̅) model to get the estimator φ̂_1i and the residuals {η̂_it}. (2) Use the estimator φ̂_1i to calculate the weights W_i, then calculate the averaged residuals η̅̂̅_t with the equation η̅̂̅_t = ∑ W_i η̂_it. (3) Identify the GARCH orders (p̂,q̂) on η̅̂̅_t and fit a GARCH(p̂,q̂) model. (4) Without averaging the residuals, fit each residual series η_it with a GARCH(p̂,q̂) model to get the estimator φ̂_i_old (this is the old analysis method used for estimating φ_i). (5) Remove η̅_t from each fMRI series and fit the resulting series with an AR(u̅) model to get the estimator φ̂_2i. (6) Calculate the estimator φ̂_i by averaging φ̂_1i and φ̂_2i. The GARCH model fitting is then assessed statistically, and the parameter estimator φ̂_i (= (φ̂_1i + φ̂_2i) / 2) is compared to φ̂_i_old.

Subject CC110056: The findings of our study on CC110056 are presented in this section. It is necessary to justify the modelling choice that the volatility is shared over the 400 ROIs. Accordingly, the 400 × 400 cross-correlation matrix between the squared residuals {η̂_it} is calculated and plotted. Figure <ref> shows that the distribution of the cross-correlations is not centred on zero; instead, it is centred on a positive value. The plot of the averaged residuals η̅̂̅_t clearly reveals volatility clustering. Also, it fails the McLeod-Li test, so there is evidence of ARCH-type behaviour in the model. The averaged residuals η̅̂̅_t can be fitted with a GARCH(1,1) model. The coefficient α̂_1 is significant, although the p-value for β̂_1 is slightly larger than 0.05 at the 0.05 significance level. Another model, GARCH(2,2), is also fitted on η̅̂̅_t to compare its fit with GARCH(1,1). The AIC indicates that the GARCH(1,1) model is better than GARCH(2,2) in terms of fitting η̅̂̅_t.
Therefore, the equation of the fitted GARCH model is written as below:

η̅_t = σ_{t | t-1} ε_t
σ^2_{t | t-1} = 0.52535 + 0.06399 η̅^2_{t-1} + 0.86132 σ^2_{t-1 | t-2}
ε_t ∼ N(0,1)

The final line shows the Li-Mak test (Li and Mak, 1994) results, indicating that the ARCH behaviour in the series has been successfully modelled out. Equivalently, the Li-Mak test inspects the standardized residuals for autocorrelation in their squares, giving a sign that the GARCH model captures all the autoregressive conditional heteroskedastic patterns present.

The standardized residuals ε̂_t are examined by a Q-Q plot with a confidence band at the 0.05 significance level; it illustrates that the standardized residuals follow a normal distribution, see Figure <ref>. The estimators φ̂_1i, φ̂_2i and φ̂_i_old are examined by scatter plots. Since some fMRI series are identified as AR models with orders larger than 2, only the first two coefficients φ̂_1 and φ̂_2 are discussed in this case. To compare the estimator φ̂_i (generated by the improved analysis method) to the estimator φ̂_i_old (generated by the old analysis method), a possible method is to compare their standard errors returned from the modelling. Recall that the estimator φ̂_i is equal to (φ̂_1i + φ̂_2i)/2; thus, assuming the two estimators are uncorrelated, the standard error of φ̂_i is given by the following equation:

SE(φ̂_i) = √(Var(φ̂_i)) = √((1/4) Var(φ̂_1i + φ̂_2i)) = (1/2) √(Var(φ̂_1i) + Var(φ̂_2i))

The variances of φ̂_1i and φ̂_2i can be calculated by squaring the standard errors of φ̂_1i and φ̂_2i returned by the AR model. Visualisation is the most straightforward way to compare the two estimators for the first coefficient φ_1 and the second coefficient φ_2. The scatter plots in Figure <ref> reveal that the standard error of the estimator φ̂_i is smaller than the standard error of the old estimator φ̂_i_old whenever the coefficients φ̂_i and φ̂_i_old are both nonzero. When the averaged residuals η̅̂̅_t and the conditional variance σ̂_t | t-1 are displayed together (Figures <ref> and <ref>), the GARCH(1,1) model is seen to capture volatility; in particular, it tracks some significant volatility clusters well. Moreover, the ACF plot of the squared η̅̂̅_t (Figure <ref>) shows high autocorrelation, whereas the ACF of the standardized residuals ε̂_t (Figure <ref>) does not display many autocorrelations at low lags.

Subject CC110045: The findings of our study on CC110045 are presented in this section. The plot of the averaged residuals η̅̂̅_t does not show much volatility; instead, it shows volatility clustering only at the last few time points. Also, it passes the McLeod-Li test, so the null hypothesis cannot be rejected and there is no evidence of ARCH-type behaviour in the model, see Figure <ref>. It is still necessary to fit the averaged residuals η̅̂̅_t with a GARCH model to assess its parameters statistically. The orders of the GARCH model are identified as (2,1), and a GARCH(2,1) model is fitted. The coefficient β̂_1 is significant at the 0.05 level, but α̂_1 and α̂_2 have NaN p-values, which means their fitted probabilities are 0 or 1. The p-values suggest that only β̂_1 is a significant coefficient. Nevertheless, it is impossible to build any GARCH or ARCH model with only the coefficient β̂_1; see Table <ref>. Despite this, other models, GARCH(1,1) and GARCH(2,2), are still fitted on η̅̂̅_t to compare their fit with GARCH(2,1). The AIC in Table <ref> indicates that the GARCH(2,1) model is still better than the others in terms of fitting η̅̂̅_t.
When the squared residuals η̅̂̅_t and the conditional variance σ̂_t | t-1 are plotted together (Figures <ref> and <ref>), the plots indicate that the model fit fails a test for normality; the GARCH(2,1) model cannot capture any volatility. The ACF plot of the squared η̅̂̅_t (Figure <ref>) only shows non-null autocorrelations at lag 2. The plot of the standardized residuals ε̂_t (Figure <ref>) illustrates non-null autocorrelations at low lags, so they do not behave as the assumed independent and identically distributed residuals. It is not necessary to compare the estimator φ̂_i (or φ̂_1i, φ̂_2i) to φ̂_i_old, because the residuals η̅̂̅_t do not exhibit much volatility, and fitting a GARCH model is not a suitable modelling method. Instead, most fMRI series are identified as AR models with orders 1, 2 and 3, which implies that subject CC110045 can be modelled with multiple AR(û_i) models.

§ CONCLUSIONS

This paper evaluates a novel approach to handling the infant fMRI time series data CC110045 and CC110056, because infant data have "innovations" (sudden movements) associated with them, which result in jumps in the time series. The proposed approach tries to model these jumps as shared volatility clustering across the time series (brain regions). The shared volatility clustering is modelled by GARCH models configured with the normal distribution, and the superior GARCH model is chosen as the one that achieves the smallest AIC. Each series is modelled by an independent AR model. The estimated model parameters are examined via their standard errors, the MSE and graphical examinations, and the modelling performance is evaluated with the MSE performance measure.

All GARCH models in this project are configured with the normal distribution. The simulation work shows that the shared volatility of multiple dependent AR(1) + GARCH(1,1) time series can be modelled successfully. The Q-Q plot of the standardized residuals shows that our prespecified distribution assumption (∼ N(0,1)) was correct. The graphical examination of the conditional variance shows that the GARCH model tends to track the shared volatility clustering. The parameters of the AR parts are underestimated when the shared volatility clustering is not weighted. After calculating the weighted shared volatility clustering using the AR coefficients and averaging the coefficient estimators, the estimation of the AR parameters becomes more accurate; equivalently, the MSE of the parameter estimators is heavily reduced. The real-world fMRI data set CC110056, which contains many movements, is identified as multiple AR models with orders in the range [1, 5] and a GARCH(1,1) model. The GARCH(1,1) model tries to capture the shared volatility clustering, and the graphical examination of the AR coefficients shows that they are estimated accurately. The real-world fMRI data set CC110045 is also identified as multiple AR models; however, its shared volatility clustering cannot be modelled by a GARCH model. It does not reveal conditional heteroscedasticity according to the statistical tests, and the best chosen model, GARCH(2,1), cannot capture shared volatility clustering.

The results therefore imply that the weighted shared volatility improves the modelling performance of the GARCH models when handling fMRI time series data for infants with many movements across brain regions. Furthermore, averaging the AR coefficient estimators is vital when modelling multiple dependent time series.
{ "authors": [ "Fangyijie Wang", "Michael Salter-Townshend" ], "categories": [ "stat.ME", "eess.SP" ], "primary_category": "stat.ME", "published": "20231026200725", "title": "Novel Models for Multiple Dependent Heteroskedastic Time Series" }
Temporal relation extraction models have thus far been hindered by a number of issues in existing temporal relation-annotated news datasets, including: (1) low inter-annotator agreement due to the lack of specificity of their annotation guidelines in terms of what counts as a temporal relation; (2) the exclusion of long-distance relations within a given document (those spanning across different paragraphs); and (3) the exclusion of events that are not centred on verbs. This paper aims to alleviate these issues by presenting a new annotation scheme that clearly defines the criteria based on which temporal relations should be annotated. Additionally, the scheme includes events even if they are not expressed as verbs (e.g., nominalised events). Furthermore, we propose a method for annotating all temporal relations—including long-distance ones—which automates the process, hence reducing time and manual effort on the part of annotators. The result is a new dataset, the TIMELINE corpus, in which improved inter-annotator agreement was obtained, in comparison with previously reported temporal relation datasets. We report the results of training and evaluating baseline temporal relation extraction models on the new corpus, and compare them with results obtained on the widely used MATRES corpus.

§ INTRODUCTION

Understanding the temporal structure of events in text is essential for a wide range of natural language processing tasks, e.g., question answering, information retrieval and inference <cit.>. Often, however, there is no explicit temporal information associated with most of the events in news articles. For instance, in the sentence “He pointed to the possibilities of new business models, products and ways of working that could have a dynamic impact on living standards.”, there is no temporal expression associated with the event “pointed” that conveys when exactly it occurred.

The extraction of temporal relations, i.e., determining whether an event occurred before, after or at the same time as another event, makes it possible to capture the temporal sequence of events, even in cases where the text does not explicitly mention any temporal information with respect to an event <cit.>.

Extracting temporal relations relies heavily on the annotation scheme adopted, which determines the granularity of the types of extracted temporal relations <cit.>. In existing temporal information-annotated datasets <cit.>, many types of temporal relations are ignored, ill-defined or focussed only on specific types of events. In most datasets, only relations between events in the same or adjacent sentences are tagged <cit.>. Such a limitation is the main reason for losing more precise temporal information for almost half of the events <cit.>. In addition, low agreement between human annotators is a common issue and needs to be improved by making the annotation task more clearly defined <cit.>.

Our work seeks to address these issues by making the following contributions:

* A novel annotation scheme with an unambiguous definition of the types of events and temporal relations of interest.
We also provide a method for automatically identifying and annotating every possible temporal relation in a given document.* A new dataset called TIMELINE[Available at <https://github.com/Alsayyahi/TIMELINE>] consisting of 48 news articles, whereby a higher inter-annotator agreement was obtained in comparison with previously published temporal relation datasets.* An empirical analysis and an ablation study demonstrating the extent to which the TIMELINE dataset supports the development of models for ordering events in news articles.§ RELATED WORK TimeBank is the first temporal information-labelled dataset to provide different types of temporal annotations (i.e., events, time expressions, and temporal relations) in news articles <cit.>. However, there are two main issues with TimeBank: (1) the annotators tagged only temporal relations (referred to as TLINKs) which are considered as important <cit.>, leading to sparse annotations; and (2) the scheme did not specify when two events should be paired up in a relation; as a result, inter-annotator agreement (IAA) was only around 55%. Similar to TimeBank is the TimeEval3 dataset as it is a cleansed version of the former; it was created mainly for the TempEval shared tasks <cit.>. Meanwhile, the RED corpus considered different relations between events (e.g., temporal, coreference, causal and sub-event relations). It is a rich dataset created mainly to support the development of multi-task systems; the IAA for the relations of interest (e.g., “before” relations) is relatively low, i.e., around 41% <cit.>.TimeBank-DENSE is a subset of TimeBank, which was introduced to address the sparsity issue in TimeBank by annotating all possible event pairs, and all event and time expression pairs in each given sentence and its surrounding sentences <cit.>. However, many ill-defined temporal relations were annotated, leading to low IAA. The MATRES corpus tried to solve this issue by adopting a scheme that takes into consideration multiple timelines, i.e., axes, distinguishing between events that actually happened (which belong to the main axis) and those which are only hypothetical (which belong to an axis parallel to the main one), for example. This multi-axis scheme required that each event relation is annotated while considering the relevant axis, thus improving the IAA significantly (84%) <cit.>. However, they focussed only on events centred on verbs and ignored nominalised events.cheng2018inducing proposed automatically annotating temporal relations between events in a sentence and its surrounding sentences, using predefined rules based on the events' time anchors. They annotated temporal relations based on an existing dataset where the time anchors for events are already labelled <cit.>.naik2019tddiscourse suggested a heuristic algorithm for the automatic inference of relations using the corpus developed by reimers2016temporal. Moreover, they made the first attempt to capture long-distance relations by asking experts to manually annotate a subset of unlabelled long-distance relations based on textual cues, external knowledge and narrative ordering. However, state-of-the-art models perform worse on their dataset, TDDiscourse, compared with other datasets. Error analysis shows that the models failed to deal with some of the phenomena in their dataset (e.g., negated/conditional events, event coreference, and the requirement to have access to real-world knowledge). 
§ MOTIVATION This work is motivated by earlier relevant studies in the literature; we refer the reader to Table <ref> for more information about previously proposed temporal relation datasets. We, however, attempted to address the shortcomings of the previously proposed annotation schemes and developed a new dataset that specifies the relative order of events mentioned in a given news article. In designing our annotation scheme, we considered the following questions: * What types of events will be included? * Is it possible to annotate the relations automatically based on the time anchors of events, and subsequently allow for retrieving the temporal order of any two events? We reviewed existing temporal relation annotation schemes <cit.> to answer the first question. We then decided to discard events that cannot be anchored onto a timeline; these include intended, negated events and events involved in conditional constructions. Such events are the source of many ill-defined relations (e.g., vague relations) in existing datasets. A specific temporal relation between two events is labelled vague if there is not enough information about the two events in the text that makes the annotator decide if the first event occurred before or after or at the same time as the second event. Consider the following example sentence: “She planned to attend the conference yesterday.” The temporal relation between the two events (“planned” and “attend”) is vague as we cannot confidently determine the temporal relation between thembased on the context alone, i.e., it is possible that the event centred on “attend” did not occur.According to Ning et al., (2018), events belong to different time axes, hence the distinction between the following axes: (1) the main axis, i.e., a horizontal line where events that actually happened are represented (e.g., the event “planned” in the example sentence); (2) an orthogonal axis (a vertical line that is orthogonal to the main axis) where opinions/intentions are placed (e.g., the event “attend” in the same example); and (3) a parallel axis (a horizontal line parallel to the main axis) where generic and hypothetical events are placed. Hence, we focussed only on all events that belong to the main axis (events in the main storyline). We refer the reader to Table <ref> for examples of events that do not belong to the main axis. We discuss details of how we identified events that need annotation in Section <ref>.Regarding the second question, we concluded that annotating every possible temporal relation in a specific news article is a non-feasible task <cit.>. Importantly, inconsistent temporal relation annotations are to be expected from a human annotator (e.g., a transitive constraint is not always satisfied) and have been noted in the TimeBank corpus <cit.>. Additionally, employing crowdsourcing for the annotation of these relations is expensive. For instance, ning2018multi reported that it costs about 400 USD to annotate temporal relations between events in a given sentence and its surrounding sentences in only 36 news articles. Also, reimers2018event highlighted that considering long-distance relations is required to retrieve correct temporal information for 40 % of events in news articles. Therefore, to address these issues, we decided to automatically generate temporal relations and to directly infer consistent relations within different windows, i.e., relations between events which are separated by 0...n sentences. 
Further details on how temporal relations are generated will be given later in Section 4.2. Please refer to Figure <ref> for an illustration of the relation window. § DATASET CONSTRUCTIONIn this section, we describe the process for collecting the documents included in our corpus. This is followed by a discussion of the details of our proposed annotation scheme. §.§ Document collectionThe LexisNexis library is an online resource that offers access to court cases, commentaries, handbooks and news articles, amongst others[<https://www.lexisnexis.com/uk/legal/>]. The library was used to retrieve a total of 48 news articles published in a UK newspaper: The Times (London). Table <ref> presents the queries that we used to retrieve the articles. §.§ Annotation Scheme Our scheme consists of multiple layers of annotation which are described below. Event annotation. Events in our corpus were annotated according to the TimeML guidelines <cit.>, which define an event as a situation that occurs. Events are centred on one or more trigger words and can be expressed in different ways. This includes verbs, e.g., “said”, or phrasal verbs, e.g., “woke up”, as well as nominal events, e.g., “World Cup” or “demonstration”. We included all events that can be anchored onto a timeline as long as they belong to the main axis. However, as discussed in Section <ref>, we excluded specific types of events: intended, negated, static, generic, and hypothetical events. In the Appendix, we provide a complete list of the broad types of events that we excluded, alongside some examples.Time anchor annotation. Drawing inspiration from previous studies, we adopted the use of the concept of narrative container (NC) in order to increase the accuracy of temporal relation annotation <cit.>. NC is the default interval surrounding the document creation time (DCT) of an article, and provides an estimate of when a given event with no explicit time anchor, happened. It is affected by different variables related to text style and genre; for example, the NC value for newspapers is 24 hours, while that for weekly and monthly publications is a week and a month, respectively.Since our corpus consists of newspaper articles published on a daily basis, we can set the value of the narrative container to 24 hours—this was made clear to our annotators. Furthermore, annotators were provided with the DCT for every news article. Annotators were advised to use external and background knowledge if it helps them in providing more accurate time anchors. Where an event occurred over an interval, annotators were asked to provide the time anchor based on the start of the interval. Earlier work which attempted to automatically generate temporal relations based on time anchors of events <cit.> were hindered by their reliance on the EventTime corpus <cit.>. In this corpus, some events were given under-specified dates (e.g., “after 1990-XX-XX”) which made it difficult to form temporal relation annotations involving such events. In contrast, in our annotation scheme, events are always given explicit or implicit dates. Specifically, annotators were asked to enter the time anchor of the form YYYY-MM-DD for each event, by choosing one of six possible options based on the type of temporal information associated with the event. For instance, if the temporal information associated with the event is mentioned explicitly in the text (e.g., “June 14, 2022”), the annotator specifies “2022-06-14”. 
If the temporal information associated with the event is mentioned in the text in a vague manner (e.g., “August”), the annotator specifies “2022-08-XX”. We refer the reader to the annotation guidelines in the Appendix for a list of all possible options with examples.

Temporal relation annotation. Before the automatic generation of relation annotations, the annotators were asked to answer a set of questions for each annotated event. These questions, for example, help determine whether two events happened at the same time (and thus should be given the “equal” label), and help reduce the number of “vague” relations by prompting the annotator to consider details within the context of events. We refer the reader to the annotation guidelines in the Appendix for a list of all the questions. Then, we developed a method for generating temporal relations that: (1) identifies every possible pair of events in a given document, and (2) generates consistent temporal relation labels based on the annotation given in the previous steps. The method handles every possible case to generate one of the following labels for each relation: “before”, “after”, “equal” and “vague”. For further details, we refer the reader to the Appendix, which shows the algorithm that generates a label for every possible relation. Figure <ref> shows the distribution of the generated temporal relation labels. As illustrated in the figure, most of the relations are “vague” due to the inherent ambiguity of temporal information in natural language text.

§ CORPUS RELIABILITY

Three annotators contributed to the annotation of our corpus: the first one (the first author of this paper) annotated all the articles, whilst the second and third annotators annotated 31% of the articles. Table <ref> presents the average inter-annotator agreement between the annotators at the level of events (calculated using F1-score) and temporal relations (calculated using micro-averaged F1-score and Cohen’s Kappa). It is worth noting that the agreement over temporal relation annotations is based on events that annotators agreed on. The contingency matrix in Table <ref> shows the agreement and disagreement between the first and second annotators with respect to temporal relation annotation. One can observe that the agreement between the annotators is high for all temporal relation types, which implies that the annotation scheme led to consistent annotations. The second and third annotators are PhD Computer Science students who have received training on the proposed annotation scheme. Upon completion of the annotation tasks, they were compensated at an hourly rate of £15.

Three subsets were defined, containing randomly selected documents: training (70%), development (10%) and test (20%). We refer the reader to Table <ref> for details on the number of documents and event pairs (annotated with temporal relations) in each subset.

§ BASELINE METHODS

In order to assess the extent to which our proposed TIMELINE corpus supports the development of temporal relation extraction approaches, we sought to train and evaluate two baseline models for temporal relation extraction. Specifically, we employed two temporal relation classification models proposed by han2019deep as baseline methods. Both models are based on bidirectional long short-term memory networks (BiLSTMs), but one of them re-optimises the network to adjust for global properties, i.e., symmetry and transitivity constraints.
These models were selected based on their highly competitive performance and the availability of their source code. We note that prior to training and evaluating each of the said models, all temporal relations labelled as “vague” in the TIMELINE corpus were discarded for the following reasons: (1) this type serves as a catch-all category for any relations which are ambiguously expressed in text and yet is over-represented (accounting for 60.81% of the annotated relations); and (2) more importantly, the performance for the “vague” relation type was not considered in previously reported work—they treated this label similarly to how they handled events with no temporal relations between them <cit.>.

In preparation for training the models, we generated BERT embeddings <cit.> and part-of-speech (POS) embeddings for every token in the sentences containing events that are involved in a temporal relation, as both models take these as input representations. For the training process, we adopted the hyperparameter values used by han2019deep.

§ EVALUATION RESULTS AND ABLATION STUDY

Tables <ref> and <ref> show the performance of the baseline models on the MATRES and TIMELINE datasets, respectively. han2019deep reported slightly different performance obtained by both models on MATRES. They discussed in their paper that they used three random seeds; however, since the values of these seeds were not made available, we have been unable to replicate the same results.

As one can observe in Table <ref>, employing the second baseline model, which adjusts for global constraints, leads to a performance improvement of 1.54 percentage points. This is slightly higher than the improvement (0.26 percentage points) obtained by the second model on the MATRES corpus. This is likely due to the higher number of globally consistent temporal relations in TIMELINE. In the subsections below, we discuss the main differences between the two corpora, MATRES and TIMELINE, and the impact of each key difference on the performance of the baseline models.

§.§ Inclusion of non-verb events

News articles contain different events which, realistically, are not limited to events centred on verbs. Thus, we investigated the impact of including non-verb events on model performance, particularly on the F1-scores for the “before”, “after” and “equal” temporal relations. Specifically, we sought to assess whether a model has learned relations involving non-verb events to the same extent that it has learned relations involving verb-centred events. To this end, we performed an ablation study by dividing the test set into two splits: (1) Split 1A contains samples with relations between verb events; and (2) Split 1B contains samples with relations where non-verb events are involved. We then evaluated the baseline models (trained and validated on the entire training/development sets) on each of the two splits. Unsurprisingly, we found that the performance on the first split (Table <ref>) is higher than the performance on the second split (Table <ref>). This indicates that the models are able to learn relations involving verb-centred events better than the relations with non-verb events during the training. This explains the higher performance of the models on the MATRES corpus given that it contains only verb-centred events. Moreover, one factor that may contribute to the reduced performance on the split that contains relations with non-verb events is that 70% of the relations in the training and development splits involve only verb-centred events.
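The partitioning used in this ablation can be illustrated with a small R sketch. The data frame below is entirely hypothetical (the corpus does not necessarily expose these column names), and the POS tags of the event triggers are assumed to be available, e.g., from the same tagger used to produce the POS embeddings mentioned above.

# hypothetical illustration of the Split 1A / Split 1B partition; column names are invented
test_pairs <- data.frame(
  event1_pos = c("VERB", "NOUN", "VERB", "NOUN"),
  event2_pos = c("VERB", "VERB", "NOUN", "NOUN"),
  gold       = c("before", "after", "equal", "before"),
  pred       = c("before", "after", "after", "before")
)

verb_only <- test_pairs$event1_pos == "VERB" & test_pairs$event2_pos == "VERB"
split_1a  <- test_pairs[verb_only, ]    # relations between verb-centred events only
split_1b  <- test_pairs[!verb_only, ]   # relations involving at least one non-verb event

# with exactly one gold and one predicted label per pair, micro-averaged F1 reduces to accuracy
micro_f1 <- function(d) mean(d$gold == d$pred)
c(split_1a = micro_f1(split_1a), split_1b = micro_f1(split_1b))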
§.§ Increasing the relation window

Retrieving the temporal relation between any two events (regardless of how far they are from each other in a given news article) is an essential requirement for different tasks and domains. The following three cases are a good illustration of the importance of considering such types of relations.

Case 1: In question answering (QA) tasks, to answer a specific time-based question, it is often necessary to retrieve the temporal relationship between an event in one of the first sentences and an event in the last few sentences of a given news article. It is impossible to retrieve this kind of long-distance relation in the previously published temporal information-annotated datasets since it is not tagged or cannot be retrieved using temporal reasoning (e.g., using transitive inference).

Case 2: In the medical domain, to extract useful information (e.g., a timeline of medical events) from clinical notes and reports, it is important to identify temporal relations between events that are not in subsequent sentences.

Case 3: Extracting a timeline of events from news articles allows decision-makers to conduct fine-grained analysis of these events; it is possible that events of interest are not in adjacent sentences.

We set out to investigate the impact of increasing the relation window on model performance, particularly on the F1-scores for the “before”, “after” and “equal” temporal relations. Specifically, we sought to determine whether the models learned long-distance relations to the same extent as short-distance temporal relations. To this end, we conducted an ablation study based on the test set subdivided into two splits: (1) Split 2A contains examples with short-distance relations, i.e., relation window <= 4, and (2) Split 2B contains examples with long-distance relations, i.e., relation window > 4. We set the threshold to 4 considering that the average relation window in our corpus is 9. If we split the set of relations in the corpus in this way, the short-distance relations involve events with 0 to 4 sentences between them; the long-distance ones involve events with more than 4 sentences between them. The trained baseline models were then evaluated on each of the two splits. Interestingly, the performance on the second split (Table <ref>) is higher than on the first split (Table <ref>). This demonstrates that the models have learned long-distance relations better than short-distance temporal relations during the training process. A contributing factor to this is the fact that Split 2A has a slightly larger percentage of non-verb events, which we now know are more difficult for the models to learn (26.11% of the relations), compared with Split 2B (21.51% of the relations).

§.§ Extracting more relations

In an earlier section, we showed that models find it more challenging to learn relations involving non-verb events, compared with verb-centred events. Despite the lower performance of the two baseline models on our proposed dataset, the models are able to extract more temporal relations than in MATRES. As shown in Table <ref>, in MATRES, only 8.52% of the possible relations were annotated as non-vague; meanwhile, 39.19% of the possible relations were labelled as non-vague in TIMELINE. In Table <ref>, we show that the second baseline model extracted only 16.39% of the possible relations in the test set of the MATRES dataset.
In our proposed dataset, TIMELINE, the model was able to extract 23.32% of the possible relations in the test set.

§ REASONING BEHIND THIS ANNOTATION IN THE LLMS ERA

We believe that despite the advent of large language models (LLMs), this kind of fine-grained annotation is still necessary to support the development of supervised models. We argue that the temporal relation extraction performance of an LLM such as ChatGPT, for example, is not comparable to that of supervised models. Firstly, yuan2023zero investigated ChatGPT's capability in zero-shot temporal relation extraction and showed that ChatGPT's performance is lower by up to 30% in terms of F1-score compared to supervised methods. Furthermore, we investigated the extent to which ChatGPT can extract temporal relations by prompting it with the zero-shot prompt proposed by yuan2023zero to identify temporal relations between events in the TIMELINE test set. Overall, ChatGPT obtained precision, recall and F1-scores of 31.11%, 35.67% and 33.24%, respectively. These are substantially lower than those of the second baseline method, which obtained 69.05% for precision, 69.05% for recall and 69.05% for F1-score.

§ POTENTIAL APPLICATIONS

Our annotation scheme and dataset hold promise for various practical uses. Extracting temporal relations from news articles can support information extraction applications such as automatic timeline extraction and question answering (QA). Moreover, considering that the focus of the dataset is on events on the main axis (i.e., events in the main storyline), this work can potentially support narrative extraction applications such as the analysis of events related to financial markets and event monitoring, e.g., in the context of disaster management <cit.>.

§ CONCLUSION

In this paper, we present a new corpus, TIMELINE, which was annotated following a novel annotation scheme whereby non-verb-centred events are included, as well as long-distance temporal relations between events. The corpus was used in training and evaluating two baseline temporal relation extraction models. Based on our evaluation results, we assessed the impact of increasing the relation window and including non-verb-centred events on model performance. In addition, we demonstrated how our annotation scheme can support the development of models that can extract more relations in comparison with earlier datasets. In the future, we aim to increase the size of the dataset and employ it in a timeline generation task.

§ LIMITATIONS

This temporal relation research focussed on a specific type of publication, namely, newspaper articles published on a daily basis. As a result, we did not consider other types of publications which are published weekly or monthly. The primary motivation for this choice is to use the narrative container concept <cit.>, which has helped significantly to increase our annotation accuracy. Also, as we mentioned previously, we considered only events that can be anchored onto a timeline and that belong to the main axis (storyline).

§ APPENDIX

§.§ Annotation Guidelines

Step 1: Event annotation. All events according to the TimeML guidelines <cit.> will be tagged, except for the following:

* Cancelled or negated events will not be tagged; for example, "He failed to find buyers", "They don't want to play with us", or "She cancelled the meeting".
Moreover, uncertain events will not be annotated, e.g., "We may go."

* Inspired by the TimeML guidelines, the following events will not be tagged: (1) generics (abstract and non-specific events), e.g., "Fruit contains water.", "Lions hunt Zebra."; (2) static events, e.g., "New York is on the east coast."

* Hypothetical/conditioned events will not be annotated. For example, "If I'm elected as president, I will cut income tax for everyone."

* Inspired by the annotation scheme followed by <cit.>, adjectives will not be tagged, since they express a property or attribute of an entity and anchoring them in time is not simple.

* Events after modal verbs will not be tagged. For example, "We have to leave.", or "You must be sending the email by the end of the day."

* Intended events will not be tagged. They express intentions or things that are meant to happen or occur, and are signified by words such as "plan", "aim", "intend" and "hope".

Step 2: Time anchor annotation. The annotators were asked to enter the time anchor for each event by choosing one out of six options:

* Option 1: If the text explicitly mentions the time of the event (e.g., "Feb 1, 2021"), the annotator should enter that date as the time anchor for the event. If the text does not mention the exact date but uses temporal expressions that are relative to the document creation time (DCT), e.g., "today", "last Friday", the annotator should use the calendar to enter the date in relation to the DCT.

* Option 2: If the text implicitly mentions the event's time (e.g., "last August"), the annotator should enter the date as a fuzzy date (e.g., "2020-08-XX"). Alternatively, if the text mentions that the event happened last year, the annotator should enter, e.g., "2020-XX-XX".

* Option 3: If the event has no temporal information, but it is clear from the text that the event happened around the document creation time (DCT), the date should be set to the default narrative container (NC) value for newspaper publications, which is one day before the DCT.

* Option 4: If the event happens in the future, the default date will be one day after the DCT. Alternatively, if it is mentioned in the text that the event will happen sometime relative to the DCT, e.g., "next Friday", the annotator can enter that day's date.

* Option 5: If the event happened in the past but the time is not mentioned in the text explicitly, the annotators can use any background or external knowledge to provide an accurate time anchor.

* Option 6: If the annotator understands from the text that the event did not happen around the document creation time, and the text does not provide any hints on when the event happened, the date should be entered as "XXXX-XX-XX".

Figure <ref> shows how the events are represented in a timeline.

Step 3: Answer a set of questions for each annotated event.

* Question 1: To annotate the relation between events that are the same (event coreference) with a dedicated coreference label. Q1: Does the event refer to another event in the document? (Q1.a: Yes/No, Q1.b: event ID).

* Question 2: To annotate temporal relations with an EQUAL label. Q2: Did the event start or happen at the same time as another event in the same sentence? (Q2.a: Yes/No, Q2.b: event ID).

* Questions 3, 4 and 5: To increase informativeness, i.e., to increase the number of non-vague relations. Q3: Did the event happen on the same day as another event in the same sentence? If so, did the event happen at a different time compared with the other event?
(Q3.a: Yes/No, Q3.b: before/after, Q3.c: event ID)

Q4: Does this event have an unknown date (Option 6)? If so, did it happen before or after another event in the same sentence? (Q4.a: Yes/No, Q4.b: before/after, Q4.c: event ID)

Q5: Were this event and another event in the same sentence given the same implicit time? If so, did this event happen before/after the other one? (Q5.a: Yes/No, Q5.b: before/after, Q5.c: event ID)

* Question 6: To annotate the relation between events that happened around the DCT but were given different time anchors, as VAGUE. Q6: Did the event happen around the document creation time (e.g., within 24 hours)? (Yes/No) For instance, consider the two events in the following sentences. Sentence 1: "The pound gained almost 6 per cent against the dollar in July, approaching $1.32 at one point yesterday before settling in evening trading at $1.31, up 0.04 per cent for the day and 5.7 per cent for the month." The event "approaching" happened "yesterday"; the temporal information is mentioned explicitly in the text for this event. Sentence 2: "Bank of America Merrill Lynch strategists said the rest of 2020 could still see weakness for the pound as the period of August to December historically contains four negative months for sterling." The event "said" happened possibly one day before the publication date (based on what the reader of the news article could infer according to the narrative container concept). However, it is not clear from the text which of the two events, "approaching" or "said", happened first. Therefore, if the annotator answered the question with "Yes" for both events, our temporal relation generator will assign the relation label VAGUE to these events to ensure accuracy.

* Question 7: To annotate a relation between events that are happening in the future but were given different time anchors, as a VAGUE relation. Q7: Is the event happening in the future? (Yes/No) For instance, sometimes it is mentioned in the text that some event (Event 1) will happen in the future without any time anchor, while for another event (Event 2) the text says that it will occur at a specific time (e.g., "next month"). However, it might be unclear from the text which event will happen first. Therefore, if the annotator answered the question with "Yes" for both events, our temporal relation generator will assign the relation VAGUE to these events.

Step 4: Temporal relation annotation. The temporal relations are annotated automatically based on Algorithm <ref> (a simplified sketch of this procedure is given at the end of this appendix).

Algorithm <ref>: Temporal Relation Generation Method

§.§ Special Cases

Below are two cases encountered during the annotation process that we needed to make the annotators aware of.

* Event Coreference: When we have more than two events referring to the same thing, the relations that involve these events have to be annotated manually after Step 4.

* Subsequent events only: In Q1, the annotators should verify that the event ID is associated with a subsequent event mentioned in the same or any following sentence. In Q2-Q5, the annotators should ensure that the event ID is associated with a subsequent event mentioned in the same sentence.
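As an illustration of the automatic relation generation in Step 4, the sketch below derives a label from two annotated time anchors together with the answers to Questions 6 and 7. It is a simplified reading of the guidelines above, not the actual Algorithm <ref>; the field names and the handling of fuzzy dates are assumptions.

```python
from datetime import date

def parse_anchor(anchor):
    """Return a date for a fully specified anchor (YYYY-MM-DD); None for fuzzy/unknown ones."""
    if "X" in anchor:
        return None
    y, m, d = map(int, anchor.split("-"))
    return date(y, m, d)

def generate_relation(e1, e2):
    """e1/e2 are assumed to be dicts such as
    {'anchor': '2020-08-03', 'around_dct': False, 'future': False}."""
    # Q6 / Q7: two events near the DCT, or two future events, that received
    # different default anchors cannot be ordered from the text -> VAGUE.
    if (e1["around_dct"] and e2["around_dct"]) or (e1["future"] and e2["future"]):
        if e1["anchor"] != e2["anchor"]:
            return "VAGUE"
    t1, t2 = parse_anchor(e1["anchor"]), parse_anchor(e2["anchor"])
    if t1 is None or t2 is None:
        return "VAGUE"      # unknown or fuzzy date and no manual ordering supplied
    if t1 == t2:
        return "EQUAL"      # in the guidelines, same-day ordering is refined via Q3
    return "BEFORE" if t1 < t2 else "AFTER"
```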
http://arxiv.org/abs/2310.17802v1
{ "authors": [ "Sarah Alsayyahi", "Riza Batista-Navarro" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231026222338", "title": "TIMELINE: Exhaustive Annotation of Temporal Relations Supporting the Automatic Ordering of Events in News Articles" }
Multiple Imputation Method for High-Dimensional Neuroimaging Data

Tong Lu, Chixiang Chen, Hsin-Hsiung Huang, Peter Kochunov, Elliot Hong, Shuo Chen

Missingness is a common issue for neuroimaging data, and neglecting it in downstream statistical analysis can introduce bias and lead to misguided inferential conclusions. It is therefore crucial to apply appropriate statistical methods to address this issue. While multiple imputation is a popular technique for handling missing data, its application to neuroimaging data is hindered by high dimensionality and complex dependence structures of multivariate neuroimaging variables. To tackle this challenge, we propose a novel approach, named High-dimensional Multiple Imputation (HIMA), based on Bayesian models. HIMA develops a new computational strategy for sampling large covariance matrices based on a robustly estimated posterior mode, which drastically enhances computational efficiency and numerical stability. To assess the effectiveness of HIMA, we conducted extensive simulation studies and real-data analysis using neuroimaging data from a schizophrenia study. HIMA showcases a computational efficiency improvement of over 2000 times when compared to traditional approaches, while also producing imputed datasets with improved precision and stability.

§ INTRODUCTION

Neuroimaging data are fundamental for studying the brain's structure and function, providing valuable insights into various neurological disorders and cognitive processes. Missing data, however, occur frequently in brain imaging research due to limited image acquisition and susceptibility artifacts, causing signal loss and spatial distortion in the images <cit.>. An example of the spatial distribution of missing voxels in a magnetic resonance imaging (MRI) dataset is shown in <ref>. Despite advancements in statistical techniques for processing imaging data, the proper handling of neuroimaging missingness remains inadequately studied, which impedes the accurate analysis and interpretation of findings. For example, missing data can lead to biased estimation, reduce statistical power, and limit the generalizability of results <cit.>. To address these challenges, we are motivated to propose a practical yet robust multivariate multiple imputation technique specifically designed for high-dimensional neuroimaging data.

Nowadays, commonly used strategies to handle incomplete data include (i) complete case analysis, (ii) single imputation, and (iii) multiple imputation (MI). In certain scenarios, particularly when the missingness is minimal (e.g., less than 5%) and occurs completely at random, complete case analysis can be considered the best approach to prevent analysis bias.
However, neuroimaging data often exhibit complex missing patterns that do not conform to such straightforward criteria.Additionally, simply omitting these voxels may risk excluding brain regions of particular research interest and may be costly to imaging spatial coverage, especially along cortical boundaries.This, in turn, could raise the risk of Type II errors.Improving upon (i), simple imputation involves replacing individual missing value with a single value, often using methods like mean or mode substitution. While simple imputation may offer quick solutions in certain situations, it frequently introduces bias into the data (e.g., artificial reductions in variability) and results in overly precise results without accounting for any uncertainty.<cit.> and <cit.> addressed this by developing MI techniques that can incorporate uncertainty about the unknown missing values. MI replaces each missing value with a set of plausible values imputed based on two factors: (a) the observed values for a given subject; (b) the relationships observed in the data for other subjects. Statistical literature on MI techniques has surged <cit.>. Applying MI to neuroimaging data, however, is limited, primarily due to computational tractability issues. Take Multivariate Imputation by Chained Equations (MICE), one of the most commonly used MI toolboxes, as an example <cit.>. Given approximate normal data following 𝒩(μ,Σ), MICE specifies an inverse Wishart distribution as a conjugate prior distribution for the covariance Σ. Sampling a large Σ_p× p matrix (e.g., p=1000, as is common in neuroimaging data) from the corresponding posterior inverse Wishart distributioncan become computationally unstable and intractable. This may lead to inaccuracies in subsequent data sampling, which ultimately impacts the overall precision of imputation results. Additionally, sampling Σ_p× pinvolves matrix inversion, which requires a computational cost of 𝒪(p^3). This cubic time complexity will be further compounded by the number of sampling iterations and the total number of imputed datasets needed. In <ref>, we have shown the time required to impute missingness in a real MRI dataset from a schizophrenia study using the MICE package. Notably, computational time increases exponentially as the number of voxels grows. To impute missingness in a typical brain region with hundreds of variables, it can take thousands of hours to run MICE, which is not so computationally feasible.Computational complexity and tractability can pose significant bottlenecks for handling high-dimensional imaging data, as indicated by <ref>. To address this challenge, we propose a new High-Dimensional Multiple Imputation (HIMA) method, designed specifically for high-dimensional neuroimaging data. HIMA adopts the commonly used Bayesian framework through Markov chain Monte Carlo (MCMC):it first implements the same imputation step as in classical MI techniques to impute missing entries by considering a joint multivariate normal model; next, it modifies the posterior step by updating the normal covariance matrix with a robustly estimated posterior mode. The posterior mode represents the most probable draws from the posterior distribution of covariance, which is essentially identical to the maximum likelihood estimates of the likelihood functions <cit.>. We propose a posterior mode estimator that is suitable for situations where n≪ p and have establishedasymptotic properties of it. This article presents three main contributions. 
Firstly, we introduce a novel MI technique called HIMA, specifically tailored for high-dimensional neuroimaging data. HIMA substantially alleviates the computational burden from 𝒪(p^3) to 𝒪(p) per MCMC iteration.Secondly, extensive simulation studies demonstrate reduced bias and dispersion in the imputed data generated by HIMA. The imputation expands brain map coverage, which in turn improves the interpretation of imaging results. Lastly, we have developed a user-friendly package for implementing HIMA, making it easily accessible and convenient for researchers to apply in their neuroimaging studies.The rest of this paper is structured as follows. In Section 2, we introduce the HIMA method, posterior mode estimation, and imputation algorithms. In Section 3, we assess the performance of HIMA using both semi-synthetic and real MRI imaging datasets and comparing it to frequently used imputation methods. We conclude with a discussion in Section 4.§ METHODS§.§ Background Our proposed imputation method HIMA is designed for voxel-level neuroimaging data, such as voxel-level hemodynamic response for fMRI[Functional Magnetic Resonance Imaging], voxel-level fractional anisotropy for DTI[Diffusion Tensor Imaging], and ALFF[Amplitude of low-frequency fluctuation] for rs-fMRI[Resting state fMRI]. HIMA can be easily adaptable to region-level data as well, offering versatility across various neuroimaging applications.Without loss of generality, we let y_i= {y_ij}_j ={1,…, p} denote the brain signals of interest for the i-th subject with i ={1,…, n}, where j represents the j-th voxel. Typically, brain signals exhibit approximate normal distribution characteristics <cit.>; we thus consider a joint multivariate normal (MVN) model for y_i.Specifically,we express the observed and missing part of y_i by y_i^obs and y_i^mis, and assume thatthey follow:[ y_i^obs; y_i^mis ] ∼𝒩[[ μ_obs; μ_mis ], ([ Σ_obs,obs Σ_obs,mis; Σ_mis,obs Σ_mis,mis ]) ], where[ μ_obs; μ_mis ] is the partitioned mean vector and the four sub-covariance-matrices are partitioned from covariance Σ. Σ is crucial for jointly leveraging different levels of associations, including voxel-wise and subject-wise associations, during the imputation process.Under the assumption of missing at random (commonly adopted for imputing neuroimaging data <cit.>),our goal is to impute Y_mis={y_i^mis}_i=1^n based on the observed data Y_obs={y_i^obs}_i=1^nwhile preserving the uncertainty of Y_mis.Bayesian models are widely used for handling multivariate MI applications <cit.>. In Bayesian models, parameters (e.g., μ and Σ) and Y_mis can be iteratively updated until convergence, where the imputed datasets can be sampled from converged posterior distribution. Nonetheless, computational challenges arise in the classic Bayesian-based MI methods, especially for data with high dimensions <cit.>). For example,when imputing a missing dataset of 500 variables using a single node, MICE, a classical MI based method, may take up to 2300 hours (see Figure <ref>).§.§ HIMA method To address the aforementioned challenges, we propose HIMA, a relaxed multivariate imputation approach designed for handling missingness in data with high dimension (n ≪ p). HIMA follows established data augmentation algorithms <cit.> andMCMC procedures for multivariate data imputation.Specifically, HIMA iteratively imputes Y^[t]_mis (t=1, ⋯, T is the iteration index), and updates μ^[t] and Σ^[t] until convergence. The detailed iteration steps are provided below:1. I-step (Impute Y^[t]_mis). 
Given parameters μ^[t-1] and Σ^[t-1] at the (t-1)-th iteration, we generate MVN missing values Y^[t]_mis by Y^[t]_mis ∼ 𝒩_Y_mis|Y_obs(μ^[t-1]_mis|obs, Σ^[t-1]_mis|obs), where

μ_mis|obs = μ_mis + Σ_mis,obs Σ_obs,obs^-1 (Y_obs - μ_obs),
Σ_mis|obs = Σ_mis,mis - Σ_mis,obs Σ_obs,obs^-1 Σ_obs,mis.

2. P-step (Update Σ^[t] and μ^[t]). After imputing Y^[t]_mis and augmenting it with Y_obs, we proceed to update the parameters Σ^[t] and μ^[t] using a standard MCMC procedure. We introduce a relaxed parameter sampling strategy in the MCMC procedure, outlined as follows:

Update Σ^[t]. The traditional approach for sampling Σ relies on a posterior distribution with an inverse Wishart conjugate prior distribution: Σ ∼ W^-1(Ψ,ν), where Ψ is a positive definite scale matrix and ν is the degrees of freedom. Accordingly, the posterior distribution becomes W^-1(Ψ+nS, ν+n), where S is the sample covariance. In practice, sampling Σ^[t] with a high-dimensional p is challenging due to computational intractability and instability <cit.>. A sound remedy for sampling a large posterior Σ is to estimate its Maximum a Posteriori (MAP) value, the value that is most likely to be sampled (i.e., the mode) <cit.>. Specifically, given a complete sample augmented by the previously imputed Y^[t]_mis, we estimate the posterior mode by

Σ^[t] = argmax_Σ p(Σ | Y_obs, Y^[t]_mis) = argmax_Σ W^-1(Ψ+nS, ν+n)
= argmax_Σ [ |Ψ+nS|^((ν+n)/2) / ( 2^((ν+n)p/2) Γ_p((ν+n)/2) ) × |Σ|^(-(ν+n+p+1)/2) exp( -(1/2) tr( (Ψ+nS) Σ^-1 ) ) ].

It is generally challenging to solve <ref> directly. We develop a new computational approach to implement this optimization step, introduced in Section <ref>.

Update μ^[t]. Given the augmented data [Y_obs, Y^[t]_mis] and the updated Σ^[t], we generate the posterior mean μ^[t] by μ^[t] | Y_obs, Y^[t]_mis, Σ^[t] ∼ 𝒩_μ(Y̅, Σ^[t]/n), where Y̅ is the mean vector of the augmented data [Y_obs, Y^[t]_mis]. The posterior mean is derived based on a non-informative prior for μ (uniform over the p-dimensional real space) <cit.>.

Remarks on updating Σ^[t]. In Bayesian analysis, MAP estimation is designed to maximize a conditional probability distribution. Sampling the mode of a posterior distribution has been carefully studied in the statistical literature, with notable examples including <cit.>. As pointed out in <cit.>, maximizing p(Σ | Y) is nearly identical to obtaining the ML estimates by maximizing the normal likelihood L(Σ | Y) = ∏_i=1^n p(y_i | Σ). Additionally, updating Σ^mode eliminates the need to compute the inverse of a large covariance matrix Σ_p×p. This relaxation reduces the computational burden from 𝒪(Cp^3) to 𝒪(Cp), where C depends on factors including the number of observations, the number of MCMC iterations, and the number of imputed datasets. Updating Σ^mode also avoids sampling large matrices at each individual iteration, which enhances computational tractability and ultimately the imputation results. Lastly, the posterior mode can be estimated by empirical Bayesian methods. In the following section, we introduce a tailored approach particularly designed for data with n ≪ p.

§.§.§ Estimating the posterior mode of Σ

In this section, we present a new computational strategy to estimate the posterior mode of Σ following <ref> under the scenario of n ≪ p. Following many prior works on posterior mode estimation of Σ, we adopt an unorthodox representation of the inverse Wishart distribution, denoted W^-1(ζ,λ), to facilitate easier demonstration <cit.>. Here, ζ represents the mean and λ represents a measure of precision, related to the standard degrees of freedom ν by ν = λ+p+1.
With these notations in place, the prior density can be written as

p(ζ) = |λz|^((λ+p+1)/2) / ( 2^(p(λ+p+1)/2) π^(p(p-1)/4) ∏_j=0^p-1 Γ((λ+j)/2 + 1) ) × |ζ|^(-(λ+2p+2)/2) exp{ -(1/2) tr(λz ζ^-1) },

where λz = Ψ > 0 (Ψ is the scale matrix in the traditional notation for the inverse Wishart distribution). Accordingly, the posterior distribution of Σ is proportional to

|ζ|^(-(n+λ+2p+2)/2) exp{ -(1/2) tr( (λz + S) ζ^-1 ) },

and we further derive the posterior mode as

Σ^mode = (λz + S) / (n + λ + 2p + 2),

where S is the sample covariance and λ, z are two parameters to be estimated. We adopt an empirical Bayesian approach proposed in <cit.> to estimate λ and z. Specifically, the estimation criterion is to minimize the expected estimation loss between Σ̂^mode(λ̂, ẑ) (the posterior mode estimator) and Σ_0 (the true parameter) using the Kullback–Leibler distance:

min_λ̂, ẑ KL( Σ_0, Σ̂^mode(λ̂, ẑ) ) = min_λ̂, ẑ ∫ p(Y|Σ_0) log[ p(Y|Σ_0) / p(Y|Σ̂^mode(λ̂, ẑ)) ] dY.

The detailed estimation procedure and the empirically based estimates λ̂, ẑ are provided in Appendix A; no step in the procedure requires n ≫ p, which is desirable in our application. We further establish the theoretical guarantee that Σ̂^mode(λ̂, ẑ) converges to Σ_0 as n → ∞, where Σ_0 denotes the true covariance parameter (see details in Appendix B.1). Therefore, the mode covariance estimated by (<ref>) asymptotically converges to the true parameter.

§.§.§ Algorithm

We implement HIMA by Algorithm <ref>. In the algorithm, Y_(0) denotes the initial dataset, which can be initialized using any feasible method (mean imputation, median imputation, hot deck imputation, etc.). Since HIMA is an MI procedure, we impute Y_(0) M times by sampling from the posterior distribution and denote the m-th imputed dataset as Y_(m) (m∈[M]). For each Y_(m), we iteratively sample the multivariate missing data Y_mis and the parameters μ and Σ by following the two steps in the previous section. The posterior distributions converge at the T-th iteration, at which point we obtain one imputed dataset Y_(m). The output of the algorithm is M imputed datasets {Y_(m)}_m∈[M]. Subsequently, statistical inference can be made by collectively pooling the estimated results with various pooling methods (e.g., Rubin's rules outlined in <cit.>).

To summarize, HIMA retains the traditional imputation I-step of classical MI and introduces a relaxation in the posterior P-step to accommodate the large covariance matrix. Both steps are iterated sufficiently long until the sequence {(Y^[t]_mis, μ^[t], Σ^[t]): t=1,2,…} (an MCMC sequence) converges to a stationary distribution ℙ(Y_mis, μ, Σ | Y_obs). We justify Algorithm <ref> from a rigorous Bayesian point of view in Appendix B.2. Simulations have indicated that HIMA can perform well with a small number of imputations M (e.g., 5-20). The number of iterations T can be defined by users; in practice, a few cycles (e.g., 20-30) are typically sufficient to ensure the convergence of the distributions of the parameters and imputed values.

§ RESULTS

A real MRI dataset from a schizophrenia study <cit.> is used to assess the accuracy and efficiency of HIMA in comparison to existing imputation methods. The study focuses on cross-sectional neurovascular water exchange (Kw) data collected from 58 subjects (age: 37.8 ± 14.0; sex, 37 M: 21 F) with schizophrenia spectrum disorder.
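As a brief recap of the procedure evaluated in this section, the following minimal sketch summarizes one HIMA cycle (the I-step followed by the relaxed P-step) described in the Methods above. The array layout, the missing-data mask convention, and the assumption that λ and z have already been estimated (as in Appendix A) are illustrative; this is not the released implementation.

```python
import numpy as np

def hima_iteration(Y, mask, mu, Sigma, lam, z, rng):
    """One HIMA cycle. Y: (n, p) array with current values at missing entries,
    mask: boolean (n, p), True where observed; lam (scalar) and z (p x p) are
    the pre-estimated prior precision and prior mean of the covariance."""
    n, p = Y.shape
    # --- I-step: draw each subject's missing block from the conditional MVN ---
    for i in range(n):
        o, m = mask[i], ~mask[i]
        if not m.any():
            continue
        S_oo = Sigma[np.ix_(o, o)]
        S_mo = Sigma[np.ix_(m, o)]
        cond_mean = mu[m] + S_mo @ np.linalg.solve(S_oo, Y[i, o] - mu[o])
        cond_cov = Sigma[np.ix_(m, m)] - S_mo @ np.linalg.solve(S_oo, S_mo.T)
        Y[i, m] = rng.multivariate_normal(cond_mean, cond_cov)
    # --- P-step: plug in the posterior-mode covariance, then draw the mean ---
    Ybar = Y.mean(axis=0)
    S = (Y - Ybar).T @ (Y - Ybar)                  # scatter matrix, playing the role of S
    Sigma = (lam * z + S) / (n + lam + 2 * p + 2)  # posterior-mode update of the covariance
    mu = rng.multivariate_normal(Ybar, Sigma / n)
    return Y, mu, Sigma

# Usage sketch: repeat hima_iteration T times for each of the M imputations.
```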
HIMA is applied to two different versions of the data: (i) semi-synthetic data, which includes originally unknown missing entries and known missing entries that are artificiallyinserted; (ii) real data, which contains completely unknown missing values.The subjects were recruited from mental health clinics and through media advertisements in Baltimore, MD. Imaging data were collected using a diffusion-prepared arterial spin labeling protocol from 2019 to 2022 and preprocessed using ASL (BASIL) pipeline <cit.>.Detailed information on the imaging acquisition and preprocessing procedures can be found in Appendix C. This schizophrenia study aims to investigate the association between neural-capillary water exchange function and schizophrenia spectrum disorder, with a specific focus on 99 brain regions of interest obtained using the International Consortium for Brain Mapping (ICBM) brain template.§.§ Semi-synthetic data analysisThe primary measure of this schizophrenia study was the whole-brain average Kw values. We intend to apply our method to impute voxel-level missing values within each of the 99 regions individually because within-region Kw values are typically more homogeneous and strongly associated. Accordingly, the data structure to be imputed is denoted as y_ij^(r), where i∈[58] represents subjects, r∈[99] represents regions, and the number of voxels j∈[p^(r)] depends on region rThe overall missing rate of the data y_ij^(r): r∈[99] is 14.60%.We construct a semi-synthetic dataset by artificially removing voxel values randomly (i, j), i.e., random voxel locations of random subjects. Specifically, we randomly remove t data entries from each voxel vector, where t ranges from 1 to 8. On average, this process results in an additional 4.5 missing spots (out of 58) in each voxel vector.The missing rate of the semi-synthetic dataset is now 20.34%, compared to the original rate of 14.60%. We apply several processing steps to the semi-synthetic dataset as follows. First, we drop out voxel vectors with remarkably high missing rates, setting the threshold at 40% to ensure robust performance. Experiments demonstrate that in this dataset, HIMA may become unstable and less accurate if the voxel-vector-level threshold exceeds 40%. On average, only 3.17% of voxels exceed the threshold within each region. For example, in a region with 800 voxels, only 25 voxels need to be dropped out. Next, we perform kernel smoothing on subjects and align their smoothed probability density estimates to account for subject-level random effects.Lastly, we perform HIMA to impute the missing data on each post-processed Y^(r)^*={y^*_ij^(r)}, where we set M=15 (number of imputed datasets) and T=20 (number of iterations).Since these missing voxels are artificially removed, we can compare the imputed values with `true' (but removed) values to assess the performance of imputation algorithms. We first define an n× p indicator matrix τ for a data matrix Y={y_ij}_i∈[n],j∈[p], where each element τ_ij=1 if y_ij is observed; τ_ij=0 otherwise. Using τ, we assess imputation accuracy by the following metrics: (i) Weighted mean absolute error (wMAE): ∑_i∈[n]∑_j∈[p] I(τ_ij=0)× |y_ij^imputed - y_ij^true|  / √(Var(Y_.j))/∑_i∈[n]∑_j∈[p] I(τ_ij=0)for a single imputed dataset. We then compute the mean and standard deviation of wMAE across all M imputed datasets. (ii) Weighted mean square error (wMSE): ∑_i∈[n]∑_j∈[p] I(τ_ij=0)× (y_ij^imputed - y_ij^true)^2/√(Var(Y_.j))/∑_i∈[n]∑_j∈[p] I(τ_ij=0)for a single imputed dataset. 
Again, we collect the mean and standard deviation of wMSE across all M imputed datasets. (iii) Weighted mean bias error (wMBE): [ ∑_i∈[n] ∑_j∈[p] I(τ_ij=0) × (𝔼[y_ij^imputed] - y_ij^true) / √(Var(Y_.j)) ] / [ ∑_i∈[n] ∑_j∈[p] I(τ_ij=0) ], computed across all M imputed datasets, where 𝔼[y_ij^imputed] can be estimated by (1/M) ∑_m∈[M] y_ij^(m)imputed.

We select three regions for result demonstration: the right insular cortex (Ins), right caudate nucleus (Caud), and right hippocampus (Hippo), which are brain areas frequently associated with information processing in schizophrenia <cit.>. Based on ICBM, there are respectively 1196, 553, and 622 voxels in the right Ins, right Caud, and right Hippo. We applied HIMA to the post-processed data Y^*_Ins, Y^*_Caud, Y^*_Hippo and compared the imputation performance with frequently used approaches in both simple imputation (e.g., mean-substitution imputation) and multiple imputation (e.g., MICE). As mentioned previously, almost all existing MI methods and toolkits for brain imaging data are based on MICE. We provide the imputation error measures and computational time of these three methods in Table 1. In addition, <ref> shows the trace plots of estimates over iterations. Based on Table 1 and <ref>, we assess the imputation performance from the following three aspects:

* Accuracy: Based on various error metrics (wMAE, wMSE, and wMBE), HIMA shows lower imputation errors in all three selected brain regions, compared to mean imputation and MICE. Furthermore, the imputed results generated by HIMA exhibit lower variances, indicating high consistency and stability of the imputed results. In summary, HIMA demonstrates improved imputation accuracy, as evidenced by the reduction in errors and dispersion.

* Convergence: We created trace plots to visualize the convergence of the estimated mean of Kw values against iteration numbers. <ref> shows that both methods reached a stable posterior distribution after a few iterations, indicating quick convergence to stationarity. With HIMA, voxels converged to stationary estimates with smaller variations, suggesting a higher level of stationarity. Additionally, neither method produced any discernible trends, suggesting sufficient randomness in the estimates across iterations.

* Computational cost: Mean imputation is the fastest method in terms of running time due to its straightforward operation, as it does not require drawing and updating multiple values during each iteration. However, its imputation accuracy is relatively low. In contrast, HIMA employs the principle of MI methods and demonstrates significantly improved computational efficiency compared to MICE, while still reducing imputation errors. For regions with varying sizes and missing rates, HIMA shows over 2,000 times greater computational efficiency compared to conventional methods, indicating a significant advancement in the computational capacity for imputing ultra-high dimensional imaging data.

§.§ Real data analysis

In this real data analysis, we apply HIMA to each of the 99 distinct regions without artificially inserting any missingness this time. Hence, the dataset {y_ij}^(r), r∈[99], to be imputed is the raw MRI data, whose missing entries were mainly caused by image acquisition limitations and susceptibility artifacts. The overall missing rate of the data is 14.6%.
We first applied the same preprocessing procedures on each {y_ij}^(r): (i) exclude voxels with missing rates higher than 40% to ensure HIMA's stability and accuracy (on average, less than 3% voxels in this real dataset were excluded for each region); (ii) align kernel-smoothed probability density estimate of each subject to eliminate subject-level random effect during imputation. We next applied HIMA on each post-processed data {Y^(r)^*, r∈ [99]} with T=20 iterations and M=50 imputations. Here, we increased the number of imputations to better assess the stability and accuracy of imputed datasets since true information about missing Kw was not available.We used the same three schizophrenia-associated regions (right Ins, right Caud, and right Hippo) to evaluate the performance of imputation methods. Since the true values of missing data entries are unknown, previous error metrics cannot be evaluated. Instead, we evaluated the results by examining the distributions of the observed and imputed values.In <ref>(a), we plotted the imputed data against the observed data from three randomly selected voxels. The shape of the red points (imputed data) closely matches the shape of the purple dots (observed data). This alignment indicates the plausibility of the imputed values.In <ref>(b), we plotted the probability density estimate of the observed data against the imputed data for the same three voxels.The distributions of imputed values are similar to the observed values. In summary, the imputed values in general approximate the observed values, which can facilitate subsequent statistical inference with improved accuracy. § DISCUSSIONIn this study, we developed a multiple imputation tool, HIMA, specifically designed for analyzing high-throughput multivariate imaging data with missing values. It has been well-studied that simply neglecting missing values or relying on single imputation methods in brain imaging data analysis often leads to suboptimal accuracy in statistical inference <cit.>.In practical applications, however, missing values in neuroimaging data emerge at various spatial locations across different participants, introducing computational challenges for Bayesian-based multivariate MI methods with the MAR assumption.Particularly, the high dimensionality of imaging data leads to intractable posterior sampling of large covariance matrices and necessitates computational time of months when using classic multivariate MI tools, such as MICE. To meet the demand, we developed a new computational algorithm with remarkably improved efficiency for implementing the posterior sampling (in minutes). In addition to the improved computational efficiency, HIMA improves imputation accuracy. Both extensive simulation experiments and real data analysis demonstrated robust and accurate performance of multivariate missing data imputation. HIMA can perform effectively with up to 40% missing observations. Using semi-synthetic data, we showed that the imputed values by HIMA yield less bias in the mean and reduced dispersion when compared to existing methods. In short, HIMA provides a fast and accurate MI solution for multivariate neuroimaging data with varying missing values. The sample codes for HIMA are available at https://github.com/TongLu-bit/HighDim-MultipleImputation-HIMAhttps://github.com/TongLu-bit/HighDim-MultipleImputation-HIMA.Declaration of interest: none.Acknowledgments This project was in part supported by the National Institutes of Health under Award Numbers 1DP1DA04896801. 
We would also like to extend our sincere appreciation to Eric Goldwaser and Bhim Adhikari for their efforts in collecting, preprocessing, and providing the imaging data for this research.
http://arxiv.org/abs/2310.18527v1
{ "authors": [ "Tong Lu", "Chixiang Chen", "Hsin-Hsiung Huang", "Peter Kochunov", "Elliot Hong", "Shuo Chen" ], "categories": [ "stat.ME", "stat.AP", "stat.CO" ], "primary_category": "stat.ME", "published": "20231027225635", "title": "Multiple Imputation Method for High-Dimensional Neuroimaging Data" }
[email protected] Department of Physics and QUEST Center for Quantum Science and Technology, Bar-Ilan University, Ramat Gan 5290002, Israel Protocols of quantum information processing are the foundation of quantum technology, allowing to share secrets at a distance for secure communication (quantum key distribution), to teleport quantum states, and to implement quantum computation. While various protocols have already been realized, and even commercialized, the throughput and processing speed of standard protocols is generally low, limited by the narrow electronic bandwidth of the measurement apparatus in the MHz-to-GHz range, which is orders-of-magnitude lower than the optical bandwidth of available quantum optical sources (10-100 THz). We present a general concept and methods to process quantum information in parallel over multiplexed frequency channels using parametric homodyne detection for measurement of all the channels simultaneously, thereby harnessing the optical bandwidth for quantum information in an efficient manner. We exemplify the concept through two basic protocols: Multiplexed Continuous-Variable Quantum Key Distribution (CV-QKD) and multiplexed continuous-variable quantum teleportation. We demonstrate the multiplexed CV-QKD protocol in a proof-of-principle experiment, where we successfully carry out QKD over 23 uncorrelated spectral channels, with capability to detect eavesdropping in any channel. These multiplexed methods (and similar) will enable to carry out quantum processing in parallel over hundreds of channels, potentially increasing the throughput of quantum protocols by orders of magnitude. Multiplexed Processing of Quantum Information Across an Ultra-wide Optical Bandwidth Alon Eldan, Ofek Gilon, Asher Lagemi, Elai Fishman Furman, Avi Pe'er January 14, 2024 ==================================================================================== In the decades since the conception of quantum information in the 1980s, many practical applications were developed, ranging form secure communications <cit.> and quantum information transmission protocols <cit.>, through quantum sensing schemes <cit.> to quantum computation <cit.>. All these applications rely on some uniquely quantum properties, such as entanglement, squeezing, etc., to encode, process and decode the desired quantum information. Examples include various degrees of freedom of matter, such as the energy levels of trapped ions <cit.> and neutral atoms <cit.> or the phase / charge of Josephson junctions <cit.>; and of light, either in the discrete photon-basis (e.g. polarization <cit.>, spatial mode <cit.>, frequency <cit.> and time <cit.>) or in the continuous quadrature-basis (squeezing <cit.>).In many applications, the quadratures of the optical electric field are the backbone of optical quantum processing. Classically, the quadratures x,y are the cosine and sine components of the optical field at frequency ω E(t)=x cos(ω t) + y sin(ω t) = |a| cos(ω t+φ), where a=|a|e^iφ is the complex field amplitude. The quadratures are the real x=a+a^* and imaginary y=i(a-a^*) components of the complex amplitude. In quantum optics, the optical quadratures (x=(a+a^†)/2,y=i(a-a^†)/2) are non-commuting observables [x,y]=i/2 that maintain the canonical quantum uncertainty Δ x Δ y ≥ 1, analogous to the canonical position and momentum of quantum mechanics. 
One can therefore encode, store, process and decode quantum information on the optical quadratures.The focus of this paper is to harness the bandwidth of ultra-broadband optical sources to drastically enhance the rate of quantum optical processing. Most generally, quantum information processing can be broken into three primary stages: generation of the quantum state, manipulation of the state, and its measurement. While the speed of each stage can be limited by different factors, the primary bottleneck in quantum optical protocols is the measurement, where the relatively slow response of photo-detectors limits the processing rates at several orders-of-magnitudes below the optical bandwidth of available sources, even with the fastest available detectors. In particular, sources of broadband squeezed light with 10-100THz of bandwidth (up to an optical octave) are readily available <cit.>, as well as methods of broadband manipulation using pulse shaping in the spectral domain <cit.>. In contrast, the bandwidth of traditional measurement techniques was always limited by the narrowband electronic response of optical detectors, in the MHz-to-GHz range. Luckily, this electronic bandwidth limit was recently overcome with the conception of optical parametric homodyne <cit.>, which enables to measure an optical quadrature of interest across a wide, practically unlimited optical spectrum, opening the way to much faster quantum processing. We present a general approach for parallel processing of quantum information, encoded across the entire optical spectrum of the quadratures of broadband two mode squeezed light. We highlight a set of tools (see figure <ref>) to simultaneously generate, manipulate and measure quantum information over multiple frequency channels, up to 10^3 - 10^4 channels in realistic configurations, limited only by the available optical bandwidth. Here, these tools helped us to develop a multiplexed quantum teleportation protocol, which can teleport multiple quantum states simultaneously, as well as a multiplexed QKD protocol (BB84-like), which we demonstrated experimentally over 23 spectral channels in parallel. We note however that the presented toolkit is useful far beyond those two examples, and can be used to form multiplexed variations of any existing quantum protocol, thereby enhancing the processing throughput by orders of magnitude.§.§ Multiplexed QKDAs a first example for a multiplexed protocol, let us present (and later demonstrate experimentally) a simple protocol of multiplexed QKD. We note up front that this protocol is not intended as an immediate practical implementation of ultrafast QKD, but rather as an illustration of the viability of frequency-multiplexed quantum processing. Our scheme forms a continuous-variable analog of the BB84 protocol using an unseeded SU(1,1) interferometer. Specifically, both Alice and Bob have an unseeded OPA that generates broadband SPDC and a broadband phase modulation device that consists of a Fourier-domain spectral shaper (see figure <ref> and caption for details). Since an unseeded SU(1,1) interferometer generates a wide spectrum of signal-idler pairs, the different frequencies within this spectrum can be used as separate QKD channels.When a pump laser passes through two OPAs in series, the SPDC generation in the 2nd OPA can interfere with the SPDC generation in the 1st OPA, depending on the phase of the signal-idler pair relative to the pump. 
This leads us to the following 4-steps protocol: * To encodes her information, Alice modulates the phase of each signal-idler channel in one of two mutually unbiased bases (chosen at random): Basis 1 uses ϕ=0 (constructive interference) for logic '1' and ϕ=π (destructive) for logic '0', whereas basis 2 employs ϕ=±π/2. After the spectral modulation of all the channels (in parallel), Alice sends the phase modulated spectrum to Bob. * To detect the information, Bob randomly chooses a measurement basis (for each channel separately) using his spectral shaper - by setting the phase to 0 for basis 1 or to π/2 for basis 2, and passes the light again through his OPA, where the SU(1,1) interference occurs. Bob measures the spectrally resolved light intensity with a spectrometer, which reflects the number of photons in each channel at the output of the complete SU(1,1) interferometer. If Bob sets the correct basis for a channel, the interference of that channel at the output will be either fully constructive (high probability for photo-detection) or destructive (low probability) and Bob will be able to detect Alice's phase. However, if Bob sets the phase to the wrong basis, his interference will be intermediate, preventing Bob from deducing the information. * After the communication is complete, Alice and Bob use a public channel to compare their bases for each channel, keeping only the bits where the encoding and decoding bases matched. * Finally, to detect a possible eavesdropper, Alice and Bob compare a fraction of their data, searching for errors that Eve's measurements would have introduced (just like any other QKD protocol). The security of each channel within this scheme can be analyzed similar to the standard analysis of the BB84 protocol, as summarized hereon (the complete derivation of the security is given in the methods - section <ref>). Assuming a weak parametric gain in the OPAs, we can employ the perturbative quantum propagator through the nonlinear crystal <cit.>, as U(t) = e^iHt≈ 1 + iHt = 1 + ig_ω a^†_ω a^†_-ω, where a_±ω represent the field operators of the signal-idler mode pair at ω_p/2 ±ω and g_ω represent the parametric gain of that pair, which includes the interaction time, t (proportional to the crystal length). Assuming the vacuum state as the input to Alice's OPA, the output state after Bob's OPA is |ψ>_2 =|0> +ig_ω (1 + e^iϕ_ω) |1_ω, 1_-ω>,where ϕ_ω=ϕ_A + ϕ_B is the total phase that Alice and Bob apply to the signal-idler pair (relative to the pump field) during steps 1 and 2 of the protocol. The phase ϕ_A(B) indicates the basis of encoding (measurement) that Alice (Bob) employ for each bit of the channel. The average number of photon that Bob will measure at a specific channel isN_ω = |<1_ω|ψ>_2|^2 =|g_ω|^2(2 + 2cos(ϕ_ω))Notice that when Alice and Bob use different bases, ϕ_ω = ϕ_A + ϕ_B = π/2 / 3π/2 and the average number of photons is simply 2|g_ω|^2, independent of the phase. However, when the bases match, ϕ_ω = ϕ_A + ϕ_B = π / 0 the average number of photon equals 0 / 4|g_ω|^2 respectively, so Bob's measurement can decode Alice's information.When Eve tries to attack the communication the situation changes noticeably. For example, if Eve "steals" some of the light using a beam splitter with transmission T, then the number of photons after Bob's OPA becomes (see derivation in the methods, section <ref>)N_ω(T) =|g_ω| ^ 2(1 + T + 2Tcos(ϕ_ω))which diminishes the interference contrast and introduces errors for Bob. 
Thus, a good discriminator for eavesdroppers is the contrast of the interference after Bob's crystal,

V(T) ≡ (I_max - I_min)/(I_max + I_min) = 2T/(1+T).

The contrast is a witness for eavesdropping since Eve must steal some of the photons, i.e. reduce the transmission T, which will lower the contrast. Notice that Eve's ability to extract information relies on a similar interference contrast (in her own measurements), V_Eve = V(R=1-T) = 2(1-T)/(2-T). Thus, to obtain a sufficient contrast in her measurements, Eve must induce a sufficiently high loss in her beam splitter, which Alice and Bob can later identify.

§.§ Multiplexed Quantum Teleportation

Multi-channel quantum teleportation, which we propose and analyze here, is another example of harnessing the optical bandwidth to multiplex an important protocol of quantum information. Again we note that this protocol should not be judged as immediately applicable for technology, but rather as an illustration of the range of possibilities that our multiplexing scheme offers for quantum information. This new protocol is a broadband, multiplexed version of the Braunstein & Kimble protocol (suggested in <cit.> and demonstrated in <cit.>). For this scheme we utilize two sources of broadband squeezed vacuum (OPAs) to simultaneously teleport a set of quantum states across the spectrum of a general broadband field with arbitrary two-mode quadratures at each frequency (see figure <ref>). As opposed to the QKD protocol above, which operates in the regime of low squeezing with single pairs of entangled photons, the teleportation protocol we now discuss operates best in the high-squeezing regime. Thus, for the sake of presentation simplicity only, let us assume initially that the two sources are "infinitely squeezed", to the level that we can completely neglect one of their quadratures. Later we will alleviate this assumption and consider the implications of finite squeezing for the teleportation precision.

When the two highly squeezed sources (marked (1) and (2) on the figure) are mixed on a beam splitter (BS) with the correct phase, they generate an entangled quantum state, where the quadratures at the two outputs of the BS (marked (3) and (4)) are quantum-correlated. To teleport the input state, we mix it with one of the entangled arms (on another BS) and measure quadratures of the BS outputs (marked (5) and (6)). Based on this measurement, we introduce a quadrature shift (marked (7)) to the unmeasured entangled arm (marked (4)) to reproduce the original quadratures of the input state at the teleportation output (marked (8)).

To describe the steps of this protocol, we will use the field operator â_ω at each frequency, which can be decomposed into quadratures as â_ω = x̂_ω + iŷ_ω^† (here we use the definition of the two-mode quadratures, x̂_ω ≡ (1/2)(â_ω + â_-ω^†) and ŷ_ω ≡ (1/2i)(â_ω - â_-ω^†) <cit.>, which is a convenient generalization of the standard single-mode quadratures to the two-mode squeezed pair of signal-idler modes). With this definition, we can represent the input field operator as â_ω,in = ξ(ω)x̂_ω + iη(ω)ŷ_ω^† and the field operator of the squeezed state generated by an OPA as â_ω,OPA = X(ω)x̂_ω + iy(ω)ŷ_ω^†, where, without loss of generality, X represents the stretched two-mode quadrature (at each frequency ω), whereas y represents the squeezing of the other quadrature (dictated by the parametric gain g(ω) of the squeezers at each frequency, which ideally sets X=e^g, y=e^-g).
For convenience, we will drop the frequency index from now on (assuming we look at a specific frequency component of the broad spectrum). Let us now describe in detail each step of this protocol: * Two broadband squeezed sources generate two orthogonal squeezed states, â_1 = √(2)(Xx̂ + iyŷ^†) and â_2 = √(2)(xx̂ + iYŷ^†), where X,Y (y,x) are the stretched (squeezed) quadratures of the sources. We will assume that the squeezing is sufficiently high to ensure that the squeezed quadratures are small compared to the input field, i.e. x ≪ξ and y ≪η. * Using a beam splitter, the squeezed states are interfered to generate two quadratures-entangled states, â_3 = 1/√(2)(â_1 + â_2) ≈ Xx̂ + iYŷ^† and â_4= 1/√(2)(â_1 - â_2) ≈ Xx̂ - iYŷ^†, where x,y are neglected for now. * The broadband input state that we wish to teleport, represented by the field operator â_in = ξx̂ + iηŷ^†, is mixed with one of the entangled beams â_4 using a second beam splitter to obtain the encoded states, â_5≈1/√(2)(ξ-X)x̂ + i/√(2)(η+Y)ŷ^† and â_6≈1/√(2)(ξ+X)x̂ + i/√(2)(η-Y)ŷ^†.* The quadratures of the two encoded states are measured with parametric homodyne measurement simultaneously across the two-mode spectrum, such that the quadrature x̂ is measured for â_5 and ŷ^† for â_6. As a result, we obtain information on the difference of the signals quadrature, 1/√(2)(ξ-X) and 1/√(2)(η-Y) without any knowledge about the quadratures themselves. * The measurement results of the quadratures are transmitted through a classical channel to the desired teleportation location, where a strong coherent state (effectively classical) is generated from the received measurements (using spectral shaper) according to â_7≈α(ξ-X)x̂ + iα(η-Y)ŷ^†. To recreate the original input state at the teleportation output, we use this coherent state to shift the quadratures of the remaining part of the entangled state, â_3 using a beam splitter with high-transmission (t ≈ 1, r ≪ 1). To this end, we set α = t/r, which yields â_8 = tâ_3 + râ_7≈ξx̂ + iηŷ^† = â_in. It is important to note that the level of squeezing of the OPAs is a key factor for the fidelity of the protocol, indicating that the teleportation error is a direct result of the finite squeezing used. If we calculate the output of the protocol â_8without assuming high squeezing, i.e. including also the squeezed quadratures of the OPAs (x, y), the output operator becomes â_8 = t((2x + ξ)x̂ + i(2y + η)ŷ^†) ≈ (2x + ξ)x̂ + i(2y + η)ŷ^†,indicating that the residue of the squeezed quadratures act as a source of noise, added to the teleportation output. Thus, in order to reduce these errors, it is important to maximize the squeezing of the original signals. For example, one can enhance the squeezing level by replacing the single-pass OPAs in our protocol with multi-pass OPOs (optical parametric oscillator) that offer higher squeezing (up to 15 dB demonstrated <cit.>). §.§ Experimental Demonstration To demonstrate multiplexed quantum information processing across the optical spectrum we implemented the multi-channel QKD scheme of figure <ref> in a proof-of-principle experiment, as outlined in figure <ref>. Our configuration realized the simultaneous generation, control and measurement of multiplexed QKD frequency channels. The experimental setup is illustrated in figure <ref>a.During the experiment the pump passes through the OPA and generates a broadband spectrum of signal and idler pairs across 150nm bandwidth around 1560nm. 
The pairs are separated from the pump with a Harmonic Separator (H.S) and sent to a spectral shaper to encode Alice's information by modulating the phase of each frequency pair (channel) to 0, π or π/2, 3π/2 randomly (step 1). Simultaneously, the pump passes through an EOM used to stabilize the phase of the pump to that of the pairs, as well as to choose the measurement basis for Bob(same basis for all channels in this case). Finally, the pump and the pairs are reflected back to recombine and pass through the OPA once more in the opposite direction, completing the SU1,1 interference. To implement Bob's parallel detection of all channels we measure the output spectral intensity using a home-built spectrometer composed of a grating and a line CCD-camera (step 2). To maximize the data capacity while preserving the security of the protocol, the spectral width of the channels at the spectral shaper was chosen to be the smallest possible without leaking to the neighboring channels (see experimental verification in the methods, section <ref>). This allowed us to encode and decode in this preliminary configuration up to 23 channels in parallel with a bare interference contrast of 75%. This 23-fold enhancement of the data capacity (compared to a single channel) is far from any fundamental limit. It can easily be pushed up to >200 channels by improving the spectral resolution of both the encoding shaper and the spectrometer with standard technology of optical wavelength division multiplexing (see section <ref> in the supplementary).Figure <ref>b presents the experimental results. To demonstrate our ability to decode the information in both bases, we encoded the 23 channels (at random bases) using the spectral shaper and then, measured them all simultaneously, where in order to select the measurement basis we set the pump-phase to 0 or π/2. As can be seen, for each measurement the correct basis was detected with good visibility across the entire spectrum, allowing to decode the channels, whereas the channels in the incorrect basis showed no visibility at all. This confirms the ability of Alice and Bob to communicate freely, while preventing attacks on the communication by intercept-resend.As discussed above, our scheme is a CV, multiplexed version of the well-known BB84 protocol, indicating that all the security proofs of BB84 are directly applicable to our protocol as well. We chose to demonstrate the immunity of our scheme to the steal-attack, i.e. an attempt of Eve to split off some of the light between Alice and Bob, which is a most common attack on QKD. If Eve uses a beam splitter to "steal" part of the quantum state, she inevitably will reduce the interference contrast for Bob. We therefore simulated Eve's operation by introducing loss at the spectral shaper. Although the initial loss in our simple configuration was relatively high (44%, probably due to imperfect components and alignment), we could still clearly detect even a small additional loss of 5% since it visibly reduces the contrast (compared to the measurement error). For more details about the loss detection see the methods, section <ref>. §.§ Discussion High data throughput is an important attribute of any information processing scheme. 
Although the bandwidth of standard broadband sources of squeezed light and entangled photons can easily exceed 10 THz (even up to an octave in frequency <cit.>), the bandwidth resource of the light is yet to be utilized, mainly due to the lack of efficient measurement techniques with sufficient bandwidth. Our method harnesses optical parametric homodyne measurement in order to efficiently utilize the optical bandwidth and increase the processing capacity by several orders of magnitude. With this method we proposed and demonstrated a multiplexed, BB84-like protocol of QKD, as well as a new quantum teleportation protocol. In spite of the conceptual difference between the teleportation and the QKD applications, they both share the same set of tools for broadband state generation, broadband state manipulation and broadband state measurement. If we examine these quantum tools in a broader view, we can see that they are applicable in the general context of broadband quantum information processing, well beyond QKD and teleportation. An evident example would be to realize frequency-multiplexed versions of other protocols of quantum communication, such as entanglement-based QKD <cit.>, quantum coin flipping <cit.>, entanglement-based sensing <cit.>, etc. Furthermore, repeating those operations and combining them in various manners is key to implementing a broadband quantum network with ultra-fast communication speed. Another promising direction of application is high-bandwidth quantum computation. Specifically, the ability to simultaneously generate, control and measure a large set of separated squeezed qubits (implemented as signal-idler pairs, as presented above), along with a multiplexed two-qubit operation, is sufficient for universal quantum computation that will exploit the frequency dimension to generate much larger entangled states with the same squeezing resources <cit.>. Note that a multiplexed two-qubit gate was already demonstrated across the quantum frequency comb of a broadband OPO that is either pumped by several frequencies <cit.> or phase-modulated in time <cit.>. Such a multiplexed quantum computer will be naturally compatible with the multiplexed quantum network mentioned above. §.§ Acknowledgements This research was funded in part by the SPARQL consortium, under the QuantERA program of the EU. § METHODS §.§ Security Analysis of Frequency-Multiplexed QKD In section <ref> of the main text above, we proposed a frequency-multiplexed QKD scheme that employs broadband squeezing and broadband quantum detection, and described its use to share information securely between Alice and Bob using the phases of multiple frequency channels. The security of these channels relies on the ability to efficiently detect eavesdroppers, which requires identifying a difference between the quantum state with and without an eavesdropper (and measuring this difference). For an undisturbed system, the output state is given by equation <ref> (section <ref>), |ψ>_2 = |0> + ig_ω (1 + e^i(ϕ_A + ϕ_B)) |1_ω, 1_-ω>, which yields the phase-dependent detection probability of photons in mode ω, as given by equation <ref> (section <ref>), N_ω = |g_ω|^2(2 + 2cos(ϕ_A + ϕ_B)). In what follows, we analyze how this output measurement will change under two major attacks, the steal attack and the intercept-resend attack, and how these attacks will be reflected in the statistics of the results.
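As a simple illustration of the basis-dependent statistics in the last equation, the short Python sketch below evaluates N_ω for Alice's four phase encodings and Bob's two basis choices; the gain value |g_ω|^2 is an arbitrary illustrative number, not a parameter of the experiment.

import numpy as np

# Minimal sketch (illustrative only; not the experimental analysis code).
# Mean photon number per frequency mode after Bob's OPA,
#   N_w = |g_w|^2 * (2 + 2*cos(phi_A + phi_B)),
# evaluated for Alice's four phase encodings and Bob's two basis choices.
g2 = 0.05                                   # assumed |g_w|^2 (small gain)
encodings = {"0": 0.0, "pi": np.pi, "pi/2": np.pi/2, "3pi/2": 3*np.pi/2}
for phi_B in (0.0, np.pi/2):                # Bob's pump phase (basis)
    print(f"Bob basis phi_B = {phi_B:.2f}")
    for label, phi_A in encodings.items():
        N = g2 * (2.0 + 2.0*np.cos(phi_A + phi_B))
        print(f"  Alice sends {label:>5}:  N_w = {N:.3f}")
# Encodings in the matched basis give the maximal/minimal N_w (a readable
# bit); encodings in the other basis give the mid value 2*g2 (no information).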
§.§.§ Steal Attack Under a steal attack, Eve tries to "steal" some of the transmitted light by a beam splitter in the channel with a reflection R representing the stolen amplitude. Eve can then try to use her stolen part to obtain some information (even partial) on the generated key. In this case, Eve modifies the transmitted state to Bob by mixing it with another vacuum mode |0>_2 through the beam-splitter, resulting in:|ψ>_BS =|0>_1|0>_2 +ig_ωe^iϕ_A[ t^2|1_ω, 1_-ω>_1|0>_2; - r^2|0>_1|1_ω, 1_-ω>_2; +irt |1_ω, 0_-ω>_1|0_ω, 1_-ω>_2; +irt |0_ω, 1_-ω>_1|1_ω, 0_-ω>_2 ],where t,r are the transmission and reflection amplitudes of Eve's beam-splitter (t^2+r^2=1).Then, the quantum state after Bob's OPA becomes:|ψ>_BS =|0>_1|0>_2 +ig_ωe^i(ϕ_A + ϕ_B)[ (e^-i(ϕ_A + ϕ_B) + t^2) |1_ω, 1_-ω>_1|0>_2;- r^2|0>_1|1_ω, 1_-ω>_2;+irt |1_ω, 0_-ω>_1|0_ω, 1_-ω>_2;+irt |0_ω, 1_-ω>_1|1_ω, 0_-ω>_2 ], where the modulation phase, ϕ_B sets Bob's measurement basis. The average number of photons that Bob will measure in mode ω is:N_ω = |<1_ω|ψ>_BS|^2 =|g_ω| ^ 2(1 + T + 2Tcos(ϕ_ω))where T = t^2, R=r^2 are the transmission / reflection probabilities.The primary method to detect Eve is through the errors she will induce in the measurements, which appear in two major forms: First, Eve's beam-splitter may steal one photon of an entangled pair, but not the other, which leads to the observation of single photons without a matching twin (i.e. the states |1_ω, 0_-ω> and |0_ω, 1_-ω>). Since these possibilities cannot exist in the ideal case, they are good indicators for the amount of loss in the communication channel, which we attribute to Eve. The second type of errors is the "information" errors, where the outcome of legitimate measurements is altered. Specifically, photons can be detected even when the interference is destructive, which reduces the interference contrast. Thus, evaluating the contrast of a large set of measurements is a good discriminator, which is given byV ≡I_max - I_min/I_max + I_min = 2T/1+T.The contrast is a witness for eavesdropping since Eve must steal some of the photons, i.e. reduce the transmission (T) which will lower the contrast. Additionally, we can notice that a background signal (such as noises and signals generated by the attacker) will reduce the contrast even further, and so reduce the signal's credibility. Notice that Bob and Alice can easily calculate both I_max and I_min during the comparison step of our QKD protocol (section <ref>, step 4). Specifically, when Alice reveals the transmitted states of the compared data, this allows Bob to calculate the average number of photons that he received for constructive (destructive) interference and obtain I_max (I_min). Note that for Eve to obtain useful information, she must steal a substantial fraction of the light, since the transmission of her beam splitter (from Alice to Bob) acts as the loss value for Eve. For example, stealing 5% is equivalent to 95% loss for Eve, which results in only ∼10% contrast for Eve and provides little information. §.§.§ Intercept-Resend Attack Intercept-resend attack was introduced originally with the BB84 protocol <cit.>. 
During an intercept-resend attack, Eve tries to imitate Alice by reading the quantum state between Alice and Bob and generating a new quantum state according to her measurement, that she sends to Bob.Following the calculation in section <ref>, the average number of photons that Eve will measure after her OPA isN_ω = |g_ω|^2(2 + 2cos(ϕ_A + ϕ_E)),where ϕ_E is the modulated phase by Eve.We can gain intuition to the limitations of the intercept-resend attack by considering the simplifying assumption that the number of measured photons during each integration time is exactly 1 (in every shot) for constructive interference (and zero for destructive). In this case, Eve cannot gain any information about Alice's basis, since her measurements will always yield either one photon or none, independent of Alice's encoding. Now, exactly as in BB84, if Eve measured in the correct basis, she will know the state correctly and will be able to impersonate Alice perfectly. However, if Eve measured in the incorrect basis, her reading is random and she cannot recover the encoded phase, thereby introducing an error to the channel with probability 0.5. The total error probability per bit is therefore P_Err = 0.25, as in BB84, which can be detected easily. §.§ Channels Uncorrelation MeasurementIn our experiment of multiplexed QKD (described in section <ref>), we used a ∼100nm bandwidth of SPDC to encode and decode 23 QKD channels simultaneously. Those frequency channels were separated and controlled using a pulse shaper, composed of a grating (to spread the spectrum into different angles), a lens, and an SLM to encode the phase (per channel). A spectrometer was used to measure all the channels simultaneously, composed of a grating, a lens and a line-CCD camera. In theory, the only mechanism for crosstalk may be the finite frequency resolution of the shaper and the spectrometer (due to the diffraction limit). In practice additional technical imperfections in both the SLM and the camera can lead to crosstalk, such as voltage leakage between neighboring pixels on the SLM. Such problems may cause unwanted correlation between neighboring channels, which lead to security problems. To rule out crosstalk between neighboring channels of the experiment (and so ensure the security of our scheme), we measured the correlation between the channels, as shown in figure <ref>. In optimal conditions of alignment (diffraction-limited resolution for Alice's shaper and for Bob's measurement spectrometer (CCD pixels), with perfect correspondence between them) no correlation was observed between neighboring channels down to the noise floor of our measurement (figure <ref>a). However, when misalignment was introduced, correlation between neighboring channels appeared in the form of leakage of the phase modulation from one channel into the measurement of the neighboring channel (see figure <ref>b). This correlation is an example of information leakage that may help an eavesdropper to reveal data on one channel from the measurement of its neighbor.In addition to security verification, the uncorrelation measurement serves another purpose - to maximize the number of channels for a given experimental configuration. We used this measurement to find the smallest spectral separation between channels that preserves the uncorrelation property, allowing to maximize the number of channels within the available spectrum. Thus, the uncorrelation measurement is an important step in the calibration of our experiment. 
This resulted in 130μ m wide channels with 30 μ m wide gaps between them. §.§ Loss DetectionTo verify the security of our multiplexed QKD scheme, we demonstrated the detection of eavesdroppers. Experimentally, we identified the steal attack and quantified its detectability. As we have seen in the security analysis above (section <ref>), a steal attack reduces the average number of photons in mode ω to N_ω = |g_ω|^2(1+T+2Tcos(ϕ_ω)) (eq. <ref>) which we can measure. However, as we have seen in <ref>), the better detection parameter is the contrast (visibility) of the interference after the second crystal, which is given by V ≡I_max - I_min/I_max + I_min = 2T/1+T. (eq. <ref>). The contrast is a good witness for eavesdropping since Eve must steal some of the photons, i.e. reduce the transmission (T), which will lower the contrast and therefore introduce errors. Figure <ref> shows the measured dependence of the contrast on the loss (equation <ref>) along with the theoretical curve, with very good agreement, indicating our ability to detect the loss from the contrast. The loss was introduced by amplitude modulation with the SLM. The only fit parameter of the theoretical curve in figure <ref> was the initial loss of the channel which turned out to be around 44%. This high loss was probably caused by a combination of the finite diffraction efficiency of the gratings and the SLM, imperfect AR-coating on the optical components, uneven propagation of the pump, the signal and the idler and other practical factors, which can all be improved in future experiments. And yet, we can clearly detect even a small additional loss of 5% since it reduces the contrast visibly (compared to the measurement error). Thus, Eve will be easily detected even with realistic, rather high initial losses. Note that for Eve to obtain useful information, she must steal a substantial fraction of the light since high loss directly affects the amount of data received during the communication. § SUPPLEMENTARY INFORMATION§.§ Experimental considerations on the Number of ChannelsAlthough the spectrum of our SPDC source is continuous, the effective number of channels is limited by the spectral resolutions of both the shaper and the spectrometer. These spectral resolutions are governed digitally by the available number of pixels on the SLM (or linear CCD camera) within the spectrum and by the analog spectral resolution of the shaper (spectrometer), as dictated by the grating line-density and by the diffraction limit of the lenses. In this section we explain how all these parameters were tuned to optimize the number of channels in our specific experiment, laying out the "points for future improvement" that can serve future experiments.Let us first consider the spatial width of each channel on the SLM (or spectrometer). In order to make the most out of the available pixels we optimize the diffraction limit of the lens d ≈ 2.44λ f/D to match approximately the pixel size. This sets an appropriate focal length for the lenses:f_lens≈ 0.4d_pixelD/λWhere d is the diameter of the beam at the focal point, λ is the channel wavelength (1560nm in our setup), D is the beam's diameter on the lens and f is the lens focal length.The second factor to consider is the SPDC spectrum, which is limited by the phase matching bandwidth of the PPLN OPA. To utilize the most of the available SPDC, one needs to span the SPDC spectrum across the entire SLM (camera). For this we need to consider the angular dispersion of the grating on the SPDC spectrum of width 2ω. 
The diffraction angle θ of the grating adheres tod sinθ = λ.To first order approximation sinθ≈θ we obtaind ·θ_ω = 2π c/ω,which allows to calculate the angular span of the spectrum asd ·Δθ = 2π c/ω_p/2 - ω - 2π c/ω_p/2 + ω = 4ωπ c/ω_p^2/4 - ω^2.Finally, we choose the grating period d, such that the spectrum will cover the spatial span L of the SLM (camera) at the Fourier plane of the lens, which requires thatf_lens·Δθ = L.Clearly, the aperture of the focusing lenses D must be sufficient to capture the complete angular spectrum of the light (D>L). We can now use equation (<ref>) to obtain the optimal grating period:d = f_lens/L4ωπ c/ω_p^2/4 - ω^2 The final factor to consider is chromatic dispersion. The optical elements in the beam, such as lenses, filters, polarizers, etc. incur chromatic dispersion on the SPDC spectrum due to the variance of the index of refraction with frequency. Specifically, variation of the phase-sum of the signal-idler pair due to dispersion will shift the overall phase of that pair (channel), resulting in a shift of the interference pattern at this channel. Dispersion therefore should be either compensated or calibrated in order to correctly detect the transmitted information. Dispersion correction can be included in two ways: First, for relatively weak dispersion, where the phase variation across the bandwidth of a single channel is negligible, we can use the phase modulation of the SLM to pre-compensate the dispersion of each channel, which for our experiment was sufficient. However, for cases of high dispersion, as may be the case after passage through a long optical fiber, the phase variation across a single channel may no longer be small, which will lead to reduction of the interference contrast, or even to complete wash-out of the spectral interference fringes. In such a case, physical compensation of the dispersion will be necessary (e.g. with a negative-dispersion fiber or with a prism-pair, etc.).
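As a rough numerical illustration of these design relations, the Python sketch below evaluates the lens focal length and the grating period for one assumed set of parameters; the pixel pitch, beam diameter, SLM span and SPDC bandwidth are illustrative choices, not the values of our setup.

import numpy as np

# Illustrative sketch of the design relations above (all parameter values
# are assumptions for this example).
c = 3.0e8                      # speed of light [m/s]
lam = 1560e-9                  # degenerate signal/idler wavelength [m]
lam_pump = lam / 2             # pump wavelength [m]
d_pixel = 25e-6                # assumed SLM/camera pixel pitch [m]
D = 25e-3                      # assumed beam diameter on the lens [m]
L = 512 * d_pixel              # assumed spatial span of the SLM/camera [m]
span = 150e-9                  # full SPDC bandwidth around 1560 nm [m]

# Lens focal length matching the diffraction-limited spot to one pixel.
f_lens = 0.4 * d_pixel * D / lam

# Grating period spreading the SPDC spectrum across the span L.
w_p = 2*np.pi*c / lam_pump                 # pump angular frequency
w = 2*np.pi*c * (span/2) / lam**2          # half-width of the SPDC spectrum
d_grating = (f_lens / L) * 4*np.pi*c*w / (w_p**2/4 - w**2)

print(f"f_lens    = {f_lens*1e3:.0f} mm")
print(f"d_grating = {d_grating*1e6:.2f} um (~{1e-3/d_grating:.0f} lines/mm)")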
http://arxiv.org/abs/2310.17819v3
{ "authors": [ "Alon Eldan", "Ofek Gilon", "Asher Lagemi", "Elai Fishman Furman", "Avi Pe'er" ], "categories": [ "quant-ph", "physics.optics" ], "primary_category": "quant-ph", "published": "20231026235020", "title": "Multiplexed Processing of Quantum Information Across an Ultra-wide Optical Bandwidth" }
Acronyms: BSC - binary symmetric channel; LDPC - low-density parity-check; MAP - maximum a posteriori; deg-MAP - degenerate MAP; PCM - parity-check matrix; CSS - Calderbank-Shor-Steane. Optimal Single-Shot Decoding of Quantum Codes. Aldo Cumitini (0009-0006-5962-4880), Stefano Tinelli (0009-0008-2336-5885), Balázs Matuz (0000-0002-0133-6564), Francisco Lázaro (0000-0003-0761-7700), Luca Barletta (0000-0003-4052-2092). Stefano Tinelli, Balázs Matuz and Francisco Lázaro are with the Institute of Communications and Navigation of DLR (German Aerospace Center), Wessling, Germany. (email: {stefano.tinelli, balazs.matuz, francisco.lazaroblasco}@dlr.de) Corresponding author: Stefano Tinelli. Aldo Cumitini and Luca Barletta are with Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy. (email: [email protected], [email protected]) The first two authors contributed equally and are listed in alphabetical order. Copyright 2023 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [email protected] 14, 2024. We discuss single-shot decoding of quantum CSS codes with faulty syndrome measurements. We state the problem as a joint source-channel coding problem. By adding redundant rows to the code's parity-check matrix we obtain an additional syndrome error correcting code which addresses faulty syndrome measurements. The redundant rows are chosen to obtain good syndrome error correcting capabilities while keeping the stabilizer weights low. Optimal joint decoding rules are derived which, though too complex for general codes, can be evaluated for short quantum codes. Optimal decoding, CSS codes, quantum error correcting codes, joint source-channel coding. § INTRODUCTION Recently, quantum information technologies have attracted great interest, since for certain applications they promise significant advantages compared to conventional technologies. One prominent example is Shor's algorithm for finding the prime factors of an integer <cit.>, which provides an exponential speed-up compared to the best known classical algorithm. A major challenge for quantum computers is decoherence, i.e., the unintended interaction of qubits with their environment that leads to a loss of quantum information. This calls for powerful quantum error correction schemes. Although the noisy codeword cannot be directly observed, syndrome measurements using ancilla qubits can be performed in order to extract information about the errors affecting a quantum system <cit.>.
However, the quantum circuits used to extract these syndrome measurements are themselves faulty. Thus, one has to deal with both qubit and syndrome measurement errors. A straightforward approach to combat syndrome errors is to repeat the syndrome measurements multiple times <cit.>, a process known as Shor's syndrome extraction. In Shor's syndrome extraction the number of measurement repetitions has to scale linearly with the code distance in order to achieve fault tolerance. An alternative relies on so-called single-shot error correction <cit.>, which implies carrying out redundant syndrome measurements that are not necessarily repetitions of the previously carried out measurements, but rather linear combinations thereof <cit.>. Such linear combinations might be subject to a higher measurement uncertainty, but employing them it is sometimes possible to achieve fault tolerance using only a constant number of measurement rounds <cit.>. This work focuses on single-shot decoding of quantum error correcting codes. In particular, by stating the problem as a joint source-channel coding problem, we gain further insights into the construction of the syndrome error correcting code. We derive the optimal joint decoding rule (for the qubit and syndrome codes), as well as a relaxation thereof that ignores error degeneracy. The evaluation of these expressions is in general complex, albeit feasible for small codes. Finally, experimental results illustrate the performance of different syndrome error correcting code constructions. § QUANTUM ERROR CORRECTION We consider [[n_q,k_q]] CSS codes <cit.>. The code constraints can be represented by a binary (n_q-k_q) × 2n_q parity-check matrix of the form H_q = [ H_X 0; 0 H_Z ]. The (n_q - k_x) × n_q and (n_q - k_z) × n_q sub-matrices H_X and H_Z (with k_q = k_x + k_z - n_q) must fulfill H_X H_Z^⊤ = 0 to comply with the commutation requirement of the stabilizers. In quantum systems, it is not possible to measure the qubits without perturbing the state. Instead, quantum error correction is performed on the basis of so-called (quantum) syndrome measurements that yield a syndrome vector s_q. This vector can be expressed as s_q^⊤ = H_q e_q^⊤, where e_q = [ e_Z | e_X ] is a binary vector of length 2n_q uniquely associated with a Pauli error. In particular, when the i-th qubit is subject to a Pauli X error, the i-th element in e_X is set to one, whereas when it is subject to a Pauli Z error the i-th element of e_Z is set to one. § SYSTEM MODEL As in <cit.>, we model the channel error vector e_q = [ e_Z | e_X ] as the output of a BSC which introduces independent Z and X errors with the same probability ϵ. Due to the independence of X and Z errors, we can decode them independently using the matrices H_Z and H_X, respectively. To simplify notation, in the following, we drop the subscripts X and Z. Thus, we denote by H the (n-k) × n binary parity-check matrix of an (n,k) linear block code 𝒞. The error vector is denoted by e. The (error-free) syndrome s is computed as s = e H^⊤. In quantum systems, not only the qubits are subject to errors, but also the syndrome measurements can be faulty. In this work, we model errors in the syndrome measurements as the transmission of the syndrome over a BSC with error probability δ. To provide resilience against syndrome errors, s is encoded using an (m,n-k) binary linear code 𝒞_s with an (n-k) × m generator matrix G_s. This yields a redundant (or encoded) syndrome z.
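To make the encoding chain concrete, the following Python sketch traces an error vector through the syndrome computation and the syndrome encoder; the small H and G_s are toy matrices chosen for illustration, not the codes studied in this paper.

import numpy as np

# Toy GF(2) sketch of the encoding chain above: an error e is compressed to
# the syndrome s = e H^T and then encoded to the redundant syndrome
# z = s G_s (all arithmetic mod 2).
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]], dtype=int)     # (n-k) x n parity-check matrix
G_s = np.array([[1, 0, 1],
                [0, 1, 1]], dtype=int)          # (n-k) x m syndrome encoder

e = np.array([0, 1, 0, 0, 1], dtype=int)        # a qubit (X or Z) error pattern
s = e @ H.T % 2                                  # error-free syndrome
z = s @ G_s % 2                                  # redundant syndrome
H_o = G_s.T @ H % 2                              # overcomplete m x n matrix
print("s =", s)                                  # -> [1 0]
print("z =", z)                                  # -> [1 0 1]
print("e @ H_o^T =", e @ H_o.T % 2)              # -> [1 0 1], consistent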
Figure <ref> illustrates the abstracted model of our transmission system. §.§ Syndrome Error Probability Let h_1,…,h_n-k denote the rows of H. In quantum jargon, these rows are referred to as the stabilizers of the code, where each stabilizer is associated with a syndrome measurement. In order to perform the syndrome measurement associated with the i-th stabilizer h_i, typically an ancilla qubit is injected, and it needs to interact with w(h_i) data qubits, where w(h_i) denotes the Hamming weight of h_i. A simple error model for syndrome measurement errors is obtained assuming that each of these interactions fails with a given probability q <cit.>, yielding the following syndrome measurement error probability P(z_j ≠ z̃_j) = ∑_{i odd} \binom{w(h_j)}{i} q^i (1-q)^{w(h_j)-i}. Observe that the probability in (<ref>) increases with the Hamming weight of h_j. Since h_j for j∈{1,…, m} may not have constant weights, it is convenient to define the average error probability δ as δ = (1/m) ∑_{j=1}^m P(z_j ≠ z̃_j), which is the syndrome error probability assumed throughout this work. § SYNDROME ERROR CORRECTING CODE Elaborating on the redundant syndrome z, we obtain z = s G_s = e H^⊤ G_s = e H_o^⊤. By definition, we have rank(G_s) = n-k and rank(H) = n-k. Exploiting well-known results from linear algebra, it follows that the matrix H_o = G_s^⊤ H has size m × n and rank(H_o) = n-k, since rank(H_o) ≤ min(rank(G_s), rank(H)) and rank(H_o) ≥ rank(G_s) + rank(H) - (n - k), where the lower bound on the rank of H_o is also known as Sylvester's inequality. We make the following observations. First, the m × n matrix H_o, m > n-k, is overcomplete, i.e., it contains linearly dependent rows. These linearly dependent rows enable correction of syndrome errors. Second, (<ref>) describes a joint source-channel coding problem <cit.>, where e is first compressed with the help of H yielding s, which is then encoded to z. §.§ Code Construction Let H_o = [ H; P ], where the (m-n+k) × n matrix P represents the redundant part of H_o. Note that any matrix H_o can be rearranged as in (<ref>), e.g., by means of Gaussian elimination that identifies n-k linearly independent rows. Then, the generator matrix G_s = [ I | A ] (in systematic form) of the syndrome error-correcting code is the solution of A^⊤ H = P. One may use Gaussian elimination to solve (<ref>) for A. In the sequel, we assume that the stabilizers of the quantum error correcting code 𝒞, hence H, are given. Under this constraint, we are interested in the design of the syndrome error correcting code specified by G_s. The classical code design approach is to find a code 𝒞_s with good distance, and thus, good error-correcting properties. However, in the quantum setting, we aim at matrices H_o with low-weight rows, which not only facilitate implementation, but also minimize the syndrome measurement error probability δ (see Section <ref>). More precisely, we would like to ensure that P is sparse, which is not guaranteed even if H and G_s are sparse. In this work, starting from H we generate low-weight redundant rows. The problem of finding a sparse representation of a code can be addressed, e.g., by relying on probabilistic approaches as in <cit.>. For the short code examples in Sec. <ref> we can directly exploit the structure of H and construct m'>m low-weight redundant rows (which form P) simply by inspection. These m' rows form a low-rate (m',n-k) syndrome error correcting code 𝒞_s'. Finally, we select m of these rows to obtain an (m, n-k) code 𝒞_s.
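For small codes, the relation A^⊤ H = P can be solved row by row: each redundant row p of P is expressed as a GF(2) combination of the rows of H, and the combination coefficients give one column of A. The Python sketch below does this by Gaussian elimination over GF(2); the matrices are toy examples, not the constructions of the following sections.

import numpy as np

# Sketch: solve x H = p over GF(2) for one redundant row p, giving one
# column of A in G_s = [ I | A ].  Plain Gaussian elimination; toy matrices.
def solve_gf2(H, p):
    """Return x with (x @ H) % 2 == p, or None if no solution exists."""
    M = np.concatenate([H.T % 2, p.reshape(-1, 1) % 2], axis=1)
    nrows, ncols = M.shape                      # ncols = (n-k) + 1
    pivots, r = [], 0
    for col in range(ncols - 1):
        hits = np.nonzero(M[r:, col])[0]
        if hits.size == 0:
            continue
        M[[r, r + hits[0]]] = M[[r + hits[0], r]]    # bring a pivot up
        for row in range(nrows):                     # eliminate the column
            if row != r and M[row, col]:
                M[row] = (M[row] + M[r]) % 2
        pivots.append(col)
        r += 1
        if r == nrows:
            break
    zero_rows = (M[:, :-1].sum(axis=1) == 0)
    if np.any(M[zero_rows, -1]):                     # inconsistent system
        return None
    x = np.zeros(ncols - 1, dtype=int)
    for i, col in enumerate(pivots):
        x[col] = M[i, -1]
    return x

H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]], dtype=int)
p = (H[0] + H[1]) % 2              # a candidate redundant row (sum of rows)
print(solve_gf2(H, p))              # -> [1 1]: the corresponding column of A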
For the codes under consideration, the selection can be done by an exhaustive search to maximize the minimum distance (and minimize the multiplicity of minimum weight codewords). Alternatively, we will also provide examples of a concatenation of the syndrome error correcting code with a repetition code, since it is always possible to repeat the syndrome measurements. § DECODING §.§ Degenerate Maximum A Posteriori Decoding In the quantum setting, error operators that differ by a stabilizer are indistinguishable from each other, i.e., they lead to the same quantum state. Thus, it is possible to group the error operators into cosets ℰ, which can be thought of as equivalence classes. All errors in a coset ℰ can be corrected by the same recovery operator. The task of a degenerate decoder is to identify the right coset and to apply the respective correction on the corrupted state. Hence, a degenerate MAP decoder computes the most probable coset given the noisy syndrome observation z̃ as ℰ̂ = argmax_ℰ P(ℰ | z̃) = argmax_ℰ P(z̃ | ℰ) P(ℰ) / P(z̃) = argmax_ℰ P(z̃ | ℰ) P(ℰ). Elaborating on P(z̃ | ℰ) we obtain P(z̃ | ℰ) = P(z̃, ℰ)/P(ℰ) \overset{(a)}{=} (1/P(ℰ)) ∑_{e∈ℰ} P(z̃, e) = (1/P(ℰ)) ∑_{e∈ℰ} P(z̃ | e) P(e), where in (a) we exploited the fact that the error events in ℰ are all disjoint. Inserting (<ref>) in (<ref>) we obtain ℰ̂ = argmax_ℰ ∑_{e∈ℰ} P(z̃ | e) P(e). Note that evaluating the expression in (<ref>) requires processing all 2^n different error vectors, and is thus only feasible for small values of n. In this paper, the following error model is assumed. Let d(z, z̃) be the Hamming distance between the vectors z and z̃. The probability associated with an error vector e is defined as P(e) = (ϵ/(1-ϵ))^{w(e)} (1-ϵ)^n, so that we have P(z̃ | e) = P(z̃ | z(e)) = (δ/(1-δ))^{d(z(e), z̃)} (1-δ)^m, where z(e) is the redundant syndrome vector induced by the error pattern e. §.§ Maximum A Posteriori Decoding Ignoring the effect of degeneracy, a classical MAP decoder would compute ê = argmax_{e∈𝔽_2^n} P(e | z̃) = argmax_{e∈𝔽_2^n} P(z̃ | e) P(e) = argmax_{e∈𝔽_2^n} P(z̃ | z(e)) P(e). The expression in (<ref>) can be computed using (<ref>) and (<ref>). Note again that evaluating (<ref>) requires processing all 2^n error vectors. Let us now see how this complexity can be reduced. First, P(z̃ | z(e)) = P(z̃ | s(e)) has to be evaluated for 2^{n-k} different syndrome vectors. This is because all the 2^k error vectors in a coset differ by a stabilizer and yield the same corrupted state, thus also the same syndrome. Second, for a given syndrome vector s, the lowest weight error vector e^*(s) which is consistent with s maximizes the expression in (<ref>).[We require ϵ<0.5 and in case there are multiple error vectors with the lowest weight, we pick one of them randomly.] Therefore, before decoding, we can determine a one-to-one mapping between s and e^*(s). For this step, we need to check at least 2^{n-k} error patterns, but usually less than 2^n. This step has to be performed once for a given code prior to decoding. Thus, we can reformulate MAP decoding as follows: ŝ = argmax_{s∈𝔽_2^{n-k}} P(z̃ | s) P(e^*(s)). The estimated error vector ê^*(ŝ) can be directly obtained from ŝ through the one-to-one mapping. According to (<ref>), 2^{n-k} < 2^n syndrome vectors have to be processed during decoding. § EXPERIMENTAL RESULTS We present experimental results for two families of CSS codes whose parity-check matrix structure is as in (<ref>). In both cases, the submatrices H_X and H_Z represent two equivalent codes. Therefore, similarly to <cit.>, we only show simulation results for the code represented by H_X over the BSC with error probability ϵ.
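As a concrete reference for the decoding rule used in these simulations, the following brute-force Python sketch implements the reformulated MAP rule above for a toy code; the matrices and channel parameters are illustrative values, not those of the codes simulated in this section, and the approach is feasible only for very small n.

import numpy as np
from itertools import product

# Brute-force sketch of the reformulated MAP rule (toy code, illustrative
# channel parameters).
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]], dtype=int)          # (n-k) x n
G_s = np.array([[1, 0, 1],
                [0, 1, 1]], dtype=int)              # (n-k) x m
eps, delta = 0.05, 0.02                             # qubit / syndrome BSC
n, nk = H.shape[1], H.shape[0]

# Precompute a minimum-weight error e*(s) for every syndrome s.
e_star = {}
for bits in product((0, 1), repeat=n):
    e = np.array(bits, dtype=int)
    s = tuple(int(v) for v in e @ H.T % 2)
    if s not in e_star or e.sum() < e_star[s].sum():
        e_star[s] = e

def map_decode(z_obs):
    """Return (s_hat, e_hat) maximizing P(z_obs | s) * P(e*(s))."""
    best_s, best_score = None, -np.inf
    for s_bits in product((0, 1), repeat=nk):
        s = np.array(s_bits, dtype=int)
        z = s @ G_s % 2
        d = int(np.sum(z != z_obs))                 # syndrome-channel errors
        w = int(e_star[s_bits].sum())               # weight of e*(s)
        score = d * np.log(delta/(1-delta)) + w * np.log(eps/(1-eps))
        if score > best_score:
            best_s, best_score = s_bits, score
    return np.array(best_s), e_star[best_s]

print(map_decode(np.array([1, 1, 1])))              # decode a noisy syndrome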
Simulations are performed under both MAP and degenerate MAP decoding for fixed syndrome error probability δ. We determine the probability of decoding failure P_e versus ϵ via Monte Carlo simulations.A decoding failure is declared whenever a logical error occurs, i.e., when the decoded error pattern is not in the same coset as the one introduced by the channel. Note that the redundant rows of the parity-check matrix may have different weights in our experiments. Therefore, for a fixed q in (<ref>), δ in (<ref>) will change depending on the code that is considered. For a fair comparison, different codes need to be compared for different values of δ. §.§ [[16,2]] Product CodeWe consider the [[16,2]] quantum product code <cit.>. The 8× 16 binary matrix H_Xin (<ref>) is given byH_X=[[ 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1; 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0; 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0; 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0; 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1; ]] .Note that H_X by construction already contains one redundant weight-4 row. Additional redundant rows of weight 6 are generated by exploiting the code structure. By splitting the rows of H_X into two sets, one containing the first four rows and one the others, in total 16 weight-6 rows can be obtained with any linear combination of one element from each set. As a result we obtain a (24,7) syndrome error correcting code with d_min = 8. For completeness, thesubmatrix A of the code's generator matrix G_s=[ I|A] is A= [[ 1 1 1 1 0 0 0 0 1 0 0 0 1 0 0 0 1; 1 0 0 0 1 1 1 1 0 0 0 0 1 0 0 0 1; 1 0 0 0 1 0 0 0 1 1 1 1 0 0 0 0 1; 1 0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0; 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1; 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1; 1 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 ]] . We now remove three of the weight-6 redundant rows and obtain a (21,7) syndrome error correcting code with d_min=6. Likewise, we consider H_X without the last redundant row and repeat the seven syndrome measurements three times. The result is a (21,7) syndrome error correcting code with d_min=3. For a fair comparison between the codes, a higher δ has to be considered for owing to the higher stabilizer weights. Figure <ref> shows the probability of decoding failure versus ϵ for different values of δ. While for we consider δ=0.05 and δ=0.08, for we consider δ = 0.0654 and δ =0.1 due to the additional weight-6 rows. While shows a visible performance gain for both values of δ, deg-MAP decoding does not yield performance benefits compared to MAP decoding for the current setup.We provide further code design examples of syndrome error correcting codes. First, we consider H_X including the last redundant row, and repeat the eight measurements three times. Formally, the resulting (24,7) syndrome error correcting code with d_min=6 can be described as the serial concatenation of a (8,7) single parity-check code with d_min=2 and a (24,8) code with d_min=3. The (24,8) code repeats each of the eight information bits three times. Its generator matrix is I ⊗ [1 1 1], where I is an 8 × 8 identity matrix. Second, we consider H_X without the last redundant row and repeat the seven syndrome measurements four times. The result is a (28,7) syndrome error correcting code with d_min=4. Third, we consider and repeat only the first four measurements once. The resulting code is a concatenation of a (24,7) code with d_min=8 and a (28,24) code with d_min=1. The concatenated code has parameters (28,7) and d_min=9. 
The probability of decoding failure versus ϵ is shown in Fig. <ref>. Again, for a fair comparison, δ has been adjusted to account for the changing stabilizer weights. Observe that plain repetition of the syndrome measurements shows visible losses while using redundant measurements, including also concatenated schemes (with an inner repetition code), show performance improvements. Again, degenerate MAP decoding does not yield visible advantages. §.§ [[18,2]] Toric CodeThe second code investigated is the[[18,2]] toric code <cit.> withH_X = [[ 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0; 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0; 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1; 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 0; 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0; 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 0 0 0; 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0 0; 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 1 0; 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 0 0 1 ]] H_X has only one redundant row. The last row of H_X in (<ref>) can be obtained as the sum of all other rows. Overall, we can construct additional 24 weight-6 redundant rows yielding a (33,8) code with d_min=10. Thesubmatrix A of the code's generator matrix G_s=[ I|A] is A=[ [ 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 1 1 1 1 1 1; 1 1 0 0 0 1 1 1 0 0 0 0 0 0 0 1 0 1 0 1 1 1 1 1 1; 1 0 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 1 1 1 0; 1 0 0 1 0 0 0 0 0 1 1 1 0 0 0 0 1 0 1 1 1 1 1 1 1; 1 0 0 0 0 0 1 0 0 1 0 0 1 1 0 0 0 1 1 1 1 1 1 1 1; 1 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 1 1 0 1 1 0 1; 1 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 0 0 1 1 0 1 1; 1 0 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 0 1 1 1 ]] .Next, we show examples of codes with different code parameters. First, removing 9 weight-6 redundant rows we obtain a (24,8) code with d_min=6. Recall, that the removal is done such that the minimum distance is kept as large as possible. Likewise, a (24,8) code can be constructed by repeating the first 8 non-redundant syndrome measurements 3 times. This code has, however, d_min=3. Second, removing 6 rowswe obtain a (27,8) code with d_min=8. A (27,8) code (24,8) codecan be constructed by repeating all 9 syndrome measurements. This code can be seen as a serial concatenation of a (9,8) single parity-check code and a code with generator matrix I ⊗ [1 1 1], I being a 9× 9 identity matrix. Third, removing one weight-6 row we get a (32,8) code with d_min=9. A (32,8) code with d_min=4 can be obtained by repeating the 8 non-redundant syndrome measurements 4 times.The probability of decoding failure versus ϵ for all codes is depicted in Fig. <ref>. Overall, we observe the same trends as for the product code. Degenerate MAP decoding does not show visible advantages compared to classical MAP decoding. Also in this case, repeating measurements yields the worst performance.By contrast, the concatenation of a repetition code with other syndrome correction codes yields good results, and it has the advantage that stabilizer weights can be kept low. The best results for certain code parameters are obtained by choosing appropriate subsets of weight-6 stabilizers which define the syndrome error correcting code.§ CONCLUSIONSWe studied single-shot decoding of quantum CSS codes with faulty syndrome measurements and re-stated the problem as a joint source-channel coding problem. By introducing low-weight redundant rows in the CSS code's parity-check matrix, a syndrome error correcting code is obtained which provides additional resilience against faulty syndrome measurements. 
By means of code examples, we illustrated that employing a syndrome error correcting code based on redundant rows outperforms repeated syndrome measurements. Such codes can also be concatenated with an additional repetition code. In our experiments, we considered classical MAP decoding, which identifies the most likely Pauli error, and the more complex degenerate MAP decoding, which subdivides valid error patterns into cosets and identifies the most likely coset. In our case, the more complex degenerate MAP decoding turned out to perform similarly to classical MAP decoding. Experiments with more realistic error models of quantum circuits are left for further work. § ACKNOWLEDGEMENTS The authors would like to thank Davide Orsucci for his valuable comments and Gianluigi Liva for helpful discussions.
http://arxiv.org/abs/2310.18138v1
{ "authors": [ "Aldo Cumitini", "Stefano Tinelli", "Balázs Matuz", "Francisco Lázaro", "Luca Barletta" ], "categories": [ "quant-ph", "cs.IT", "math.IT", "68P30" ], "primary_category": "quant-ph", "published": "20231027133549", "title": "Optimal Single-Shot Decoding of Quantum Codes" }
1Shanghai Astronomical Observatory, Shanghai, 200030, People^'s Republic of China2Astronomical Institute, Graduate School of Science, Tohoku University, Aoba, Sendai 980-8578, Japan3Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai 980-8578, Japan4Department of Astronomy, Columbia University, 550 W. 120th St., New York, NY, 10027, USA5Department of Physics, Columbia University, 550 W. 120th St., New York, NY, 10027, USA6Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800, USA7Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA8Department of Physics, University of Florida, PO Box 118440, Gainesville, FL 32611, USA E-mail: [email protected] The astrophysical origin of stellar-mass black hole (BH) mergers discovered through gravitational waves (GWs)is widely debated.Mergers in the disks of active galactic nuclei (AGN) represent promising environments for at least a fraction of these events,with possible observational clues in the GW data.An additional clue to unveil AGN merger environmentsis provided by possible electromagnetic emission frompost-mergeraccreting BHs.Associated with BH mergers in AGN disks,emission from shocks emerging aroundjets launched by accreting merger remnants is expected.In this paper we compute the properties of the emission produced during breakout and the subsequent adiabatic expansion phase of the shocks, andwe then apply this model to optical flaressuggested to be possibly associated with GW events.We find that the majority of the reported flarescan be explained bythe breakout and the shock cooling emission.If these events are real, then the merging locations of binariesare constrained depending on the emission processes.If the optical flaresare produced by shock cooling emission, they would displaymoderate color evolution,possiblycolor variations among different events,a positive correlation between the delay time and the duration of flares,and accompanying breakout emission in X-ray bands before the optical flares.If the breakout emission dominates the observed lightcurve,it is expected thatthe color is distributed in a narrow range in the optical band, andthe delay time from GW to electromagnetic emission is longer than ∼ 2 days.Hence, further explorations of the distributions of delay times, color evolution of the flares, and associated X-ray emission will be useful to test theproposed emission model for the observed flares.§ INTRODUCTION The astrophysical pathways to black hole (BH) mergers discovered by the LIGO <cit.>, Virgo <cit.>, and KAGRA <cit.>gravitational wave (GW) observatorieshave been actively debated <cit.>.Various scenarios have been proposed, includingisolated binary evolution <cit.>evolution of triple or quadruple systems <cit.>,dynamical evolution in star clusters<cit.>,and compact objects in active galactic nucleus (AGN) disks <cit.>.In an AGN disk,BHs are embedded in due to capture via dynamical interactions between the nuclear star cluster and the AGN disk <cit.> and by in-situ star formation <cit.>.The AGN disk environmenthelps to bring the BHs closer together <cit.> and hence form binaries <cit.> which thenmerge over relatively short time scales.Comparisons to the observed BH masses, spins and merger rate indicate that a sizable fraction of the observed mergers may indeed originate in AGN disks <cit.>.The AGN channel could also explain some of the peculiar detections, such as those with a high mass <cit.> and possibly high eccentricity ( 
but see ). Due to the gas-rich merger environment, a key signature of the AGN channel is the possibility of electromagnetic emission accompanying the GW signal from the merger <cit.>.To explore this possibility, electromagnetic follow-up observations have been carried out for many of the mergers, withnine counterpart candidates suggested so far, including seven optical flares <cit.> and two gamma-ray flares <cit.>.Recently, several studies have investigated the electromagnetic emission from a variety of transients emerging from AGN disks.Many studies <cit.>focused on the radiation from gamma-ray bursts, while <cit.> and<cit.> discussed the electromagnetic signatures expected from accretion-induced collapse of neutron stars and white dwarfs, respectively.<cit.>, <cit.>, and <cit.> studied the properties of tidal disruption of stars by BHs,while <cit.> investigated supernova explosions,and <cit.> and <cit.> estimated the electromagnetic emission produced by thermal radiation and/or outflow from circum-binary disks in AGN disks. Several recent studies have also investigated whether transients from BHs mergingin AGN disks could explain the optical flare, ZTF19abanrhr <cit.>, associated with the BH merger GW190521 detected in GWs. <cit.> discussed emission from shocks caused bycollision between gas bound to the merger remnant and unbound gas after recoil kicks due to anisotropic radiation of GWs. <cit.> assessed the net luminosity and timescales for gas accretion induced byrecoil kicks.<cit.> considered flares emerging from shocks in a circum-BH disk due to recoil kicks.<cit.>,<cit.>, and <cit.>, respectively, considered thermal and non-thermal emission from bubbles and bubble evolution around BHs formed by strong outflowsconsidering continuous and episodicsuper-Eddington accretion.<cit.> further considered emission from shocks emerging due to interactions of Blandford-Znajek jets <cit.> launched from accreting BHs to the broad line regions,<cit.> considered free-free and bound-free emission from gas shocked due to interaction of the jets and AGN disk gas,and <cit.> estimated gamma-ray, neutrino, and cosmic-ray emission from internal shocks in the jets.<cit.>, <cit.>, <cit.>, and <cit.> estimated the association significance of ZTF19abanrhr to GW190521. In this paper, we develop an emission model based on the scenario proposed by <cit.> and discuss whether or not emission based on this scenario can explain some of the optical flares reported in <cit.>.<cit.> indicated that a Blandford-Znajek jet can be produced from BHs embedded in AGN disks and investigated its influence on the AGN disk structure. Due to the high pressure of the shocks emerging around the jet, a cavity is created around the BHs.Just before the jet breaks out of the AGN disk,photons in the shocked gas begin to escape. These photons can be observed as breakout emission <cit.>, whose properties have beeninvestigated in <cit.>. The BHs can maintain the jets even after theybreak out from the AGN disk, as long as there is leftover circum-BH disk gas. Once this is depleted, the BHs can no longer power the jets. This is then followed by an inactive phase which lasts until gas is replenished onto the BH, and the jet is launched again, with the cycle hence repeating. In the case that BHs merge in the cavity while accreting (upper panel of Fig. 
<ref>), <cit.> (hereafter Paper I) predicted that electromagnetic emission is often produced in association with the BH merger. This is because the jet direction can be reoriented following a merger (<ref>), and strong shocks and emission are produced soon after the jet reorientation. Paper I investigated the properties of breakout emission from a jet head associated with BH mergers, and found that this model can explain various properties, including the luminosity, delay time, duration, and color of the electromagnetic transients ZTF19abanrhr, GW150914-GBM, and LVT151012, as well as why the transients began brightening only after the merger. <cit.> and <cit.> also estimated the neutrino emission from the breakout of the jets produced in association with BH mergers. In this paper, we additionally consider the shock cooling emission, which is produced in a subsequent adiabatic expansion phase of the shocked gas (e.g., for supernovae and for gamma-ray bursts), in addition to the breakout emission, which is produced in an early phase of the shocked gas (see Fig. <ref> for a schematic picture). We present the properties of the emission in both phases, and we then apply this model to the flares reported in <cit.>. We further discuss how to test the proposed emission model in future observations. The rest of this paper is organized as follows. In <ref>, we describe a model for producing electromagnetic flares associated with GW emission and a way to constrain physical properties of the flares from the observations using this model. We present our results in <ref>, discuss how to test the model in <ref>, and summarize our conclusions in <ref>. § METHOD First we describe the model itself. We then specialize to discuss how to derive the model parameters from the observed properties of the flares, that is, the delay time (t_delay), the duration (t_duration), the luminosity of the flare (L_obs), the merger remnant mass (m_BH), the SMBH mass (M_SMBH), and the AGN luminosity (L_AGN). In our analysis, we assume that the shocks produced by collisions between the jet and the AGN gas are characterized by non-relativistic regimes. This is because, in both the breakout and the shock cooling emission, flares with delay times and durations of ≳ 10 days (Table <ref>) are usually characterized by this regime (Paper I, Tables <ref> and <ref>). As possible processes for explaining the properties of the optical flares, we consider the breakout emission from the jet head (the breakout emission scenario) and shock cooling emission from the cocoon (the shock cooling emission scenario). Note that we use the Shakura-Sunyaev model <cit.> for the accretion disk in the shock cooling emission scenario, and the Thompson disk model <cit.> in the breakout emission scenario. This is because the position of the BH from the central supermassive BH (SMBH) is sub-parsec for the former (Table <ref>) and a few parsec for the latter case (Table <ref>), and the mechanisms of angular momentum transfer and the disk properties are likely different in the two regions, with the Thompson disk model better suited than the Shakura-Sunyaev one to describe the outer disk. §.§ Shock formation Shocked gas is responsible for both the breakout and the cooling emission, and hence we begin by describing the process of shock formation. We first describe the accretion rate onto BHs in an AGN disk, which is evaluated by the Bondi-Hoyle-Lyttleton rate. For a BH embedded in a cool AGN disk, the Bondi-Hoyle-Lyttleton radius (r_BHL) is large, and usually exceeds the scale height of the AGN
disk (H_ AGN) and the Hill radius (r_ Hill). Accounting for the geometrical limitation of the capture regions by the shear motion and the vertical height of the AGN disk,the capture rate of gas by the BH is given byṁ_ BHL =f_ c r_ w r_ hρ_ AGN (c_ s,AGN^2+v_ BH^2+v_ sh^2)^1/2 ≃  3× 10^-4 / yr (f_ c/10) (H_ AGN/0.003pc) (R_ BH/1  pc)^1/2(ρ_ AGN/4× 10^-17g/cm^3) (m_ BH/10 )^2/3(M_ SMBH/10^6 )^-1/6,<cit.>, whereρ_ AGN is the gas densityand c_ s,AGN is the sound speedof the AGN disk at the position of the BH (R=R_ BH, where R is the distance from the SMBH),v_ BH is the velocity of the BH with respect to the local motion of the AGN disk,v_ sh=r_ wΩ is the shear velocityat the capture radiusr_ w= min(r_ BHL, r_ Hill),Ω=(GM_ SMBH/R_ BH^3)^1/2 is the angular velocity of the BH,r_ h= min(r_ w, H_ AGN) is the capture height,G is the gravitational constant,and f_ c∼ 10 is a normalization constant <cit.>.In the second line of Eq. (<ref>),we assumev_ BH<c_ s,AGN and v_ BH<v_ sh.Note that r_ w(c_ s,AGN^2+v_ sh^2)^1/2≈ r_ HillH_ AGNΩ is used to derive the right hand side, which is approximately satisfied regardless of whether c_ s,AGN is larger or smaller than v_ sh. By considering the reduction or enhancement with respect to the Bondi-Hoyle-Lyttleton rate,we parameterize the fraction of the accretion rate onto the BH (ṁ) over the Bondi-Hoyle-Lyttleton rate (ṁ_ BHL) by f_ acc=ṁ/ṁ_ BHL as in Paper I.For example, low f_ acc may be predicted due to winds froman accretion disk with a super-Eddington rate.On the other hand,recent simulations suggest that the conversion to windis moderate <cit.> for accretion flows in which the circularizationradius (where gas is circularized after being captured by a BH) is much larger than the trapping radius (within which photons are advected to a BH without escaping), as is the case for BHs embedded in an AGN disk. In addition, the accretion rate onto a BH in a cavity during the active phases is estimated to be lower by a factor of a few compared to that without a cavity <cit.>. From rapidly accreting and spinning BHs in an AGN disk,a Blandford-Znajek jet is expected to be launched, as outlined in Appendix A.1 of <cit.>.The jet kineticluminosity(L_ j) is proportional to the mass accretion rate onto the BH (ṁ) as L_ j=η_ jṁ c^2,wherec is the speed of light, and η_ j is the jet conversion efficiency, which is approximated by η_ j∼ a_ BH^2 for a magnetically dominated jet <cit.>,a_ BH is the dimensionless spin of the BH,and a_ BH∼ 0.7 for the merger remnants <cit.>.At a BH merger (t=0, where t is the time from the merger),the jet direction is reoriented and can collide with unshocked AGN gas in the following ways.Once two BHs merge, the BH spin direction is reoriented if the angular momentum direction of the merging binary is misaligned with respect to the spin directions of the merging BHs. This is expected for mergers in an AGN disk due to frequent binary-single interactions <cit.> and/or inhomogeneity of AGN disks.Since the jet is injected in the direction of the BH spin,if the jet is not aligned with the circum-BH disk due to a strong jet power <cit.>,the jet propagates in the direction of the BH spin, and can collide with AGN gas.Even if the jet aligns with the angular momentum direction of the circum-BH disk due to magnetic interactions on average, the jet can precess by interacting with magnetic fieldswhile the angular momentum directions of the BH spin and the circum-BH disk are still misaligned with one other <cit.>. 
Due to the precession, the jet can collide with unshocked gas after mergerduring the first precession cycle (after that, the opening angle of the cavity becomes wider than that of the precession).The other possibility is that once BHs merge,a merger remnant BH receives a recoil kick in an almost random direction due to anisotropic radiation of GWs. Then shocks form in a circum-BH disk within ≲ 10^13  cm <cit.>, and shocked gas accretes onto the remnant with the angular momentum direction being modified as a result of shocks <cit.>.Due to magnetic interactions <cit.>, the jet is then aligned with the angular momentum direction of the circum-BH disk, which is in turn misaligned with respect to the jet direction before the merger, and the jet can therefore collide with AGN gas. Once the jet collides with unshocked AGN gas around a BH, two shocks form: a forward shock propagating in the AGN disk and a reverse shock in the jet. The region sandwiched by the two shocks is called the jet head.In the jet collimated regime considered in our work,the dimensionless velocity of shocked gas in the jet head at the shock breakout is estimated as <cit.>β_ h∼(L_ j/ρ_ AGN t_ break^2 θ_0^4 c^5)^1/5,where θ_0 is the opening angle of the injected jet, andt_ break isthe delay time between the production of the jet andthe shock breakout, and is roughly given by t_ break ∼3/5H_ AGN/β_ FScf_ corr∼1yr(H_ AGN/5×10^16 cm) (β_ FS/0.1)^-1(f_ corr/3).Here β_ FS∼ (7/6)β_ h is the dimensionless velocity of the forward shock of the jet head,and f_ corr is the correction factor for the delay time due to the inclination of the jet and the geometry of the cavity.Ignoringgeometrical corrections due to the existence of the cavity,we assume f_ corr= min[1/ cosi,1/θ_0 sini],consideringboth the cases in which the shocked gas (cocoon) breaking out of the AGN disk is from its head and from its sides, where i is the angle between the jet and the angular momentum direction of the AGN disk.With this prescription, f_ corr ranges between ∼ 1–1/θ_0. After the shock breakout,radiation is produced.Early emission is characterized by the breakout emission as described in  <ref> and Paper I,while later emission is characterized by the shock cooling radiation described in  <ref>.Since breakout emission is associated with the shock propagation, both non-thermal and thermal emissions are expected.On the other hand, we only consider thermal emission from shock cooling, and neglect any non-thermal emission from expanding ejecta. The latter can also be bright if additional strong shocks form when the ejecta collide with the interstellar medium.We use the non-thermal component of the breakout emission and the thermal shock cooling emission to model the flares reported by <cit.>, since these are bright in optical bands.Note that the peak frequency of the thermal breakout emission from the jet head falls above the X-ray bands,which is constrained by the duration of the flare (e.g. Eqs. <ref> and <ref> below), and hence it cannot reproduce the optical flares. 
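The two relations above for β_h and t_break are coupled through β_FS; for illustration, they can be solved by a simple fixed-point iteration, as in the following Python sketch. All numerical inputs are assumed values chosen only to show the procedure, not fits to any of the events discussed below; the iteration converges because β_h depends only weakly (to the 2/5 power) on the current estimate of t_break.

# Sketch: self-consistent solution of the two relations above,
#   beta_h ~ (L_j / (rho_AGN * t_break^2 * theta0^4 * c^5))^(1/5),
#   t_break ~ (3/5) * (H_AGN / (beta_FS * c)) * f_corr,  beta_FS ~ (7/6) beta_h.
# All input values are illustrative assumptions (cgs units).
c = 3.0e10                 # speed of light [cm/s]
yr = 3.15e7                # one year [s]
L_j = 1.0e44               # assumed jet kinetic luminosity [erg/s]
rho_AGN = 1.0e-16          # assumed AGN disk density [g/cm^3]
theta0 = 0.2               # assumed jet opening angle
H_AGN = 5.0e16             # assumed AGN disk scale height [cm]
f_corr = 3.0               # assumed geometric correction factor

t_break = 1.0 * yr         # initial guess; iterate to the fixed point
for _ in range(50):
    beta_h = (L_j / (rho_AGN * t_break**2 * theta0**4 * c**5))**0.2
    beta_FS = (7.0 / 6.0) * beta_h
    t_break = 0.6 * H_AGN * f_corr / (beta_FS * c)

print(f"beta_h  ~ {beta_h:.2f}")         # ~0.15 for these assumed values
print(f"t_break ~ {t_break/yr:.2f} yr")  # ~0.5 yr for these assumed values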
§.§ Shock cooling emission §.§.§ Physical model In the shock cooling emission, photons diffuse and are released from deep inside the shocked gas, as observed in supernovae <cit.>. To present the properties of this emission, we first describe the evolution of the shocked gas. At the breakout of the shocked gas (cocoon) at t = t_break, the thermal energy of the cocoon is roughly given by e_BO = (1/2) m_BO β_c^2 c^2, where β_c ≃ β_h θ_0 is the dimensionless velocity of the cocoon <cit.>, and m_BO is the mass of the cocoon at the breakout, roughly given by m_BO ≃ 2π H_AGN^3 f_corr^3 θ_0^2 ρ_AGN, considering the cylindrical shape of the cocoon with height H_AGN f_corr and radius H_AGN f_corr θ_0. After the shock passage (even before the shock breakout), the internal energy of the shocked gas is converted to kinetic energy due to the expansion caused by the radiation pressure. Afterwards, the shocked ejecta expands nearly spherically with velocity v_ej ∼ β_c c. As the ejecta expands to size R, the optical depth of the spherically expanding ejecta declines as <cit.> τ_ej ≃ κ_ej m_BO/(4π R^2), where κ_ej is the ejecta's opacity. We adopt the Thomson scattering opacity of κ_ej ∼ 0.4 cm^2 g^{-1}, assuming ionized gas. If the ejecta is initially optically thick, as assumed in our fiducial model, photons deep inside the ejecta can diffuse out once the optical depth is reduced to τ_ej = c/v_ej at the time t_diff = (κ_ej m_BO/(4π c v_ej))^{1/2}, when the corresponding radius is R_diff = t_diff v_ej and the density is ρ_diff = c/(κ_ej R_diff v_ej) = c/(κ_ej t_diff v_ej^2). Due to the adiabatic expansion, the thermal energy at the diffusion radius is reduced by a factor of ∼ (R_BO^3/R_diff^3)^{γ-1}, where γ = 4/3 is the adiabatic index for radiation-pressure-dominated gas, R_BO = (V_BO/4π)^{1/3} is the typical size, and V_BO is the volume of the cocoon at the shock breakout. Hence, the luminosity at τ_ej = c/v_ej is related to e_BO as L_SC = (e_BO/t_diff)(R_BO/R_diff). Using R_BO in Eq. (<ref>), the unperturbed AGN density at the position of the BH (R = R_BH) can be estimated via ρ_AGN ∼ ρ_diff (R_diff/R_BO)^3. Furthermore, from the AGN density, the inflow rate of the AGN disk at R = R_BH can be calculated via Ṁ_inflow = 4π H_AGN^3 ρ_AGN Ω α, assuming an alpha viscosity with parameter α, where Ω is the orbital angular velocity of the BH around the SMBH. Here, the AGN luminosity (L_AGN) is related to the inflow rate as L_AGN/(η_rad c^2) = Ṁ_SMBH = f_cons Ṁ_inflow, where Ṁ_SMBH is the accretion rate onto the SMBH, η_rad is the radiation efficiency, and f_cons ≡ Ṁ_SMBH/Ṁ_inflow ≤ 1 is the fraction of the inflow rate at R = R_BH that is accreted onto the SMBH. The radiation temperature is determined as follows. At τ_ej = c/v_ej, the energy density of the radiation within the ejecta shell is u_γ = L_SC τ_ej/(4π R_diff^2 c). The blackbody temperature of the radiation is then given by T_BB = (u_γ/a)^{1/4}, where a is the radiation constant. Note that the radiation pressure dominates the gas pressure at t ≲ t_diff for the models considered in this paper. Since the flares in <cit.> were found by the Zwicky Transient Facility (ZTF), and the observed optical luminosity (L_obs) can be roughly estimated as the total observed energy of the flare in the optical band divided by the duration (provided in Table 3 of ), we require that L_obs match the luminosity in the ZTF bands as L_obs = L_SC [∫_{ν_down}^{ν_up} (2hν^3/c^2)/(e^{hν/(k_B T_BB)} - 1) dν] / (σ T_BB^4),
Note that the luminosities in the r and g bands of the ZTF are not directly derived from Eq. (<ref>). To calculate them (panels c and d of Fig. <ref>), we adjust the limits of the integral in this equation to cover the frequency range of the relevant band.

§.§.§ Derivation of model parameters
Using Eqs. (<ref>)–(<ref>), we can then determine the 17 quantities β_ c, ρ_ AGN, R_ BH, H_ AGN, L_j, ṁ_ BHL, t_ break, e_ BO, m_ BO, τ_ ej, ρ_ diff, R_ BO, R_ diff, u_γ, T_ BB, L_ SC, and Ṁ_ inflow given the input parameters θ_0, f_ acc, a_ BH, α, η_ rad, f_ corr, and f_ cons, and the observed properties L_ obs, L_ AGN, and t_ diff. For example, combining Eqs. (<ref>)–(<ref>), the velocity of the cocoon is calculated as β_ c= [32 π^4 f_ jet/BHL^3 f_ cons A c^10 G^2 m_ BH^2 t_ diff^2 αη_ rad/9 L_ AGN L_ SC^3 κ_ ej^4 f_ corr^6]^δ, where f_ jet/BHL≡ f_ c f_ accη_ j is a parameter related to the jet power, and A and δ are variables depending on the velocity of the cocoon. For β_ c/θ_0<1, A=(35/18)^6 θ_0^-3 and δ=1/2, and otherwise (as long as the jet is in the collimated regime) A=θ_0 and δ=1/8. To determine β_ c using Eq. (<ref>), L_ SC needs to be derived using Eqs. (<ref>)–(<ref>). To solve these equations consistently, we calculate β_ c in an iterative way using Newton's method. Then, by incorporating β_ c into Eqs. (<ref>)–(<ref>), the other parameters can also be determined. We further adjust f_ jet/BHL so that the scale height of the AGN disk is equal to the height expected for the Shakura-Sunyaev disk, given α. Such a disk structure is expected to be realized in regimes where the Toomre parameter (Eq. <ref>) becomes greater than 1.

§.§ Breakout emission
§.§.§ Physical model
Long before photons deep inside the ejecta escape (t∼ t_ break+t_ diff), at around the time that the shock arrives at the surface of the ejecta (t∼ t_ break), bright breakout emission is released. Since breakout emission of the jet head is associated with the shock propagation, both non-thermal and thermal emission are expected. Such non-thermal emission can explain the optical flares found by <cit.>, as investigated in Paper I. In the following we describe how to reconstruct the model properties for the breakout emission.

Photons inside the shock start to diffuse out from the AGN disk and the breakout emission begins to be released when the photon diffusion time from the shock front, t_ diff,sh∼ d_ edge^2κ_ shρ_ AGN/c, becomes equal to the dynamical timescale of the shock, t_ dyn,edge∼ d_ edge/(β_ FS c), where d_ edge is the thickness of the AGN disk above the shock, and κ_ sh is the opacity of the shocked gas. By equating these timescales, the thickness at the breakout is given by d_ edge,BO∼ 1/(β_ FSκ_ shρ_ AGN), and the duration of the emission from a breakout shell is t_ diff,BO ∼ 1/(β_ FS^2 cκ_ shρ_ AGN) ∼ 3 yr (β_ FS/0.1)^-2(ρ_ AGN/1×10^-16g cm^-3)^-1, where we adopt κ_ sh∼0.4 cm^2/g considering the ionization of gas by photons released from the shocks <cit.>.
If we assume that the AGN disk is gravitationally unstable at the position of the BH, which is expected for the delay time and the duration of the flares (Paper I),the density of the AGN disk is related to the position of the BH through <cit.>Ω^2/√(2)π G ρ_ AGN=Q≃ 1,where Q is the Toomre parameter.Note thatAGN disks at the BH positions are predicted to be Toomre unstable <cit.>.§.§.§ Derivation of model parameters Using Eqs.(<ref>)–(<ref>)and(<ref>)–(<ref>),we can determine the 6 variables β_ h, ρ_ AGN, R_ BH, H_ AGN, L_j, and ṁ_ BHL, given θ_0, f_ acc, and η_ j.For example, the velocity of the jet head is calculated by β_ h=[5 (7/6)^4/3 f_ jet/BHL m_ BH^2/3G^1/2(c κ_ sht_ diff,BO/√(2)π)^1/6/3^5/3f_ corrt_ breakθ_0^4 c^2]^3/11.Then, by incorporating β_ h intoEqs.(<ref>)–(<ref>)and(<ref>)–(<ref>),the other parameters can be also determined. In the case of breakout emission,R_ BH is found to be large ( <ref>).At such large scales, efficient transfer of angular momentum of the AGN gas isrequired for SMBH accretion<cit.>.Following <cit.>,we assume that the inflow rate of the AGN disk is parameterized by Ṁ_ inflow=4π R_ BHH_ AGN^2 ρ_ AGNΩ m . §.§ EvolutionIn this section, we describe the evolution of the luminosity and the temperature for thermal emission from the cocoon in the breakout and shock cooling emission phases.Referring to previous studies <cit.>,we assume that the luminosity evolves as L(t') ∼e_ BO/t_ dyn {[ 0    for  t'≲ 0 ,;1    for  0≲ t'≤ t_ pl ,;(t'/t_ pl)^-4/3    for  t_ pl≤ t' ≲ t_ sph ,;(t_ sph/t_ pl)^-4/3(t'/t_ sph)^-2.28n-2/3(1.19n+1);for  t_ sph≤ t' ≤ t_ diff,; t_ dyn/t_ diffR_ BO/R_ diff exp[-1/2(t'^2/t_ diff^2-1)];for  t_ diff≤ t' , ].where t'=t-t_ break,t_ pl=1/(β_ c^2 c κ_ ejρ_ AGN) is the duration of emission from a breakout shell in the cocoon,t_ sph≃ R_ BO/β_ cc is the transition between the planar and spherical geometries of the breakout shell,t_ dyn=H_ AGN/β_ c c is the dynamical timescale, and n is the power law slope of the vertical AGN gas density profileat the height at which photons begin to break out. The second and third rows correspond to the luminosity in the planar phase (before the shocked gas doubles its radius by expansion) for the breakout emission, the fourth row corresponds to that in a spherical phase (after the shocked gas doubles and before the shock cooling emission phase), and the fifth row corresponds to that in a shock cooling emission phase. Similarly following previous studies, we assume that the temperature evolves asT(t') ∼ T_ BB,0 {[ 1    for  0≲ t'≤ t_ pl ,; (t'/t_ pl)^-2/39n+5/17n+9    for  t_ pl≤ t' ≲ t_ sph ,;(t_ sph/t_ pl)^-2/39n+5/17n+9(t'/t_ sph)^n_ T; for  t_ sph≤ t' ≤ t_ diff,; T_ BB/T_ BB,0(t'^2/t_ diff^2-1)^-1/2    for  t_ diff≤ t' , ].where T_ BB,0=(7ρ_ AGNβ_ c^2 c^2/2a)^1/4is the black body temperature of the breakout shell,n_ T is the power-law index for the temperature evolution in the spherical phase, andn_ T =-(18.48n^2+20.69n+6)/[(1.19n+1)(22.32n+17)] for an expanding spherical ejecta <cit.>.The validity of these analytical formulae has been tested by recent numerical simulations <cit.>. Since Eqs. 
(<ref>) and (<ref>) are formulae for an expanding ejecta with a spherically symmetric evolution, some modifications are required in the case of emission from an expanding ejecta with a cylindrical shape like a cocoon, especially around the transition between the planar and spherical phases. For simplicity, we determine n and n_ T so that the luminosity and the temperature, respectively, evolve smoothly at t'=t_ diff.

§.§ Parameters
In this section we describe the fiducial values for the model parameters and the observed properties of the flares used in the modelling. As fiducial values, we set the opening angle of the injected jet to θ_0=0.2 <cit.>, the radiation efficiency to η_ rad=0.1, the correction factor for the delay time to f_ corr=3 (which roughly corresponds to the median value considering a cavity with an aspect ratio of ∼ 1 and isotropic jet directions), the consumption fraction of the inflow rate to f_ cons=1, the alpha-viscosity parameter to α=0.1, and the angular momentum transfer parameter in the outer regions of the AGN disk to m=0.1. For the computation of the shock cooling emission we adjust the value of f_ jet/BHL≡ f_ c f_ accη_ j so that the scale height of the AGN disk is equal to the height expected for the Shakura-Sunyaev disk ( <ref>), while for the breakout emission we set it to f_ jet/BHL=10, assuming f_ c=10 <cit.>, η_ j=0.5, and f_ acc=2, considering moderate enhancement of accretion due to shocks caused by recoil kicks <cit.>.

When modeling the flares with breakout emission, we assume that the flare duration (t_ duration) corresponds to the exponential decay time (t_e) in <cit.>. To derive the delay time (t_ delay), we identify the day at which the flare luminosity peaks using a plot digitizer[https://automeris.io/WebPlotDigitizer/]. Due to the difficulty of identifying the peak, t_ delay contains uncertainties. We adopt M_ SMBH and m_ BH from Tables 3 and 4 of <cit.>. The optical luminosity is derived by dividing the total observed energy in the optical band by the sum of the rise time (t_g) and the decay time (t_e) presented in <cit.>. We calculate the luminosity of the AGNs (L_ AGN) by inferring the flux of the AGNs at ∼ 4000 Å from Fig. 6 of <cit.>, estimating the luminosity distance from the redshift of the GW events assuming a Hubble constant of 67.8  km s^-1 Mpc^-1, a matter density today of 0.24, and a cosmological constant today of 0.74 <cit.>, and adopting a bolometric correction factor of 5 <cit.>. The values for the observed quantities adopted in this paper are listed in Table <ref>. Note that the delay time (t_ delay) corresponds to t_ break, and the duration of a flare (t_ duration) corresponds to t_ diff,BO.
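For concreteness, the conversion from the inferred AGN flux at ∼4000 Å to L_ AGN with the quoted cosmological parameters and bolometric correction can be written as a short sketch; the flux value and redshift used below are placeholders, and spatial flatness is assumed for simplicity.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.8 * 1.0e5 / 3.086e24      # Hubble constant [s^-1] (67.8 km/s/Mpc)
Om, OL = 0.24, 0.74               # matter / cosmological-constant densities (as quoted)
c_cm = 2.998e10
BC = 5.0                          # bolometric correction at ~4000 A

def lum_distance(z):
    """Luminosity distance [cm]; curvature is neglected for simplicity."""
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + OL)
    D_c, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (c_cm / H0) * D_c

def L_agn(flux_4000, z):
    """Bolometric AGN luminosity from the observed ~4000 A flux [erg/s/cm^2]."""
    dL = lum_distance(z)
    return BC * 4 * np.pi * dL**2 * flux_4000

# placeholder example: flux of 1e-13 erg/s/cm^2 at z = 0.4
print(f"L_AGN ~ {L_agn(1e-13, 0.4):.2e} erg/s")
```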
Conversely, when modeling the flares with shock cooling emission, we assume that the duration of a flare (t_ duration) corresponds to t_ diff. The delay time between a GW event and an optical flare (t_ delay) is also on the order of ∼ t_ diff given t_ break≪ t_ diff, although we do not use the delay time (t_ delay) to constrain the model parameters. Note that t_ delay∼ t_ duration expected in this scenario is roughly consistent with the properties of the observed flares (Table <ref>). We assume that the observed optical luminosity of the flare (L_ op) corresponds to the luminosity in the ZTF bands (L_ obs).

We note that in our model the physical properties are uniquely determined. This is because we fix several input parameters as detailed above, and we do not directly use the observational data points for parameter fitting. If instead we allowed the input parameters to vary, we would introduce degeneracies among several input parameters. Variations of the fixed input parameters can affect the physical properties of the model especially in the breakout-emission scenario, while they affect them less in the shock cooling emission scenario (see  <ref> and  <ref>). We believe that such simple prescriptions are useful to understand the typical properties expected from the model and to consider possible tests of the model below.

§ RESULTS
§.§ Shock cooling emission
Table <ref> shows the list of possible pairs of associations between the GW events and the electromagnetic flares reported in <cit.>, along with their observed properties. Seven electromagnetic flares and twelve pairs of possible associations are reported. The number of pairs (12) is larger than that of the flares (7) because some flares can be associated with more than one GW event. Hence, at least five pairs are false associations. Note that in this paper we do not analyze the two gamma-ray flares possibly associated with GW events due to their significantly different properties <cit.>, although they can be explained by thermal breakout emission from the jet head in relativistic regimes <cit.>.

Table <ref> shows the distribution of the model parameters when the properties of the observed flares (Table <ref>) are modeled with shock cooling emission. The parameter f_ jet/BHL is widely distributed depending on the pairs. f_ jet/BHL∼1–10 in pairs 4, 11, and 12 is roughly expected for Bondi-Hoyle-Lyttleton accretion, while f_ jet/BHL∼10–60 in pairs 1–3, 5, and 7–10 can be realized by the enhancement of accretion by shocks due to recoil kicks at merger (Paper I). However, f_ jet/BHL∼3000 in pair 6 is difficult to realize, since only up to f_ jet/BHL≲300 is found to be feasible by considering the radial surface density profile of a circum-BH disk and appealing to the results of numerical simulations (see Appendix B of Paper I). On the other hand, even for pair 6, if we adopt f_ corr∼ 1, f_ jet/BHL is reduced to ∼ 300. This is presumably because both a high accretion rate and a low inclination have similar influences on the breakout velocity, and most parameters are mainly characterized by the magnitude of the breakout velocity. Due to the variation of f_ corr (between ∼ 1–1/θ_0) and the degeneracy between f_ corr and f_ jet/BHL, the cooling model is poorly constrained or tested by the value of f_ jet/BHL.
The parameter R_ BH ranges between 0.06–0.3 pc, corresponding to 900–6× 10^4 R_g, where R_ g=GM_ SMBH/c^2 is the gravitational radius of the SMBH. These locations roughly correspond to migration traps, gap-forming regions, and/or slow migration regions, where BH mergers are predicted to be frequent <cit.>.

The orange lines in the left panel of Fig. <ref> show the SEDs for pairs 1–12. The radiation temperature ranges between T_ BB=9200–20000  K, corresponding to the peak wavelength λ_ peak=ch/(3k_ BT_ BB)=240–520  nm, where h is the Planck constant and k_ B is the Boltzmann constant. Also, the various colors observed in the optical flares (Fig. 3 of ) can be reproduced depending on the variation of the radiation temperature when changing some input parameters, such as the direction of the jet (i). For example, if we set i=0^∘ (the jet being perpendicular to the AGN-disk plane), using the derived values for ρ_ AGN, R_ BH, and H_ AGN and the observed parameters (m_ BH and M_ SMBH), the radiation temperature of the flares is enhanced by a factor of ∼ 1.7. This is because for i=0^∘ the shocked mass becomes low, and photons can then escape from an earlier phase when the radiation temperature is higher. Thus, the variety of colors can be reproduced if there are temperature variations in the thermal emission. For synchrotron emission (whose possible contribution to the flares is discussed in  <ref>), the color is related to the power-law slope of the injected electrons accelerated by the first-order Fermi process (p) as ν L_ν∝ν^(-p+2)/2, and p is presumably distributed in a narrow range for the same phenomena <cit.>. On the other hand, various values for the color can be realized if the breaks of the power laws in the SEDs coincidentally fall at around the ZTF bands (see the black line in the right panel of Fig. <ref>). Note that the distribution of the colors is currently uncertain due to the shift in the baseline of the flux and the contribution from the background AGN emission in Fig. 3 of <cit.>. If the color is actually distributed in a wide range, the flares are then easier to model by thermal shock cooling emission rather than by non-thermal breakout emission. In addition, the models can also be discriminated by constraining the spectral shape of the emission. This can be done by simultaneously observing the flares with the ZTF and the Ultraviolet Transient Astronomy Satellite (ULTRASAT, , Fig. <ref>) in the future.

We also predict breakout emission from the cocoon preceding the shock cooling emission (this is emission from the cocoon, which is different from the emission from the jet head discussed in  <ref>). The duration of the breakout emission from the cocoon, t_ diff,CBO=1/(β_ c^2 c κ_ ejρ_ AGN), is distributed in the range t_ diff,CBO∼ 10–10^5  s (the rightmost column of Table <ref>), and the temperature (Eq. <ref>) is in the range 2× 10^5–2× 10^6  K (blue lines in the right panel of Fig. <ref>). Although the duration of the breakout emission is shorter than that of the shock cooling emission, it can be detected by future X-ray surveys, such as HiZ-GUNDAM <cit.> and/or the Einstein Probe <cit.>.

Fig. <ref> a and b show the evolution of the luminosity and the temperature of the emission for pairs 1, 5, and 11. We chose pair 1 as a fiducial example, pair 5 as an event with small m_ BH, and pair 11 as the one with long t_ duration. The lines are drawn until the phase at which τ_ ej=1 is satisfied, since our model for thermal emission is not valid when τ_ ej<1. For the smaller m_ BH case (dashed lines in Fig.
<ref>), the timescale for the breakout emission of the cocoon (t_ pl) is longer. This is because L_ j is lower due to the lower Bondi-Hoyle-Lyttleton rate (Eqs. <ref>, <ref>), and β_ c tends to be lower for lower L_ j (Eqs. <ref>, <ref>). To reproduce t_ diff, ρ_ AGN needs to be lower for lower β_ c (Eq. <ref>). Due to the low β_ c and ρ_ AGN (Table <ref>), the breakout timescale of the cocoon (t_ pl∝β_ c^-2ρ_ AGN^-1) becomes longer. In such a case, since photons diffuse out from the shocked gas earlier, adiabatic expansion is inefficient, leading to a higher temperature. For the longer t_ duration model (dotted lines in Fig. <ref>), the evolution of the luminosity is slower in the shock cooling regime (t'≳ t_ diff).

§.§ Breakout emission
Table <ref> shows the parameters of the breakout emission model reproducing the properties of the observed flares. In this model, the parameters are distributed in narrow ranges. The mergers are predicted to occur at R_ BH∼ 1–10  pc, the shock velocity ranges within β_ h∼ 0.3–0.4, the disk aspect ratio at R_ BH ranges within h_ AGN≡ H_ AGN/R_ BH∼ 0.001–0.02, the ratio of the optical luminosity to the kinetic power of the shock is around L_ op/L_ j∼ 0.02–1, the ratio of the distance from the SMBH to the size of the nuclear star cluster (R_ NSC) ranges within R_ BH/R_ NSC∼ 0.2–0.7 assuming the empirical relation R_ NSC=8.3  pc(M_ SMBH/3.1× 10^8 )^0.154 <cit.>, and lastly the inflow rate of the AGN disk in units of the Eddington rate varies within Ṁ_ inflow/Ṁ_ Edd∼0.004–1.2, where Ṁ_ Edd≡ L_ Edd/(η_ rad c^2) with radiative efficiency η_ rad=0.1, and L_ Edd is the Eddington luminosity of the SMBH.

We find that the optical luminosity of the flare is lower than the kinetic power of the jet (f_ op/kin<1) for all the pairs (Table <ref>). Considering the bolometric correction of the breakout emission in the optical band (∼ 10, Paper I), our scenario demands f_ op/kin≲ 0.1. For the pairs that do not satisfy this condition, we need to use f_ jet/BHL > 10. Note that a higher value (f_ jet/BHL≲ 100) is possible by considering the enhancement of the accretion rate due to shocks caused by recoil kicks at merger (Appendix B of Paper I).

In the right panel of Fig. <ref>, the SEDs for thermal emission from the breakout of the jet head for pairs 1–12 are displayed. As they are bright at ∼ 10^17–10^19  Hz and the duration is long (≳ 10^6  s), they can be observed by X-ray telescopes, such as the Swift X-ray telescope (XRT, ), Chandra, XMM-Newton <cit.>, and the Nuclear Spectroscopic Telescope Array (NuSTAR, ). Note that the shock cooling emission associated with the breakout emission in all the pairs is so dim (≪ 10^40  erg/s) that it is buried within the AGN variability.

The derived distances R_ BH∼ 1–8  pc (R_ BH/R_ NSC∼ 0.2–0.7) are much larger than in the shock-cooling case. The different locations are driven by the phases of the emission. In our model, the duration of flares corresponds to the timescale at which the diffusion timescale is equal to the dynamical timescale (e.g. Eqs.
<ref> and <ref> for the shock cooling and breakout emission cases, respectively). This timescale is inversely proportional to the density of the ejecta or of the AGN disk, as well as to the square of the velocity of the ejecta or of the shocks, in the shock cooling and breakout emission scenarios, respectively. As a result, since the velocities are typically comparable (Tables <ref> and <ref>), the density of the AGN disk in the breakout emission scenario must be similar to the density of the ejecta in the shock cooling emission scenario in order to reproduce the duration of the flares. Additionally, in the shock cooling emission scenario the density of the ejecta is much lower than the local AGN density due to adiabatic expansion (Eq. <ref>). Thus, the AGN density is typically much lower in the breakout emission scenario, and therefore the distance from the SMBH needs to be larger. These larger distances are consistent with the scenario in which BHs in nuclear star clusters are captured by AGN disks and merge with one another. Here, the migration timescale of objects in AGN disks around massive SMBHs, as inferred for the flaring AGNs (Table <ref>), is so long that migration before BH mergers is unlikely.

To check whether the values derived for the aspect ratio (h_ AGN) are plausible, we calculate the aspect ratio of the AGN disk (h_ AGN,TQM) adopting the model in <cit.>, using R_ BH, M_ SMBH, Ṁ_ inflow, and m. We find that the values of h_ AGN,TQM are roughly comparable to h_ AGN within a factor of ∼ 2 (9th column of Table <ref>). Such moderate differences could arise due to the variation of f_ corr reflecting the variation of the inclination of jets. In Table <ref>, we list the f_ corr at which h_ AGN,TQM=h_ AGN. This shows that h_ AGN,TQM=h_ AGN is satisfied for reasonable values of f_ corr around ∼ 3–5.
By comparing L_ AGN/L_ Edd in Table <ref> and Ṁ_ inflow/Ṁ_ Edd in Table <ref>,we can determine the value of m at whichL_ AGN/L_ Edd= Ṁ_ inflow/Ṁ_ Edd as Ṁ_ inflow∝ m.For the pairs 11 and 12, m≥ 3 is required to satisfy L_ AGN/L_ Edd≤Ṁ_ inflow/Ṁ_ Edd.In the other pairs (1–10), m can be as low as ≥ 0.003–0.5 to satisfy this condition.We presume that m≲ 1 is acceptable <cit.>, although there are no studies constraining the possible ranges of m as far as we know.Note that the inflow rateat pc scalescan be much higher than the accretion rate onto the central SMBH <cit.>, since a large fraction of gas is possibly consumed by star formation and outflows,as predicted in <cit.>, <cit.>, and <cit.>.Then Ṁ_ inflow/Ṁ_ Edd is allowed to be much larger than L_ AGN/L_ Edd, and the required value of m is enhanced.Hence,pairs 11 and 12 are not well explained bybreakout emissiondue to the required high values of m.On the other hand, it is not clear whether AGN disks are usually in steady state, which might complicate the comparison between the inflow rate and the AGN luminosity.§ TESTS OF THE MODELIn the following we discuss how our model can be tested by examining the distribution expected for the observable properties.From  <ref> and  <ref> we see thatvalues for each model parameter are distributed in a narrow range (Tables <ref> and <ref>).We discuss whether this is because flares originating from merging BHs tend to have well-defined characteristic properties, or because of observational selection effects in the way the search was conducted by <cit.>.Flares were searched with specific ranges of parameters, and in particular with the rise time within 5–100day, the decay time within 10–200day, and the delay time from the GW detection in the interval 0–200day. In order for electromagnetic flares to be found in association with BH mergers, BHsneed to typically merge in bright AGNs.This is because most BH mergers reported by LIGO/Virgo/KAGRA are found to merge at luminosity distances of several Gpc <cit.>.At such large distances, AGNs are easily missed unless they are bright.Indeed, the hostSMBH masses for the flares reported by <cit.> are ≳ 10^8,hence so massive that AGNs are rarely missed in AGN searches at the distances of the GW events.Also, assuming a luminosity distance of d_ L∼ 3  Gpc, SMBH mass of 10^8, Eddington ratio of ∼ 1, bolometric correction of 5, and the fraction of the variable luminosity compared to the average luminosity ∼ 0.1, the flux of variable flares in the AGN is ∼ 2× 10^-13  erg/s/cm^2, which falls just above the sensitivity of ZTF ∼ 10^-13  erg/s/cm^2.Conversely, it is difficult to find flares associated with BH mergers occurring in AGNs around less massive SMBHs through surveys for AGN variability.We next estimate the detection rate of electromagnetic flares associated with BH mergers based on our scenario.If BH mergers actually produce electromagnetic flares as found in <cit.>,the rate of such flares is comparable to or less than the rate of BH mergers (∼ O(10)  Gpc^-3 yr^-1). 
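The quoted flux of variable flares can be reproduced with the following back-of-the-envelope arithmetic, repeating the assumptions stated above (a 10^8 solar-mass SMBH at an Eddington ratio of ∼1, a variability fraction of 0.1, a bolometric correction of 5, and d_ L∼ 3  Gpc); this is only a numerical restatement of the estimate in the text.

```python
import numpy as np

Gpc = 3.086e27                          # cm
L_Edd = 1.26e38 * 1e8                   # Eddington luminosity of a 1e8 Msun SMBH [erg/s]

f_var, BC, d_L = 0.1, 5.0, 3 * Gpc      # variability fraction, bolometric correction, distance
flux = f_var * (L_Edd / BC) / (4 * np.pi * d_L**2)
print(f"variable optical flux ~ {flux:.1e} erg/s/cm^2")  # ~2e-13, just above the ZTF sensitivity
```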
If flares within ≲ 3  Gpc are observable by the ZTF and all merging BHs produce electromagnetic flares, up to N_ BBH,3Gpc∼ 300 flares associated with BH mergers can be detected per year.On the other hand, we have estimated that a small fraction of BH mergers (f_ EM/BBH≲ 0.02) could accompany detectable electromagnetic flares in our model (see  4.1 in Paper I for discussions).Then, the seven flares discovered by <cit.> is roughly consistent with the number of flares expected during LIGO/Virgo/KAGRA O3, which is estimated asN_ EM/GW,O3∼ N_ BBH,3Gpcf_ EM/BBHt_ O3≲ 10, where t_ O3∼ 1.1 is the duration of the O3 operation in the unit of year.Constraints on the frequency of flares, whose properties can be explained by emission from BHs, are hence useful to testour model.Here, note that dimmer more frequent flares would be contributed by solitary BHs <cit.>. §.§ Shock cooling emission From Table <ref>, we note that there are moderate variations in R_ BH andT_ BB in the shock cooling emission scenario.To consider a possible test of this model, we discuss its dependence on the observed properties, and the range of these properties for which the shock cooling emission scenariois inconsistent.If there are events with high or low T_ BB,extremely large or small R_ BH,or R_ BO/R_ diff≥ 1,such flares are inconsistent with the shock cooling emission scenario.However, the dependence of the variables T_ BB, R_ BH, R_ BO/R_ diff on the ratio of the observed parameters (L_ AGN, L_ obs, m_ BH, M_ SMBH, and t_ diff) over the adjustment parameters (f_ corr and f_ jet/BHL) is similar to the dependence of H_ AGN/R_ BH on the same ratio.For example, for β_ c/θ_0<1,H_ AGN/R_ BH depends on the observed properties as H_ AGN/R_ BH= 9^4/3L_ AGN^2 L_ obs^5 κ_ ej^7 θ_0^2/3 f_ corr^9/2^4/3 4^4 π^7 f_ jet/BHL^4 f_ cons^2 c^19α^2 η_ rad^2 A^4/3 G^3 ×1/ m_ BH^8/3M_ SMBH^1/3t_ diff^16/3 ∝L_ obs^5 L_ AGN^2 t_ delay^-16/3 m_ BH^-8/3 f_ corr^9 f_ jet/BHL^-4 ,and T_ BB depends as T_ BB=9^3/8 L_ AGN^3/8 L_ obs^11/8κ_ ej^3/2 f_ corr^9/4/(4π)^1/3 (32π^4)^3/8 f_ jet/BHL^9/8f_ cons^3/8 c^9/2A^3/8 G^3/4 ×1/α^3/8η_ rad^3/8m_ BH^3/4t_ diff^5/4a^3/8 ∝L_ obs^11/8L_ AGN^3/8t_ delay^-5/4m_ BH^-3/4 f_ corr^9/4 f_ jet/BHL^-9/8 .Then, if we adjust f_ corr and f_ jet/BHL so that H_ AGN/R_BH is consistent with a Shakura-Sunyaev disk,these parameters (T_ BB, R_ BH, and R_ BO/R_ diff) also fall in a range ofpossible values (similar values to those derived in Table <ref>), even if events with wide ranges of observed parameters (L_ AGN, L_ obs, m_ BH, M_ SMBH, and t_ diff) are observed.Thus, in this model, once H_ AGN/R_ BH is adjusted to the value expected in the Shakura-Sunyaev model, T_ BB, R_ BH, and R_ BO/R_ diff are then characterized byrealisticvalues, and the distribution of the observables becomes difficult to use as a further test of the model. An interesting test of the model is the correlation between the delay time and the duration of the flare.This is because the delay timeis comparable to the duration of a flareas long as t_ diff≫ t_ break, which is satisfied in pairs 1–12 (Table <ref>).To constrain the correlation coefficient e.g. with uncertainty of ≲ 0.3 by 95 percentile, more than ≳ 50 events are needed to be observed.Hence, more eventswill be very helpful as further diagnostics. 
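The quoted requirement of ≳ 50 events can be sanity-checked with the standard Fisher z-approximation for the confidence interval of a correlation coefficient; this is only an illustrative estimate (assuming independent measurements and a true coefficient near zero), not the procedure used to derive the number above.

```python
import numpy as np

def corr_ci_halfwidth(n, r=0.0, conf=1.96):
    """Approximate 95% half-width of a correlation coefficient via Fisher's z."""
    z = np.arctanh(r)
    dz = conf / np.sqrt(n - 3)
    return 0.5 * (np.tanh(z + dz) - np.tanh(z - dz))

for n in (20, 50, 100):
    print(n, round(corr_ci_halfwidth(n), 2))   # ~0.3 at n ~ 50 for r near 0
```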
Note that half of the pairs reported by <cit.> are false associations, since a single flare can be associated with several GW events, and hence we need to derive the correlation excluding the influence of false associations. Also, we derived the delay time using a plot digitizer. In addition, the time dependence of the luminosity assumed by <cit.> is different from that expected for shock cooling emission. Thus, both the delay time and the duration suffer from significant uncertainties. If these timescales can be well constrained, the model can be tested for each event by comparing the delay time and the duration.

Another possible test is the detection of the cocoon breakout emission preceding the shock cooling emission. In pairs 1–12, the duration and temperature of the cocoon breakout emission range in the intervals ∼ 10–10^5  s and 0.4–5  keV; thus wide-field X-ray surveys, such as the Einstein Probe <cit.> and HiZ-GUNDAM <cit.>, will be useful to detect the early breakout emission (left panel of Fig. <ref>). Assuming that the luminosity of the breakout emission is similar in magnitude to the jet power, the detectable distance is estimated to be d_ det=(L_ jet/(4π F_ sen))^1/2 ∼ 1  Gpc(L_ jet/10^46  erg/s)^1/2(F_ sen/10^-11  erg/s/cm^2)^-1/2, where F_ sen is the sensitivity of the facility. Here, the closest distance among the flares discovered by <cit.> is ∼ Gpc. Since the duration of the cocoon breakout emission is ∼ 10–10^5  s in pairs 1–12 and the sensitivity of the Einstein Probe <cit.> and HiZ-GUNDAM is F_ sen∼ 10^-11  erg/s/cm^2 for t_ int∼ 10^4  s, events at a luminosity distance of ∼ Gpc can be detected if L_ jet≳ 10^46  erg/s. If both the breakout emission and the shock cooling emission are detected from the same AGN, this would provide a robust test of the model.

It is notable that in the shock cooling model the color keeps evolving to redder values in all pairs (Fig. <ref> d). The m_g - m_r color evolves by ∼ 0.4 in 50  days. Such mild evolution of the color can be a strong test of this model. On the other hand, for non-thermal emission from the breakout of the jet head, the color is expected to be unchanged. Note that the colors are presented for pair 3 in Fig. 2 of <cit.>, revealing almost no evolution. However, due to the contamination from the host AGN emission, any color evolution is likely difficult to constrain. To derive the color evolution more precisely, observations at additional frequencies, e.g. by ULTRASAT (black lines of Fig. <ref> c), would also be useful. Hence, when available, the evolution of the color will be an additional diagnostic in future observations.

§.§ Breakout emission
Next we discuss ways to test the breakout emission scenario. To do this, we first present the dependence of the physical quantities on observables as β_ h∝θ_0^-12/11 f_ jet/BHL^3/11f_ corr^-3/11 t_ delay^-3/11 t_ duration^1/22, H_ AGN∝θ_0^-12/11 f_ jet/BHL^3/11f_ corr^-14/11 t_ delay^8/11 t_ duration^1/22, ρ_ AGN∝θ_0^24/11 f_ jet/BHL^-6/11f_ corr^6/11 t_ delay^6/11 t_ duration^-12/22, R_ BH∝θ_0^-8/11 f_ jet/BHL^2/11M_ SMBH^1/3f_ corr^-2/11 t_ delay^-2/11 t_ duration^4/22, L_ j∝θ_0^8/11 f_ jet/BHL^9/11f_ corr^-9/11 t_ delay^13/11 t_ duration^-19/22, Ṁ_ inflow∝θ_0^4/11 f_ jet/BHL^-1/11 m M_ SMBH^1/2f_ corr^-17/11 t_ delay^23/11 t_ duration^-13/22.

For the breakout emission from the jet head, if the shocked gas becomes relativistic (β_ h≳ 0.8), the probability of observing non-thermal emission is significantly reduced due to the relativistic beaming effects, the shift of the minimum energy, and the time dilation (as shown in Fig.
3 b of Paper I). Thus, if the shock is relativistic, the breakout emission is presumably not observed by current facilities, such as ZTF. Since β_ h is estimated to be ∼ 0.2–0.4 for the observed flares, if β_ h is higher by a factor of ∼ 3 compared to our estimates, the flares cannot be explained by breakout emission. β_ h≳ 0.8 is satisfied in all the pairs if events with a delay time of t_ delay≲ 2  day are found. Note that this is not compensated by f_ corr and f_ jet/BHL (as in the shock cooling emission scenario). This is because β_ h is reduced only by a factor of ∼ 1.1 by enhancing f_ corr to the maximum value (∼ 1/θ_0). Also, if f_ jet/BHL is reduced to lower β_ h, f_ op/kin becomes larger than 1, which violates another requirement of the model. If associations between the flares and GWs are due to random coincidence, the delay time is expected to be distributed uniformly in the range of 0–200  day. Assuming a uniform distribution, we can test the breakout emission scenario at the ∼ 1 σ and ∼ 2 σ levels after discovering ≳ 100 and ≳ 300 events, respectively, by checking whether there are events with t_ delay≲ 2  day. If we find optical flares with t_ delay≲ 2  day, the breakout emission scenario is disfavored.

If t_ duration is enhanced by one order of magnitude, the jet power is reduced by a similar factor, and the ratio of the optical luminosity to the jet power exceeds one for pairs 1–3, 6, 11, and 12; the breakout emission scenario then becomes inconsistent for these pairs. However, the enhancement of L_j due to long t_ duration can be compensated by f_ jet/BHL and f_ corr up to about a factor of ∼ 10, which is limited by the requirement on β_ h (≲ 0.8) as discussed above. Hence, the model is not well tested by t_ duration.

Currently, the properties of the observed flares satisfy the conditions required by the breakout emission scenario. To be consistent with this scenario, the delay time should not be shorter than ∼ 2  day, and the color of flares should be distributed in a narrow range as discussed in  <ref>. Thus, to test whether there are flares with properties inconsistent with the breakout emission scenario, more events will need to be observed.

§ CONCLUSIONS
In this paper we have presented the properties of emission from shocks emerging from collisions between AGN gas and a jet launched from a merger remnant BH in an AGN disk. Our model includes the evolution throughout the shock breakout and subsequent cooling emission phases. We then applied this model to the candidate flares reported in <cit.>. Our results are summarized as follows.* We fit the characteristic features of all of the events with both emission processes. Both processes could fit each observation, suggesting that such fits may themselves be insufficient to rule out the AGN origin of the flares. The reconstructed parameters might then be indicative of the selection algorithm determining the false alarm rate of associations.* While both processes could be made consistent with the observed events with appropriate parameter selection, we found that the implied merger distance from the central SMBH is markedly different for the two processes. Specifically, shock cooling emission can explain the observed properties if the mergers happen at R_ BH∼ 0.06–0.3  pc from the SMBH, which is consistent with the locations of AGN-assisted merger models (e.g. ). On the other hand, breakout emission would require a much larger distance of R_ BH∼ 1–8  pc to explain the observed flare duration.
This may be possible for mergers in AGNs with high SMBH masses, in which migration is inefficient (e.g. ).* Follow-up observations could help further constrain the reconstructed parameters of the events. X-ray observations would have to be made prior to the optical detection, which likely requires future wide-field surveys. In addition, follow-up observations determining the spectral evolution of the electromagnetic flares would be important.

H.T. was supported by the National Key R&D Program of China (Grant No. 2021YFC2203002) and the National Natural Science Foundation of China (Grant No. 12173071). S.S.K. was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant Numbers 22K14028, 21H04487, and 23H04899, and the Tohoku Initiative for Fostering Global Researchers for Interdisciplinary Sciences (TI-FRIS) of MEXT's Strategic Professional Development Program for Young Researchers. Z.H. was supported by NASA grant 80NSSC22K0822 and NSF grant AST-2006176. R.P. acknowledges support by NSF award AST-2006839. I.B. acknowledges the support of the Alfred P. Sloan Foundation and NSF grants PHY-1911796 and PHY-2110060.
http://arxiv.org/abs/2310.18392v1
{ "authors": [ "Hiromichi Tagawa", "Shigeo S. Kimura", "Zoltán Haiman", "Rosalba Perna", "Imre Bartos" ], "categories": [ "astro-ph.HE", "astro-ph.GA", "gr-qc" ], "primary_category": "astro-ph.HE", "published": "20231027180000", "title": "Shock cooling and breakout emission for optical flares associated with gravitational wave events" }
Let X be the blow-up of ^2_ in a finite set of points in very general position. We show that X has only standard autoequivalences, no nontrivial Fourier–Mukai partners, and admits no spherical objects. Further, we show that the same result holds if X is a blow-up of finitely many points in a minimal surface of nonnegative Kodaira dimension which contains no (-2)-curves. Independently, we characterize spherical objects on blow-ups of minimal surfaces of positive Kodaira dimension.
[2020]14F08, 14J26
Autoequivalences of Blow-Ups of Minimal Surfaces
==========================================================
§ INTRODUCTION
Let X be a smooth projective variety over the complex numbers and denote by ^b(X) the bounded derived category of coherent sheaves on X. If the canonical bundle ω_X is ample or anti-ample, then, by Bondal–Orlov <cit.>, the group of autoequivalences (^b(X)) only consists of so-called standard autoequivalences, i.e. (^b(X)) = (X)⋊(X)×[1]. In general, the standard autoequivalences (X)⋊(X)×[1] form a subgroup of (^b(X)), and ^b(X) often admits non-standard autoequivalences, see, e.g., <cit.> for the case of abelian surfaces and <cit.> for the case of K3 surfaces of Picard rank 1. A natural source of non-standard autoequivalences are so-called spherical twists <cit.>. In contrast to the case of varieties with trivial canonical class, a spherical object on a variety with nontrivial and non-torsion canonical class has to be supported on a proper closed subset, see <ref>. If X is a certain toric surface <cit.> or a surface of general type whose canonical model has at worst A_n-singularities <cit.>, then (^b(X)) is generated by standard equivalences and spherical twists.
This allows to give a direct proof of each statement in <ref> (although, as explained above, it would suffice to prove item:autoequvialences_are_standard by using the result of Favero). The second proof, outlined in <ref>, relies on Uehara's more general classification results and his description of autoequivalence groups of surfaces with Fourier–Mukai support dimension 2 satisfying a condition on the configuration of (-2)-curves <cit.>.In <ref> we consider blow-ups X of minimal surfaces Y of nonnegative Kodaira dimension. In contrast to the case of rational surfaces, (-2)-curves on X are strict transforms of (-2)-curves on Y, see <ref>. Thus, using <cit.>, we obtain Let Y be a minimal surface ofnonnegative Kodaira dimension and let X be the blow-up of Y in a nonempty finite set of points. Assume Y contains no (-2)-curves, e.g. Y has Kodaira dimension 1 and the elliptic fibration of Y has only irreducible fibers. Then ^b(X) admits only standard autoequivalences, i.e.(^b(X)) =(X) ⋊(X) ×[1]. As outlined above, <ref> implies that such X has no Fourier–Mukai partners and ^b(X) does not contain spherical objects.In <ref> we characterize spherical objects on blow-ups X of minimal surfaces Y of positive Kodaira dimension: An object in ^b(X) is spherical if and only if it is the pullback of a spherical object in ^b(Y) whose support is disjoint from the exceptional locus of X → Y. If Y is a minimal surface of Kodaira dimension 1 whose elliptic fibration has only irreducible fibers, this characterization combined with the results of <cit.> gives an alternate proof that ^b(X) does not contain spherical objects, see <ref>.The authors thank Hokuto Uehara and Charles Vial for useful comments on an earlier draft of this paper. Further, the authors thank Gebhard Martin for helpful discussions regarding elliptic surfaces. The term surface always refers to a smooth projective 2-dimensional variety over . For a variety X, we denote by ^*(X) (resp. _*(X)) the Chow groups of algebraic cycles modulo rational equivalence with integer coefficients graded by codimension (resp. dimension). We denote by ^*(X)_^*(X) ⊗_ the Chow groups with rational coefficients. A (-k)-curve C in a surface S is an integral smooth rational curve C with self-intersection number -k.§ PRELIMINARY OBSERVATIONSLet X be a smooth projective variety. The support of an object F ∈^b(X) is by definition the closed subvariety(F)⋃_i ∈(^i(F))⊆ Xendowed with the unique reduced closed subscheme structure. If F is a simple object, i.e. (F,F)=, then (F) is connected; see, e.g., <cit.>.An object S ∈^b(X) is called spherical if (S, S[i]) = ifi = 0,X,0else,and S ⊗ω_X ≅ S.Denote by p, q X× X → X the projections and by Δ↪X× X the diagonal embedding. If S is a spherical object on X, the object _S Cone(Lq^*S^∨⊗^LLp^* S →_Δ) ∈^b(X× X) is the Fourier–Mukai kernel of the spherical twist T_S ^b(X) →^b(X) given by T_S (-) = Rp_*(_S ⊗^LLq^*(-)).Note that by <cit.> a spherical twist is always an autoequivalence of ^b(X). The condition S ⊗ω_X ≅ S has the following consequence on the support of a spherical object:Let X be a smooth projective positive dimensional variety with K_X ≠ 0 in ^*(X)_, i.e. ω_X is nontrivial and non-torsion. Then any spherical object S ∈^b(X) is supported on a connected proper closed subset.Moreover, if X is a surface, then (S) is a, possibly reducible, connected curve C=⋃_iC_i such that K_X |_C̃_i =0 in ^1(C̃_i)_, where C_i are the irreducible components of C and C̃_i → C_i are the normalizations. In particular, K_X · C =0 ∈. 
Denote by ^i(S) ∈ X the i-th cohomology sheaf of S. Since ω_X is a line bundle, we have^i(S) ⊗ω_X = ^i(S ⊗ω_X ) ≅^i(S),which yields (^i(S)) (ω_X) =(^i(S)) in ^*(X)_. If ^i(S) had positive rank, then (^i(S)) would be invertible in ^*(X)_, hence (ω_X)=0. This contradicts to K_X being non-torsion. Hence, all cohomology sheaves ^i(S) have rank zero and thus the generic point of X is not contained in the support of S. Thus, (S) <X and (S) is connected by <cit.>.Assume in addition that S = 2. If S were supported on a point, then <cit.> would show that S ≅ k (x)[m] for some x∈ X and m ∈. In particular, χ(k (x)[m], k (x)[m])=0, but χ(S, S)=2. Hence, (S) is 1-dimensional and connected, i.e. a connected reduced, possibly reducible, curve.Let C_i ⊆ X be an irreducible curve, contained in (S) and let C̃_̃ĩ→ C_i be its normalization. Denoting by jC̃_̃ĩ→ C_i ↪ X composition, we obtain by the projection formulaK_X · C_i = j_*j^*K_X ∈_0(X).Letbe a cohomology sheaf of S which has nonzero rank restricted on C_i. The equality ()=()(ω_X) on X shows (j^*) = (j^*)(j^*ω_X) on C̃_̃ĩ. Since j^* has nonzero rank, this implies that j^* K_X is torsion in _0(C̃_̃ĩ). We conclude that the intersection number K_X· C_i =(j_*j^*K_X) is zero. Let X be the blow-up of ^2_ in a finite set of points in very general position. The following result of de Fernex shows that X contains no integral rational curves of self-intersection less or equal than -2.Let X be the blow-up of ^2_ in a finite set of points in very general position. If C⊆ X is an integral rational curve with C^2<0, then C is a (-1)-curve, that is a smooth rational curve of self-intersection -1.Moreover, the following <ref> follows from the proof of <cit.>.Let X be the blow-up of ^2_ in a finite set of points in very general position. If C ⊆ X is an integral rational curve, then C · K_X < 0.Therefore X cannot contain an integral rational curve C ⊆ X such that C · K_X =0.§ PROOF OF THEOREM <REF>In <ref> we have seen that a spherical object in ^b(X) is supported on a curve C ⊆ X such that C_i · K_X=0 for every irreducible component C_i of C. The proof of <ref> relies on a refinement of this observation, namely that every such curve C_i is rational.Recall the following construction from <cit.>: Let X be a projective variety overand denote by X^(d) the d-th symmetric product of X. Let cX^(d)→_0(X) be the map defined byX^(d)∋ Z ↦class ofZmod rational equivalence. Further define σ_d X^(d)× X^(d) →_0 (X)_hom (Z_1, Z_2)↦ c(Z_1)-c(Z_2),where _0 (X)_hom⊆_0(X) denotes the subspace of homologically trivial cycles.The fibers of σ_d are countable unions of closed algebraic subsets of X^(d)× X^(d).Let X be the blow-up of ^2_ in n points in general position. If C ⊆ X is an integral curve with K_X |_C̃=0 ∈^1(C̃)_, where C̃→ C is the normalization, then C is rational.We denote by E_i the exceptional divisor over the i-th blown up point. Then K_X = -3H +∑_i E_i and by assumption mK_X|_C =0 ∈^1(C)_ for all m∈. Note that C cannot be one of the exceptional curves E_i, since K_X · E_i =-1 for all i. Hence, C is the strict transform of a curve of degree d = H · C. Since mK_X · C= 0, the intersection of m∑_i E_i and C defines a unique point in the symmetric product Z_2 (x_1, … ,x_3md) ∈ C^(3md).Consider the set| 3mH |∩ C { C'∩ C| C'∈| 3mH | such thatC ⊈C' }as a subset of C^(3md). We claim that for sufficiently large m>0 the subset | 3mH |∩ C is dense in C^(3md).Indeed, let q_1, …, q_3md∈ C ∖ E_1∪…∪ E_n be pairwise distinct points and let X'→ X be the blow-up of q_1, …, q_3md. 
Denote by E_i' the exceptional divisor over the point q_i for 1 ≤ i ≤ 3md and consider the divisorD 3mH - ∑_i=1^3md E_i' onX'.A member of the linear system | D | can be identified with a curve in ^2_ of degree 3m vanishing at the points q_1, …, q_3md. By Riemann–Rochχ(D)= 1+ 1/2D · (D-K_X')=1+1/2(9m^2+9m-6md),thus χ(D) >0 for sufficiently large m>0. Since Serre duality shows h^2(X', _X'(D))= h^0(X', _X'(-D+K_X'))=h^0(X', _X' (-3(m+1)H + ∑_i=1^n E_i ))=0,we have h^0(X',_X'(D)) ≥χ(D) >0 for sufficiently large m>0. It follows from <cit.> that a general member C' of | D | is smooth and irreducible for sufficiently large m>0. Hence, C ⊈C' and C ∩ C' = {q_1, …, q_3md}. This shows that | 3mH |∩ C contains a Zariski open subset of C^(3md) for sufficiently large m>0. For the rest of the proof we fix such m>0. We first assume that C is smooth. Further, we assume for contradiction that C is not rational. Let σ̅_3md be the restriction of σ_3md C^(3md)× C^(3md)→_0(C)_hom to C^(3md)×{Z_2}. By <ref>, for every t∈_0(C) the fiber σ̅_3md^-1(t) is a countable union of closed algebraic subsets. We denote by _0(C)_tor the torsion classes in _0(C). Recall that _0(C)_tor is countable, thus ⋃_t∈_0(C)_torσ̅_3md^-1(t) is also a countable union of closed algebraic subsets. Let Z_1 Z_2-x_3md+y=x_1+… +x_3md-1+y,where y∈ C is a point such that c(y)-c(x_3md) is not torsion in _0(C). Note that such y exists since there are only countable many torsion points in _0(C) and for every x≠ y∈ C, c(x)≠ c(y) ∈_0(C). Hence,⋃_t∈_0(C)_torσ̅_3md^-1(t) ⊆ C^(3md) is a countable union of proper closed algebraic subsets. We have argued above that | 3mH|∩ C contains a Zariski open subset of C^(3md), thus a very general member Z∈| 3mH|∩ C satisfies σ_3md(Z,Z_2)≠ 0 in _0(C)_. Hence, mK_X|_C≠ 0 in _0(C)_. But by assumption mK_X|_C = 0 in _0(C)_, thus C has to be a rational curve. In case C is not smooth, we can argue in the same way by replacing C by its normalization and the restriction to C by the composition of restriction and pullback to the normalization.Assume for contradiction that S ∈^b(X) is a spherical object. By <ref>, S is supported on a connected curve C = C_i such that K_X |_C_i=0 ∈^1(C_i)_ for all irreducible components C_i of C. By <ref>, each C_i is rational, thus by <ref> C_i · K_X ≠ 0. This contradicts to K_X |_C_i=0 in ^1(C_i)_.Let ϕ^b(Y) →^b(X) be an equivalence. For any point y∈ Y the skyscraper sheaf k(y) satisfies k(y) ⊗ω_Y ≅ k(y) and thus ϕ(k(y)) ⊗ω_X ≅ϕ(k(y)). Moreover, since =(k(y), k(y)) = (ϕ(k(y)), ϕ(k(y))), <cit.> shows that (ϕ(k(y)) is connected. Arguing as in <ref>, we observe that (ϕ(k(y))) is either a point or (ϕ(k(y))) =⋃_i C_i, where each C_i is an integral curve with K_X |_C_i = 0 ∈^1(C_i)_. In the latter case each C_i is rational by <ref>. By <ref>, C_i · K_X ≠ 0. Hence, ϕ(k(y)) is supported on a point x ∈ X and by <cit.> ϕ(k(y))= k(x)[m] for some m ∈. Moreover, by <cit.> the locus of y∈ Y such that ϕ∘ [-m](k(y)) is a skyscraper sheaf is open. Since Y is connected, this locus is the whole of Y, which shows that the shift m in ϕ(k(y))=k(x)[m] is independent of y∈ Y. Thus ϕ∘ [-m] sends skyscraper sheaves to skyscraper sheaves and <cit.> (or <cit.>) shows that ϕ∘ [-m] = f_*(⊗ -) for some line bundle ∈(Y) and isomorphism fY → X. This proves item:no_fm_partners and shows that in the case Y =X the autoequivalence ϕ is a standard autoequivalence. Thus, item:autoequvialences_are_standard follows. We assumed the blown up points in<ref> to be in very general position. 
On the one hand, this is required in de Fernex' <ref> to ensure that X admits no (-2)-curves. On the other hand, <ref> relies on <cit.> which requires the blown up points to be in general position.§ ALTERNATIVE PROOF OF THEOREM <REF> <REF> AND <REF>An alternative proof of <ref> item:autoequvialences_are_standard,item:no_fm_partners, which is more dependent on the literature, can be obtained using <cit.> and <cit.> as we outline in the following:Recall, e.g. from <cit.>, that if Y is a rational surface admitting a minimal elliptic fibration, then Y can be obtained from ^2_ by blowing up 9, possibly infinitely near, points and, for some m>0, the linear system | -mK_Y| is a pencil. Hence, if X is the blow-up of ^2_ in a finite set of points in very general position, then X admits no minimal elliptic fibration. Indeed, this is clear if the number of blown up points is different from 9. In the case of 9 blown up points the linear system | -mK_X| is zero-dimensional for any m>0, so it is not a pencil. By <cit.>, a non-minimal surface admits nontrivial Fourier–Mukai partners only if it admits a minimal elliptic fibration. Hence, <ref> item:no_fm_partners follows.Let Y be any surface and let Φ_P ^b(Y) →^b(Y) be an autoequivalence with Fourier–Mukai kernel P∈^b(Y× Y). We denote by Comp(Φ_P) the set of irreducible components in (P) ↪ Y × Y and byN_Y max{ W | W ∈Comp(Φ_P), Φ_P∈(^b(Y))}the Fourier–Mukai support dimension of Y. By Uehara's classification <cit.>, the equality N_Y=2 is equivalent to Y admitting no minimal elliptic fibration and K_Y being not numerically equivalent to zero. Hence, for X the blow-up of ^2_ in a finite set of points in very general position we have N_X=2.If Y is a surface with N_Y = 2 such that the union of all (-2)-curves in Y forms a disjoint union of configurations of type A, then, by <cit.>, (^b(Y)) is generated by standard autoequivalences and spherical twists. For X the blow-up of ^2_ in a finite set of points in very general position, de Fernex' <ref> shows that X contains no (-2)-curve. Hence, <ref> item:autoequvialences_are_standard follows.§ SURFACES OF NONNEGATIVE KODAIRA DIMENSION§.§ AutoequivalencesIn contrast to the case of negative Kodaira dimension, blowing up points in arbitrary position on minimal surfaces of nonnegative Kodaira dimension does not give rise to new (-2)-curves.Let Y be a minimal surface of nonnegative Kodaira dimension and let pX → Y be the blow-up of Y in a set of points p_1, …, p_n ∈ Y. Then every (-2)-curve C in X is the strict transform of a (-2)-curve C_0 in Y such that p_i ∉ C_0 for 1≤ i ≤ n. We denote by E_i the exceptional divisor over the i-th blown up point p_i for 1 ≤ i ≤ n. Let C ⊆ X be a (-2)-curve. By adjunction, we have0=g(C) = 1 + 1/2(C^2 + C · K_X),where g(C) denotes the geometric genus of C. Thus, C · K_X =0. Further, since C is not one of the exceptional curves E_i, C is the strict transform of a curve C_0 ⊆ Y. We have0 = C · K_X= C_0 · K_Y + ∑_i=1^n m_i,where m_i is the multiplicity of C_0 at p_i. Since K_Y is nef, each of the m_i is zero, in other words p_i ∉ C_0 for 1 ≤ i ≤ n. We conclude that C_0 is a smooth rational curve with K_Y · C_0=0, hence, by adjunction, a (-2)-curve.As a consequence of <ref> and <cit.>, we obtain the followingLet Y be a minimal surface ofnonnegative Kodaira dimension and let X be the blow-up of Y in a nonempty finite set of points. Assume Y contains no (-2)-curves, e.g. Y has Kodaira dimension 1 and the elliptic fibration of Y has only irreducible fibers. 
Then ^b(X) admits only standard autoequivalences, i.e.(^b(X)) =(X) ⋊(X) ×[1].By <ref>, X contains no (-2)-curves. Thus, the statement follows from <cit.> if X admits no minimal elliptic fibration. The latter can be shown as follows: Recall, e.g. from <cit.>, that a surface S with minimal elliptic fibration satisfies K_S^2=0. If κ(Y)=0, then K_Y is numerically equivalent to zero. Hence, K_Y^2=0 and therefore K_X^2<0. If κ(Y)=1, then Y has an elliptic fibration and therefore K_Y^2=0. Hence, K_X^2<0. Finally, if κ(Y)=2, then X has no elliptic fibration by <cit.>.Note that the description of autoequivalences as in <ref> is not true for a minimal surface Y. For example, if κ(Y)= 1, then (^b(Y)) can be characterized as in <cit.>. In that case,as outlined in the proof of <cit.>, Y admits an autoequivalence Φ_ whereis the universal sheaf on Y × J_Y(1,1) and J_Y(1,1)≅ Y is a moduli space of stable sheaves on a smooth fiber of the elliptic fibration of Y. In this case, the support ofis 3-dimensional, thus Φ_ does not lift to an autoequivalence of a blow-up of Y. Let X be a non-minimal surface of nonnegative Kodaira dimension with minimal model Y. If the (-2)-curves in Y only form chains of type A, then it is possible to describe (^b(X)) as in <cit.>. Indeed, arguing as in <cit.> one shows that the (-2)-curves in X only form chains of type A. Thus, <cit.> applies and shows that (^b(X)) is generated by standard autoequivalences and spherical twists.§.§ Spherical ObjectsSimilar to <ref>, spherical objects in the blow-up of a minimal surface of positive Kodaira dimension are completely determined by the minimal surface.We begin with recalling two elementary <ref> regarding morphisms and the support of complexes of sheaves. As we were unable to find a suitable statement in the literature, we include a proof of <ref>.Let X be a smooth projective variety and let F, G∈^b(X). * If (F) ∩(G) = ∅, then _^b(X)(F, G) =0.*If D ⊆ X is a divisor and (F) ∩ D = ∅, then F ⊗_X(D) =F.We first prove item:first_item. The condition (F) ∩(G) = ∅ implies __X^p(^-q(F), ^l(G)) =0 for all p,q,l ∈. Recall, e.g. from <cit.>, that we have a spectral sequenceE_2^p,q = __X^p(^-q(F), ^l(G)) ⇒__X^p+q(F, ^l(G))for every l ∈. Similarly, we have a spectral sequenceE_2^p,q = __X^p(F, ^q(G)) ⇒__X^p+q(F, G).Thus, (F) ∩(G) = ∅ implies __X^l(F, G) =0 for all l ∈. Finally, the local-to-global spectral sequenceE^p,q_2 = H^p(X, __X^q(F, G)) ⇒__X^p+q(F, G)shows __X^p+q(F, G)=0.To prove item:second_item, assume that D ⊆ X is a divisor and that (F) ∩ D = ∅. The ideal sheaf sequence0 →_X(-D) →_X →_D → 0yields an exact sequence0 →__X(_D, F) → F → F ⊗_X(D) →__X^1(_D, F) → 0.As argued above, we have __X(_D, F) =0 =__X^1(_D, F). Hence, F → F ⊗_X(D) is an isomorphism.Let X be a smooth projective variety and F ∈^b(X). Then a point x ∈ X lies in (F) if and only if _^b(X)(F, k(x)[l]) ≠ 0 for some l ∈. The following <ref> characterizes spherical objects in blow-ups of minimal surfaces of positive Kodaira dimension.Let Y be a minimal surface of positive Kodaira dimension and let pX → Y be the blow-up of Y in a set of points p_1, …, p_n ∈ Y. Then every spherical object in ^b(X) is of the form Lp^*S for some spherical object S ∈^b(Y). Moreover, if S ∈^b(Y) is spherical, then Lp^*S is spherical if and only if p_i ∉(S) for all 1 ≤ i ≤ n. We denote by E_i the exceptional divisor over the i-th blown up point p_i for 1 ≤ i ≤ n. We first prove the followingIf S' ∈^b(X) is a spherical object, then (S') is disjoint from each E_i. 
Assume S' ∈^b(X) is spherical, then, by <ref>, (S') = ⋃_i C_i, where each C_i is an integral curve with K_X · C_i =0. Since K_X = p^*K_Y + ∑_i E_i, such curve C_i is the strict transform of a curve in Y. Moreover, if C_0 is a curve in Y, the strict transform of C_0 has class p^*C_0 - ∑_i m_i E_i, where m_i is the multiplicity of C_0 at p_i. We compute thatK_X ·(p^*C_0 - ∑_i=1^n m_i E_i ) = K_Y · C_0 + ∑_i=1^n m_i.Since K_Y is nef, we have K_Y · C_0 ≥ 0 and therefore p_i ∉ C_0 for all 1≤ i≤ n. Recall that ^b(X) admits a semiorthogonal decomposition^b(X) = ⟨_E_1(-1), …, _E_n(-1), Lp^* ^b(Y) ⟩.Since (S') is disjoint from each E_i, we have, by <ref>,_^b(X) (S', _E_i(-1)[l])=0=_^b(X) ( _E_i(-1), S'[l])for every l ∈. Hence, S' ∈Lp^* ^b(Y), i.e., there exists a object S ∈^b(Y) such that Lp^*S≅ S'. Note that Rp_*_X = _Y implies_^b(X)(S',S'[l])=_^b(X)(Lp^* S, Lp^* S[l]) = _^b(Y)( S,Rp_* Lp^* S[l]) = _^b(Y)( S, S⊗^LRp_*_X[l]) = _^b(Y)( S, S[l]).for every l∈. Moreover, since (S') is disjoint from the exceptional divisors E_i, <ref> shows that Lp^*S⊗_X(∑_iE_i) = Lp^*S. Hence, Lp^*S ⊗ p^* ω_Y ≅Lp^*S. Pushing forward via Rp_* and using the projection formula shows that S ⊗ω_Y ≅ S. Thus, S is a spherical object in ^b(Y).Now let S ∈^b(Y) be a spherical object.As in <ref>, we have_^b(X)(Lp^* S, Lp^* S[l]) = _^b(Y)( S, S[l])for every l ∈. Thus, Lp^* S is spherical if Lp^*S ⊗ω_X ≅Lp^*S. Let x ∈ X be a point, then Rp_* k(x)=k(p(x)) and by adjunction_^b(X)( S, Rp_* k(x)[l]) = _^b(X)(Lp^*S, k(x)[l])for every l ∈. Hence, <ref> shows that(Lp^* S ) = p^-1((S)). By the previous claim,it is necessary that p^-1((S)) is disjoint from each E_i for Lp^* S to be spherical. On the other hand, this is also sufficient, since Lp^* S ⊗_X(∑_i E_i)= Lp^* S holds by <ref> if p^-1((S)) is disjoint from each E_i.Let Y be a minimal surface of Kodaira dimension 1 whose elliptic fibration has only irreducible fibers. It follows from the description of (^b(Y)) in <cit.> that ^b(Y) does not contain spherical objects. Thus, if X is a blow-up of Y in a finite set of points, then, by <ref>, ^b(X) does not contain spherical objects either. Alternately, this can also be deduced from <ref>.
Gate-tunable topological superconductivity in a supramolecular electron spin lattice

Rémy Pawlak,^1∗† Jung-Ching Liu,^1† Chao Li,^1† Richard Hess,^1† Hongyan Chen,^2 Carl Drechsel,^1 Ping Zhou,^3 Robert Häner,^3 Ulrich Aschauer,^3,4 Thilo Glatzel,^1 Silvio Decurtins,^3 Daniel Loss,^1 Jelena Klinovaja,^1 Shi-Xia Liu,^3∗ Wulf Wulfhekel,^2 & Ernst Meyer^1

^1Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
^2Physikalisches Institut, Karlsruhe Institute of Technology, Wolfgang-Gaede-Str. 1, 76131 Karlsruhe, Germany
^3Department of Chemistry, Biochemistry and Pharmaceutical Sciences, University of Bern, Freiestrasse 3, 3012 Bern, Switzerland
^4Department of Chemistry and Physics of Materials, University of Salzburg, Jakob-Haringer-Strasse 2A, 5020 Salzburg, Austria

^†These authors equally contributed; ^∗To whom correspondence should be addressed; E-mails: [email protected], [email protected]

Opinion summarization sets itself apart from other types of summarization tasks due to its distinctive focus on aspects and sentiments. Although certain automated evaluation methods like ROUGE have gained popularity, we have found them to be unreliable measures for assessing the quality of opinion summaries. In this paper, we present OpinSummEval, a dataset comprising human judgments and outputs from 14 opinion summarization models. We further explore the correlation between 26 automatic metrics and human ratings across four dimensions. Our findings indicate that metrics based on neural networks generally outperform non-neural ones. However, even metrics built on powerful backbones, such as BART and GPT-3/3.5, do not consistently correlate well across all dimensions, highlighting the need for advancements in automated evaluation methods for opinion summarization. The code and data are publicly available at https://github.com/A-Chicharito-S/OpinSummEval/tree/main.

§ INTRODUCTION

Opinion summarization has garnered significant research interest in light of recent advancements in neural networks and large datasets <cit.>. In contrast to conventional summarization tasks, which focus on preserving key information in unstructured texts like news articles, opinion summarization places emphasis on extracting prevalent aspects and expressing coherent sentiments from a vast number of reviews, which are often disorganized and occasionally contradictory (Table <ref>).
Due to the large size of datasets and the cost of extensive annotations <cit.>, opinion summarizers <cit.> are usually trained in an unsupervised manner, where pseudo pairs of {reviews, summary} are constructed from a collection of reviews without relying on human-written references <cit.>. Despite significant advancements in datasets and architectures, evaluating the performance of models for opinion summarization remains a challenge. One common approach is automated evaluation, which employs automatic metrics like ROUGE <cit.> as criteria. While this method is efficient and provides stable results, it may not necessarily accurately reflect the model's performance from a human perspective (Table <ref>). Another approach is human evaluation, where annotators are tasked with scoring or ranking summaries from different models. Human evaluation is more closely aligned with common understandings and is therefore considered more reliable than automated scores. However, it is typically time-consuming and labor-intensive, making it suitable primarily for the testing stage and impractical for providing supervision signals during model training. Our literature review of 21 papers published between 2018 and 2023 (Appendix <ref>) reveals that only 3 papers introduce different metrics as complements to ROUGE for evaluating opinion summarization. We argue that, in addition to the advancements made in opinion summarization datasets and models, attention should be given to the evaluation of metrics in terms of their alignment with human judgments. Such emphasis would be valuable in selecting an appropriate metric that facilitates efficient and human-aligned evaluation of model performance. Moreover, opinion summarization possesses distinctive characteristics, such as its emphasis on aspects, the diversity of opinions and expressions, and the difficulty of expressing coherent sentiments from potentially conflicting reviews. These factors set opinion summarization apart from most other summarization tasks and introduce new challenges for automatic metrics to correlate well with human judgments. Hence, even though certain metrics have demonstrated effectiveness in other summarization tasks <cit.>, their reliability and performance in opinion summarization still lack sufficient verification and comprehensive analysis.

Our work is motivated to fill this gap with the following contributions: 1) We introduce OpinSummEval, a dataset with human annotations on the outputs of 14 opinion summarization models over 4 dimensions, which is the first of its kind to the best of our knowledge; 2) We conduct a comprehensive evaluation of 26 metrics for opinion summarization. Our findings indicate that neural-based metrics, such as BARTScore <cit.> and ChatGPT <cit.>, exhibit superior performance compared to non-neural metrics like ROUGE; 3) We assess the performance of various models (statistically-based, task-agnostic, task-specific, and zero-shot) with OpinSummEval. Our analysis reveals that task-specific models can compensate for the limitations posed by model sizes through specialized paradigms. Furthermore, we observe that GPT-3.5 <cit.> consistently outperforms other models, as preferred by human evaluators. These contributions collectively enhance our understanding of opinion summarization, provide a benchmark dataset for future research, highlight the effectiveness of neural-based metrics, and offer insights into the performance of different opinion summarization models.
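As a concrete illustration of the pseudo-pair construction mentioned above, the sketch below shows one common leave-one-out variant in which a held-out review serves as the pseudo-summary of its peers; the function name, sampling strategy, and absence of any filtering are illustrative assumptions rather than the recipe of any particular model.

import random

def build_pseudo_pairs(entity_reviews, pairs_per_entity=1, seed=0):
    """Leave-one-out pseudo pairs: one review acts as the pseudo-summary of the rest."""
    rng = random.Random(seed)
    pairs = []
    for reviews in entity_reviews:
        if len(reviews) < 2:
            continue  # need at least one input review plus one pseudo-summary
        for _ in range(pairs_per_entity):
            held_out = rng.randrange(len(reviews))
            pairs.append({
                "reviews": [r for i, r in enumerate(reviews) if i != held_out],
                "summary": reviews[held_out],
            })
    return pairs

# Toy usage with two entities (e.g., two businesses on Yelp).
entity_reviews = [
    ["Great tacos.", "Service was slow.", "Tacos are amazing and the staff is friendly."],
    ["Clean rooms.", "The pool was cold.", "Rooms were spotless and quiet."],
]
print(build_pseudo_pairs(entity_reviews)[0])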
§ RELATED WORK

Automated Evaluation Despite the success of metrics <cit.> that compute n-gram overlaps, such statistically-based measurements usually fail to reward paraphrases that convey the same meaning. Recent advances in automatic metrics <cit.> take insights from neural networks and encourage diversity in words and phrases. <cit.> propose BERTScore, which measures word-wise similarities with BERT <cit.> embeddings. BARTScore <cit.> treats evaluation as a text generation task and uses the conditional probability of BART <cit.> as a metric. <cit.> cast evaluation as a Question Answering (QA) task and measure the quality of texts with a trained QA model. As the GPT family rises to prominence, metrics based on it <cit.> also show great potential. <cit.> instruct ChatGPT to evaluate with an integer score. <cit.> propose to augment the instructions with Chain of Thought (CoT) <cit.> and weight a set of predefined integer scores with their generation probabilities from GPT-3/4.

Metrics Evaluation in Summarization <cit.> investigate the effectiveness of metrics for text summarization using the pyramid method <cit.>. <cit.> evaluate metrics in text summarization by annotating the CNN/DailyMail dataset <cit.> in terms of relevance, consistency, fluency, and coherence. <cit.> similarly conduct evaluation for dialogue summarization, and <cit.> evaluate metrics for biomedical question summarization. Similar to our work, <cit.> constructs ReviewNLI and evaluates 4 metrics on opinion prevalence. However, none of the tasks share the characteristics of opinion summarization, nor do these works evaluate GPT-based metrics, which motivates our work to evaluate automated methods in the task of opinion summarization.

§ PRELIMINARIES

In this section, we introduce the task definition of metric evaluation, the selected summarization models, and the automatic metrics to be evaluated.

§.§ Task Definition

Given a dataset D containing N instances, we denote the i-th instance as d_i. With M summarization models, we denote ŝ_j^i as the output from the j-th model on d_i, and ℳ_k(ŝ_j^i) as the score assigned by metric ℳ_k. If we choose C as the correlation criterion, the relation ℛ between metrics ℳ_p and ℳ_q is measured at different levels <cit.>:

System-level correlation ℛ_sys(p, q)=C( [1/N∑_iℳ_p(ŝ_1^i), ..., 1/N∑_iℳ_p(ŝ_M^i)], [1/N∑_iℳ_q(ŝ_1^i), ..., 1/N∑_iℳ_q(ŝ_M^i)]) where the associated p-value reflects the significance of the correlation ℛ_sys(p, q).

Summary-level correlation ℛ_sum(p, q)=1/N∑_i C([ℳ_p(ŝ_1^i), ..., ℳ_p(ŝ_M^i)], [ℳ_q(ŝ_1^i), ..., ℳ_q(ŝ_M^i)]) where there is no p-value since the correlations are averaged over the dataset D.

§.§ Summarization Models

We selected 14 popularly used models[The detailed introduction and the resources for their outputs are listed in Appendix <ref>.] in opinion summarization from 4 categories: statistically-based, task-agnostic, task-specific, and zero-shot. We use the superscripts Ext and Abs to denote extractive and abstractive models.

Statistically-Based models rely on linguistic features of reviews and the corresponding statistics to perform extractive summarization. Models in this category include LexRank^Ext <cit.>, Opinosis^Ext <cit.>, and BertCent^Ext <cit.>.

Task-Agnostic models are pre-trained language models (PLMs) intended to fit multiple tasks. Models in this category are finetuned with suggested hyperparameters to achieve competitive performance.
We select BART^Abs <cit.>, T5^Abs <cit.>, and PEGASUS^Abs <cit.> as our backbones.

Task-Specific models are designed specifically for opinion summarization, with objectives and modules that address obstacles such as unsupervised training. We choose COOP^Abs <cit.>, CopyCat^Abs <cit.>, DenoiseSum^Abs <cit.>, MeanSum^Abs <cit.>, OpinionDigest^Abs <cit.>, PlanSum^Abs <cit.>, and RecurSum^Abs <cit.> as the representatives.

Zero-Shot models are not trained on any datasets for opinion summarization and are tested directly. We choose GPT-3.5^Abs <cit.> as the backbone.

§.§ Evaluation Metrics

We choose 26 metrics[The detailed introduction and resources for their implementations are listed in Appendix <ref>.] to evaluate their effectiveness in opinion summarization. They are categorized into non-GPT and GPT-based, depending on whether they are built upon GPTs.

Non-GPT metrics include commonly-used measurements in opinion summarization and popularly evaluated metrics from related works <cit.>. We choose the following metrics to evaluate: (statistically-based) ROUGE <cit.>, BLEU <cit.>, METEOR <cit.>, TER <cit.>, and ChrF <cit.>; (neural-based) BERTScore <cit.>, BARTScore <cit.>, BLANC <cit.>, BLEURT <cit.>, InfoLM <cit.>, BaryScore <cit.>, MoverScore <cit.>, Sentence Mover’s Similarity <cit.>, EmbeddingAverage <cit.>, VectorExtrema <cit.>, GreedyMatching <cit.>, Perplexity-[], with PEGASUS as the backbone, Prism <cit.>, S^3 <cit.> and SUPERT <cit.>; (QA-based) QAFactEval <cit.>, QuestEval <cit.> and SummaQA <cit.>; (NLI-based) SummaC <cit.>.

GPT-Based metrics are built upon the GPT family and its variants. Specifically, we choose Perplexity-[], with GPT-2 <cit.> as the language model, ChatGPT <cit.>, with a GPT-3.5 model as the backbone[The prompts we use are shown in Appendix <ref>.], and two variants[The prompts and CoT are shown in Appendix <ref>.] of G-Eval <cit.>: G-Eval-[], which weights a set of predefined scores with the generation probabilities conditioned on the instructions and CoT, with a GPT-3.5 model[In <cit.>, the choice is a GPT-3.5 variant that is more powerful, however less efficient and more expensive, than the backbone used here.] as the backbone; G-Eval-[], which gives integer scores based on the instructions and CoT, with a GPT-3.5 model as the scoring model.

§ OPINSUMMEVAL

In this section, we introduce the dataset upon which annotations are carried out, the 4 dimensions to be annotated, the detailed annotation process, and the analysis of the annotation results.

§.§ Dataset

Yelp <cit.> is a widely-used dataset that has promoted a vast body of research in opinion summarization, and it is the dataset for which we are able to collect the most model outputs[A discussion on such a choice is shown in Appendix <ref>]. We base our annotations on its test set, which contains 100 instances, each consisting of 8 reviews on the same product/service and 1 human-written reference.

§.§ Dimensions

Instead of choosing coherence, consistency, fluency, and relevance <cit.> as the dimensions to evaluate, we select the following 4 dimensions consistent with the characteristics of opinion summarization.

Aspect Relevance measures whether the mainly discussed aspects in the reviews are covered exactly by the summary. It focuses on whether the summary correctly reflects the mainly discussed aspects in the reviews.

Self-Coherence measures whether the summary is consistent within itself in terms of sentiments and aspects.
It focuses on whether the summary is coherent and does not reflect conflicting opinions.

Sentiment Consistency measures whether the summary is consistent with the reviews in terms of sentiments for each aspect. It focuses on whether the summary captures the main sentiment in the reviews for each aspect.

Readability measures whether the summary is fluent and informative. It focuses on whether the summary is well-written and valuable.

§.§ Process

The annotation is carried out on the test set of Yelp with the outputs of the aforementioned 14 models. For each instance, we ask the annotators to rate on an integer scale from 1 (worst) to 5 (best) and to annotate every summary independently over the 4 dimensions. The overall workload is 2 (# of annotators) × 100 (# of instances) × 14 (# of models) × 4 (# of dimensions) = 11200 scores, where each dimension receives 2 annotations. The annotation is conducted independently and each annotator rates one batch (with a size of 10) at a time to ensure consistency and reliability. The final score of a summary at each dimension is the average of its annotations, and the annotation process with guidelines is detailed in Appendix <ref>.

§.§ Analysis

Annotation Distribution We count the annotations for different dimensions[A sample size of 2 (# of annotators) × 100 (# of instances) × 14 (# of models) = 2800 for each.] and show their distributions in Figure <ref>. The dissimilarity of annotation distributions among any two dimensions is evident, suggesting that OpinSummEval maintains independence across dimensions. We observe that the majority of annotations assign a score of 3 or 4 across the four dimensions, which indicates that most models can consistently generate moderately high-quality summaries across various dimensions. Regarding deviations within each score, we observe that scores ranging from 2 to 4 exhibit significant variability, whereas scores of 1 and 5 demonstrate relatively smaller deviations. We argue this is due to the fact that summaries evaluated as the worst/best in one dimension often tend to perform poorly/exceptionally across others as well.

Annotation Agreement We choose Cohen's κ <cit.> and Gwet's AC1 <cit.> to evaluate the annotation agreement. As shown in Table <ref>, we report the averaged agreement over the batches for each dimension. We observe that Cohen's κ is within an acceptable range[We show its interpretation and the agreement measured under Fleiss' κ <cit.> and Krippendorff’s α <cit.> in Appendix <ref>.] between 0.7771 and 0.9055, and the annotators tend to have a higher agreement in terms of “aspect relevance” and “sentiment consistency” compared with the other two dimensions. This is reasonable since evaluating “aspect relevance” and “sentiment consistency” involves cross-examination against the reviews, while the others are rated self-referentially. A similar trend is also observed for Gwet's AC1.

§ EVALUATION RESULTS

§.§ Metric Evaluation

We measure the correlations between metrics and human annotations with Kendall's τ[A discussion of Pearson's r is detailed in Appendix <ref>.] following <cit.> and show the results in Table <ref>. We observe that certain metrics exhibit a stronger correlation at summary-level than at system-level, such as ROUGE-1 at sentiment consistency and MoverScore at aspect relevance, which is similar to the findings of <cit.>.
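As a concrete illustration of the two correlation levels defined in the task definition, the following minimal sketch computes both with Kendall's τ via scipy; the nested score dictionaries are illustrative placeholders rather than actual OpinSummEval data.

import numpy as np
from scipy.stats import kendalltau

def system_level(metric_scores, human_scores):
    """System-level: correlate the per-model averages; returns (tau, p-value)."""
    models = sorted(metric_scores)
    m = [np.mean(metric_scores[sys]) for sys in models]
    h = [np.mean(human_scores[sys]) for sys in models]
    tau, p = kendalltau(m, h)
    return tau, p

def summary_level(metric_scores, human_scores):
    """Summary-level: average the per-instance correlations over the dataset (no p-value)."""
    models = sorted(metric_scores)
    n_instances = len(metric_scores[models[0]])
    taus = []
    for i in range(n_instances):
        m = [metric_scores[sys][i] for sys in models]
        h = [human_scores[sys][i] for sys in models]
        tau, _ = kendalltau(m, h)
        taus.append(tau)
    return float(np.nanmean(taus))

# Toy data: scores of three hypothetical models on three instances.
metric_scores = {"A": [0.31, 0.42, 0.25], "B": [0.28, 0.40, 0.30], "C": [0.35, 0.45, 0.27]}
human_scores = {"A": [3.5, 4.0, 3.0], "B": [3.0, 4.5, 3.5], "C": [4.0, 4.5, 3.0]}
print(system_level(metric_scores, human_scores))
print(summary_level(metric_scores, human_scores))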
However, it is worth noting that for metrics that show a significant correlation (p-value ≤ 0.05), there is indeed a higher correlation at system-level than at summary-level across all the dimensions.

Our observations reveal that metrics relying on linguistic features, such as n-gram overlaps, exhibit lower correlations with human judgments across all four dimensions when compared to neural automatic metrics. In the case of the ROUGE-n family, it is commonly believed that ROUGE-1/2 assess informativeness, while ROUGE-L measures fluency <cit.>. However, despite their popularity in opinion summarization, their performance is rather unsatisfactory. None of them exhibits a high correlation with human evaluations, which is consistent with the findings of <cit.>. Based on our evaluation results, we recommend exercising caution when using ROUGE scores to provide training supervision or evaluate the quality of opinion summaries during testing. Regarding other statistically-based metrics like BLEU, METEOR, and ChrF, although they exhibit higher absolute correlation values compared to the ROUGE-n family, their correlations tend to be negative at the system-level and positive at the summary-level. This can potentially cause difficulties when interpreting their meanings. The only exception is TER, which demonstrates positive correlations at the system-level across most dimensions. However, the summary-level correlations are reversed, and overall, TER exhibits low and insignificant correlations at both levels.

Metrics based on neural networks generally exhibit strong correlations with human judgments across all four dimensions. Among all the variants, BERTScore_recall demonstrates the highest performance. This can be attributed to the fact that the recall score measures the extent to which words in the reference are covered by the summary. This similarity is akin to determining whether important opinions from the reviews (mentioned in the reference) are captured in the summary. We observe BARTScore_rev→ hyp consistently outperforms others across almost all four dimensions. We believe this superiority stems from two key factors. First, BART's power as a competitive backbone enables the measurement of conditional generation probabilities. Second, BARTScore_rev→ hyp directly measures the likelihood of a summary being generated from input reviews, which aligns with the main concept of summary evaluation. Compared to SMS[SMS is the abbreviation for Sentence Mover's Similarity.], InfoLM, and BaryScore, whose correlations are relatively low in magnitude and less significant, BLANC treats evaluation as a language understanding task over the input documents and achieves high correlations with dimensions that involve analyzing the reviews. Surprisingly, BLEURT exhibits strong correlations with readability and outperforms BLEU, ROUGE, and BERTScore, which are the three signals used in its training. This suggests that trainable metrics that learn from other metrics can yield competitive and even superior results. The competitive performance of SUPERT can be attributed to its pseudo reference, which comprises sentences extracted from reviews. However, since the extracted sentences can vary in style, they may not serve as a reliable proxy for measuring readability. For QA-based metrics, QAFactEval, QuestEval, and SummaQA all exhibit good correlations with dimensions reflecting relevance and consistency, in line with the observations of <cit.>.
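To make the rev→ hyp formulation discussed above concrete, the following is a minimal sketch of a BARTScore-style score: the average token log-likelihood of the summary conditioned on the concatenated reviews. The checkpoint name and the plain concatenation of reviews are simplifying assumptions here; the official BARTScore implementation may differ in preprocessing and in its choice of fine-tuned checkpoint.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

def bartscore_rev_to_hyp(reviews, summary, model_name="facebook/bart-large-cnn"):
    """Average log-likelihood of the summary tokens given the concatenated reviews."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()
    src = tokenizer(" ".join(reviews), return_tensors="pt", truncation=True, max_length=1024)
    tgt = tokenizer(summary, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        out = model(**src, labels=tgt["input_ids"])
    # out.loss is the mean per-token negative log-likelihood of the summary,
    # so its negation serves as the rev -> hyp score (higher is better).
    return -out.loss.item()

reviews = ["The pasta was fantastic.", "Friendly staff but a long wait for a table."]
print(bartscore_rev_to_hyp(reviews, "Good food and friendly staff, but service can be slow."))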
Although SummaC casts evaluation as a natural language inference (NLI) task to measure factual consistency, its correlations are hardly salient even in dimensions that reflect consistency. We suspect the reasons are two-fold: 1) self-coherence and sentiment consistency focus more on the summary itself and on sentiments rather than on factual agreement between the reviews and the summary, which is a shift away from the original purpose of SummaC; 2) opinions that are potentially scattered and heterogeneous in the reviews make it harder for the model to draw correct inferences, thus degrading the evaluation results.

Among GPT-based metrics, PPL-[] performs similarly to PPL-[][PPL stands for perplexity.], and both exhibit poor correlations across all dimensions. The two variants of G-Eval-[] exhibit limited alignment with human annotations. We suspect this is because the backbone we adopt is faster but less powerful than the original choice. This suggests that future directions may focus on developing metrics that prioritize efficiency without compromising quality. Regarding GPT-3.5-based metrics, ChatGPT-[] with handcrafted prompts generally outperforms CoT-enhanced G-Eval-[]. We believe that this gap can be attributed to: 1) differences in the prompts used, 2) an increase in input length after embedding CoT, and 3) potential drawbacks of using CoT without demonstrations; further investigation is necessary to understand the details. It is worth noting that ChatGPT-based metrics excel in measuring readability, indicating their potential as effective evaluators of linguistic soundness.

We observe that reference-free metrics (marked by ▾) generally outperform metrics that rely on reference-based evaluation. While the specific reasons require further investigation, we suspect that the evaluation results may be strongly influenced by the quality and style of human-written references. Therefore, future research could also explore the possibility of reference-free evaluation methods.

§.§ Model Evaluation

We evaluate the performance of the 14 models based on their average scores over the 4 dimensions and present the results in Table <ref>. We also report ROUGE-1/2/L scores (by convention) and BARTScore results (based on the previous analysis). Extractive models (LexRank, BertCent) are favored over all the dimensions since they select salient sentences from the reviews as summaries, and these sentences are usually informative and grammatically correct. The only exception is Opinosis, and we suspect this is because the model extracts incomplete phrases from the reviews and subsequently re-arranges them, which may result in confusing and potentially inaccurate summaries.

In comparison to task-agnostic PLMs, the performance of task-specific models is not consistently superior across all dimensions. We believe there are two primary reasons for this. First, we use PLMs with a depth of at least 12 layers, which is significantly larger than that of the task-specific models. Second, the training paradigms used for task-specific models may enhance performance in one dimension while potentially hindering it in another. For example, OpinionDigest is trained to reconstruct a sentence based on a set of extracted keywords. While this training approach may promote self-coherence, it can also lead to hallucinations and potential inconsistencies when compared to the reviews. However, it is important to note that our observations do not contradict the effectiveness of their proposed architectures and training schedules.
Notably, we observe that CopyCat achieves comparable performance to T5 (3.960 vs. 4.020) in terms of self-coherence, and COOP receives a higher rating for readability compared to PEGASUS (3.865 vs. 3.780). This suggests that their paradigms specifically designed for opinion summarization can compensate for the discrepancy in size and yield comparable capabilities. We present a case study of model outputs in Appendix <ref>.

§ DISCUSSION

§.§ The Choice of Metrics

Although our work has shown, as demonstrated in many similar research works <cit.>, that n-gram-based automated metrics, such as BLEU <cit.> and ROUGE <cit.>, are less aligned with humans than the newly-developed neural-based methods[Here and in the rest of this subsection, by "neural-based" we refer to metrics whose evaluation paradigms involve neural models rather than statistical counting such as n-grams.], such as BARTScore <cit.> and G-Eval <cit.>, we would like to suggest that choosing which metrics to use when evaluating opinion summarization models remains an unresolved issue. On the one hand, neural-based methods do show higher correlations with human evaluations; however, these methods might be inherently partial: for example, gender or social biases in the embeddings of a pre-trained model that is later used as the backbone of a neural-based metric might implicitly favor opinion summarizers that promote such biases. On the other hand, although n-gram-based metrics provide fast and efficient evaluations for both training and testing, their statistical nature makes it hard for them to capture the rich variation of human language; they might thus indirectly favor models that align closely with a limited set of human-written summaries, which would be less flexible and therefore less likely to satisfy the increasing demand for controllable opinion summarization <cit.>. Apart from the above dilemmas, both statistically-based and neural-based metrics rely on the number and quality of human-written summaries[This is also true for reference-free metrics that evaluate with some neural models, which are trained in a supervised fashion with human-annotated labels.], which might largely affect the evaluation outcomes: for example, if the maximum length of human-written references is less than L, then models producing summaries exceeding L are less likely to receive high scores, even though longer texts can sometimes convey more details that are beneficial for decision making.

Therefore, we suggest that automated metrics should be chosen carefully when used to evaluate the performance of opinion summarization models.
Although the results in this work could serve as a reference to motivate a specific choice, we argue that such a decision would be better made by weighing multiple considerations rather than relying solely on our analyses, since the reported correlations are not an absolute criterion showing that one metric is universally better than another.

§.§ Potential Evaluation Paradigms for Opinion Summarization

Since there are no metrics particularly tailored for opinion summarization at the time of this research, we would like to suggest some potential evaluation paradigms that might be effective for the development of opinion-summarization-specific metrics.

From the analyses in Section <ref>, we can see that QA-based (e.g., SummaQA) and text-generation-style (e.g., BARTScore) evaluation paradigms could be promising directions for developing novel metrics for opinion summarization, especially with the recent advancement of large language models (LLMs), which show astonishing abilities in both QA and language modeling. Comparing the performance of BERTScore and BARTScore, we can also conclude that the training objectives of the backbones affect the final evaluation results; thus, future works could further consider building metrics based on opinion summarization models whose training objectives naturally align with the evaluation process.

Based on the evaluation results from Section <ref>, we observe that among task-specific models, COOP ranks the best as measured by both ROUGE and BARTScore, and is favored by human annotators across different dimensions as well. COOP first searches for a convex combination of the latent representations based on input-output word overlaps, and then uses the searched latent vector to produce summaries, which is similar to the best-performing automated metric BARTScore_rev→ hyp that evaluates via (rev, hyp) matching. The prominent performance of COOP and its resemblance to BARTScore_rev→ hyp suggest that future works could take inspiration from COOP and design metrics based on input-output matching to evaluate models for opinion summarization.

§ CONCLUSION

We present OpinSummEval, a dataset that contains summaries from 14 opinion summarization models, annotated across four dimensions. Through a comprehensive investigation and analysis, we have the following findings: 1) Metrics based on n-gram statistics, such as ROUGE, exhibit poor correlations with human evaluation. Therefore, despite their popularity, future works in opinion summarization should be cautious when using these metrics; 2) Neural-based metrics perform better than non-neural metrics. However, it is important to note that the performance of powerful backbone models does not guarantee high correlations with human evaluation; 3) Only a few metrics consistently align well with human evaluation across all four dimensions, and BARTScore and QA-based metrics demonstrate competitive performance across multiple dimensions. This suggests that future development of metrics for opinion summarization could draw inspiration from the paradigms used in these metrics; 4) Recently proposed metrics based on GPT-3/3.5 excel in evaluating readability. However, their performance in other dimensions is influenced by the choice of prompts and backbones.
Careful consideration is suggested if these metrics are used for evaluation in opinion summarization. Based on our research, we hope that future works will recognize the importance of selecting proper evaluation methods, consider using metrics in addition to ROUGE, and even design novel metrics specifically tailored for opinion summarization.

§ LIMITATIONS

Annotation Scale An ideal dataset should encompass a substantial number of the following components: 1) model outputs, 2) annotations, and 3) instances. However, prior research works <cit.> have demonstrated that an increase in model outputs and annotators typically leads to a disproportionate rise in construction time. Consider the example of annotation, where achieving the desired consensus among n annotators necessitates conducting tests or re-annotations approximately n(n-1)/2 times, exhibiting a time complexity of O(n^2). Consequently, to ensure high-quality annotations, we employ two annotators while carrying out annotations on Yelp, so as to maximize the quantity of chosen models (14) and annotated instances (100).

§ ETHICS STATEMENT

The annotators are paid 8 dollars per hour, which is above the local minimum wage, and their personal information is removed from the dataset.

[Akbik et al.(2019)Akbik, Bergmann, Blythe, Rasul, Schweter, and Vollgraf]akbik2019flair Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54–59.[Alex et al.(2010)Alex, Grover, Shen, and Kabadjov]alex-etal-2010-agile Bea Alex, Claire Grover, Rongzhou Shen, and Mijail Kabadjov. 2010. https://aclanthology.org/W10-1804 Agile corpus annotation in practice: An overview of manual and automatic annotation of CVs. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 29–37, Uppsala, Sweden. Association for Computational Linguistics.[Amplayo et al.(2021a)Amplayo, Angelidis, and Lapata]amplayo-etal-2021-aspect Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021a. https://doi.org/10.18653/v1/2021.emnlp-main.528 Aspect-controllable opinion summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6578–6593, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.[Amplayo et al.(2021b)Amplayo, Angelidis, and Lapata]Amplayo_Angelidis_Lapata_2021 Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021b. https://doi.org/10.1609/aaai.v35i14.17481 Unsupervised opinion summarization with content planning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12489–12497.[Amplayo and Lapata(2020)]amplayo-lapata-2020-unsupervised Reinald Kim Amplayo and Mirella Lapata. 2020. https://doi.org/10.18653/v1/2020.acl-main.175 Unsupervised opinion summarization with noising and denoising. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1934–1945, Online. Association for Computational Linguistics.[Amplayo and Lapata(2021)]amplayo-lapata-2021-informative Reinald Kim Amplayo and Mirella Lapata. 2021. https://doi.org/10.18653/v1/2021.eacl-main.229 Informative and controllable opinion summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2662–2672, Online.
Association for Computational Linguistics.[Angelidis et al.(2021)Angelidis, Amplayo, Suhara, Wang, and Lapata]10.1162/tacl_a_00366 Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. https://doi.org/10.1162/tacl_a_00366 Extractive Opinion Summarization in Quantized Transformer Spaces. Transactions of the Association for Computational Linguistics, 9:277–293.[Angelidis and Lapata(2018)]angelidis-lapata-2018-summarizing Stefanos Angelidis and Mirella Lapata. 2018. https://doi.org/10.18653/v1/D18-1403 Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675–3686, Brussels, Belgium. Association for Computational Linguistics.[Banerjee and Lavie(2005)]banerjee-lavie-2005-meteor Satanjeev Banerjee and Alon Lavie. 2005. https://aclanthology.org/W05-0909 METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.[Bhandari et al.(2020)Bhandari, Gour, Ashfaq, Liu, and Neubig]bhandari-etal-2020-evaluating Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu, and Graham Neubig. 2020. https://doi.org/10.18653/v1/2020.emnlp-main.751 Re-evaluating evaluation in text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9347–9359, Online. Association for Computational Linguistics.[Bhaskar et al.(2023)Bhaskar, Fabbri, and Durrett]bhaskar2023prompted Adithya Bhaskar, Alexander R. Fabbri, and Greg Durrett. 2023. http://arxiv.org/abs/2211.15914 Prompted opinion summarization with gpt-3.5.[Bražinskas et al.(2020a)Bražinskas, Lapata, and Titov]brazinskas-etal-2020-shot Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020a. https://doi.org/10.18653/v1/2020.emnlp-main.337 Few-shot learning for opinion summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4119–4135, Online. Association for Computational Linguistics.[Bražinskas et al.(2020b)Bražinskas, Lapata, and Titov]brazinskas-etal-2020-unsupervised Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020b. https://doi.org/10.18653/v1/2020.acl-main.461 Unsupervised opinion summarization as copycat-review generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5151–5169, Online. Association for Computational Linguistics.[Bražinskas et al.(2021)Bražinskas, Lapata, and Titov]brazinskas-etal-2021-learning Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2021. https://doi.org/10.18653/v1/2021.emnlp-main.743 Learning opinion summarizers by selecting informative reviews. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9424–9442, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.[Brazinskas et al.(2022)Brazinskas, Nallapati, Bansal, and Dreyer]brazinskas-etal-2022-efficient Arthur Brazinskas, Ramesh Nallapati, Mohit Bansal, and Markus Dreyer. 2022. https://doi.org/10.18653/v1/2022.findings-naacl.113 Efficient few-shot fine-tuning for opinion summarization. 
In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1509–1523, Seattle, United States. Association for Computational Linguistics.[Chu and Liu(2019)]pmlr-v97-chu19b Eric Chu and Peter Liu. 2019. https://proceedings.mlr.press/v97/chu19b.html MeanSum: A neural model for unsupervised multi-document abstractive summarization. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1223–1232. PMLR.[Clark et al.(2019)Clark, Celikyilmaz, and Smith]clark-etal-2019-sentence Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. 2019. https://doi.org/10.18653/v1/P19-1264 Sentence mover's similarity: Automatic evaluation for multi-sentence texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2748–2760, Florence, Italy. Association for Computational Linguistics.[Cohen(1960)]doi:10.1177/001316446002000104 Jacob Cohen. 1960. https://doi.org/10.1177/001316446002000104 A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46.[Colombo et al.(2021)Colombo, Staerman, Clavel, and Piantanida]colombo-etal-2021-automatic Pierre Colombo, Guillaume Staerman, Chloé Clavel, and Pablo Piantanida. 2021. Automatic text evaluation through the lens of Wasserstein barycenters. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10450–10466.[Colombo et al.(2022)Colombo, Clavel, and Piantanida]Colombo_Clavel_Piantanida_2022 Pierre Jean A. Colombo, Chloé Clavel, and Pablo Piantanida. 2022. https://doi.org/10.1609/aaai.v36i10.21299 Infolm: A new metric to evaluate summarization &amp; data2text generation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10):10554–10562.[Devlin et al.(2019)Devlin, Chang, Lee, and Toutanova]devlin-etal-2019-bert Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. https://doi.org/10.18653/v1/N19-1423 BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.[Elsahar et al.(2021)Elsahar, Coavoux, Rozen, and Gallé]elsahar-etal-2021-self Hady Elsahar, Maximin Coavoux, Jos Rozen, and Matthias Gallé. 2021. https://doi.org/10.18653/v1/2021.eacl-main.141 Self-supervised and controlled multi-document opinion summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1646–1662, Online. Association for Computational Linguistics.[Erkan and Radev(2004)]10.5555/1622487.1622501 Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457–479.[Fabbri et al.(2022)Fabbri, Wu, Liu, and Xiong]fabbri-etal-2022-qafacteval Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. https://doi.org/10.18653/v1/2022.naacl-main.187 QAFactEval: Improved QA-based factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2587–2601, Seattle, United States. 
Association for Computational Linguistics.[Fabbri et al.(2021)Fabbri, Kryściński, McCann, Xiong, Socher, and Radev]10.1162/tacl_a_00373 Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. https://doi.org/10.1162/tacl_a_00373 SummEval: Re-evaluating Summarization Evaluation. Transactions of the Association for Computational Linguistics, 9:391–409.[Fleiss(1971)]fleiss1971measuring Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.[Forgues et al.(2014)Forgues, Pineau, Larchevêque, and Tremblay]forgues2014bootstrapping Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevêque, and Réal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In Nips, modern machine learning and natural language processing workshop, volume 2, page 168.[Fu et al.(2023)Fu, Ng, Jiang, and Liu]fu2023gptscore Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. http://arxiv.org/abs/2302.04166 Gptscore: Evaluate as you desire.[Ganesan et al.(2010)Ganesan, Zhai, and Han]ganesan-etal-2010-opinosis Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. https://aclanthology.org/C10-1039 Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 340–348, Beijing, China. Coling 2010 Organizing Committee.[Gao et al.(2023)Gao, Ruan, Sun, Yin, Yang, and Wan]gao2023humanlike Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, and Xiaojun Wan. 2023. http://arxiv.org/abs/2304.02554 Human-like summarization evaluation with chatgpt.[Gao and Wan(2022)]gao-wan-2022-dialsummeval Mingqi Gao and Xiaojun Wan. 2022. https://doi.org/10.18653/v1/2022.naacl-main.418 DialSummEval: Revisiting summarization evaluation for dialogues. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5693–5709, Seattle, United States. Association for Computational Linguistics.[Gao et al.(2020)Gao, Zhao, and Eger]gao-etal-2020-supert Yang Gao, Wei Zhao, and Steffen Eger. 2020. https://doi.org/10.18653/v1/2020.acl-main.124 SUPERT: Towards new frontiers in unsupervised evaluation metrics for multi-document summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1347–1354, Online. Association for Computational Linguistics.[Gwet(2008)]https://doi.org/10.1348/000711006X126600 Kilem Li Gwet. 2008. https://doi.org/https://doi.org/10.1348/000711006X126600 Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61(1):29–48.[He and McAuley(2016)]10.1145/2872427.2883037 Ruining He and Julian McAuley. 2016. https://doi.org/10.1145/2872427.2883037 Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, page 507–517, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.[Hosking et al.(2023)Hosking, Tang, and Lapata]hosking-etal-2023-attributable Tom Hosking, Hao Tang, and Mirella Lapata. 2023. https://doi.org/10.18653/v1/2023.acl-long.473 Attributable and scalable opinion summarization. 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8488–8505, Toronto, Canada. Association for Computational Linguistics.[Iso et al.(2021)Iso, Wang, Suhara, Angelidis, and Tan]iso-etal-2021-convex-aggregation Hayate Iso, Xiaolan Wang, Yoshihiko Suhara, Stefanos Angelidis, and Wang-Chiew Tan. 2021. https://doi.org/10.18653/v1/2021.findings-emnlp.328 Convex Aggregation for Opinion Summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3885–3903, Punta Cana, Dominican Republic. Association for Computational Linguistics.[Isonuma et al.(2021)Isonuma, Mori, Bollegala, and Sakata]isonuma-etal-2021-unsupervised Masaru Isonuma, Junichiro Mori, Danushka Bollegala, and Ichiro Sakata. 2021. https://doi.org/10.1162/tacl_a_00406 Unsupervised abstractive opinion summarization by generating sentences with tree-structured topic guidance. Transactions of the Association for Computational Linguistics, 9:945–961.[Isonuma et al.(2019)Isonuma, Mori, and Sakata]isonuma-etal-2019-unsupervised Masaru Isonuma, Junichiro Mori, and Ichiro Sakata. 2019. https://doi.org/10.18653/v1/P19-1206 Unsupervised neural single-document summarization of reviews via learning latent discourse structure and its ranking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2142–2152, Florence, Italy. Association for Computational Linguistics.[Krippendorff(2011)]krippendorff2011computing Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability.[Kusner et al.(2015)Kusner, Sun, Kolkin, and Weinberger]pmlr-v37-kusnerb15 Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. https://proceedings.mlr.press/v37/kusnerb15.html From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 957–966, Lille, France. PMLR.[Laban et al.(2022)Laban, Schnabel, Bennett, and Hearst]10.1162/tacl_a_00453 Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. https://doi.org/10.1162/tacl_a_00453 SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization. Transactions of the Association for Computational Linguistics, 10:163–177.[Landauer and Dumais(1997)]EmbeddingAvr Thomas K Landauer and Susan T. Dumais. 1997. https://doi.org/https://doi.org/10.1037/0033-295X.104.2.211 A solution to plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, page 211–240.[Lewis et al.(2020)Lewis, Liu, Goyal, Ghazvininejad, Mohamed, Levy, Stoyanov, and Zettlemoyer]lewis-etal-2020-bart Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. https://doi.org/10.18653/v1/2020.acl-main.703 BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics.[Li et al.(2019)Li, Li, and Zong]Li_Li_Zong_2019 Junjie Li, Haoran Li, and Chengqing Zong. 2019. https://doi.org/10.1609/aaai.v33i01.33016690 Towards personalized review summarization via user-aware sequence network. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):6690–6697.[Lin(2004)]lin-2004-rouge Chin-Yew Lin. 
2004. https://aclanthology.org/W04-1013 ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.[Liu et al.(2023)Liu, Iter, Xu, Wang, Xu, and Zhu]liu2023geval Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. http://arxiv.org/abs/2303.16634 G-eval: Nlg evaluation using gpt-4 with better human alignment.[Luo et al.(2023)Luo, Xie, and Ananiadou]luo2023chatgpt Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. http://arxiv.org/abs/2303.15621 Chatgpt as a factual inconsistency evaluator for text summarization.[Malon(2023)]malon2023automatically Christopher Malon. 2023. http://arxiv.org/abs/2307.14305 Automatically evaluating opinion prevalence in opinion summarization.[Nallapati et al.(2016)Nallapati, Zhou, dos Santos, Gulcehre, and Xiang]nallapati-etal-2016-abstractive Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. https://doi.org/10.18653/v1/K16-1028 Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics.[Nenkova and Passonneau(2004)]nenkova-passonneau-2004-evaluating Ani Nenkova and Rebecca Passonneau. 2004. https://aclanthology.org/N04-1019 Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145–152, Boston, Massachusetts, USA. Association for Computational Linguistics.[Oved and Levy(2021)]oved-levy-2021-pass Nadav Oved and Ran Levy. 2021. https://doi.org/10.18653/v1/2021.acl-long.30 PASS: Perturb-and-select summarizer for product reviews. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 351–365, Online. Association for Computational Linguistics.[Papineni et al.(2002)Papineni, Roukos, Ward, and Zhu]10.3115/1073083.1073135 Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. https://doi.org/10.3115/1073083.1073135 Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311–318, USA. Association for Computational Linguistics.[Pennington et al.(2014)Pennington, Socher, and Manning]pennington-etal-2014-glove Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. https://doi.org/10.3115/v1/D14-1162 GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.[Peters et al.(2018)Peters, Neumann, Iyyer, Gardner, Clark, Lee, and Zettlemoyer]peters-etal-2018-deep Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. https://doi.org/10.18653/v1/N18-1202 Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. 
Association for Computational Linguistics.[Peyrard et al.(2017)Peyrard, Botschen, and Gurevych]peyrard-etal-2017-learning Maxime Peyrard, Teresa Botschen, and Iryna Gurevych. 2017. https://doi.org/10.18653/v1/W17-4510 Learning to score system summaries for better content selection evaluation. In Proceedings of the Workshop on New Frontiers in Summarization, pages 74–84, Copenhagen, Denmark. Association for Computational Linguistics.[Popović(2015)]popovic-2015-chrf Maja Popović. 2015. https://doi.org/10.18653/v1/W15-3049 chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.[Radev et al.(2004)Radev, Jing, Styś, and Tam]RADEV2004919 Dragomir R. Radev, Hongyan Jing, Małgorzata Styś, and Daniel Tam. 2004. https://doi.org/https://doi.org/10.1016/j.ipm.2003.10.006 Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919–938.[Radford et al.(2019)Radford, Wu, Child, Luan, Amodei, Sutskever et al.]radford2019language Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.[Raffel et al.(2020)Raffel, Shazeer, Roberts, Lee, Narang, Matena, Zhou, Li, and Liu]JMLR:v21:20-074 Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. http://jmlr.org/papers/v21/20-074.html Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.[Rus and Lintean(2012)]rus-lintean-2012-comparison Vasile Rus and Mihai Lintean. 2012. https://aclanthology.org/W12-2018 A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157–162, Montréal, Canada. Association for Computational Linguistics.[Scialom et al.(2021)Scialom, Dray, Lamprier, Piwowarski, Staiano, Wang, and Gallinari]scialom-etal-2021-questeval Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. https://doi.org/10.18653/v1/2021.emnlp-main.529 QuestEval: Summarization asks for fact-based evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.[Scialom et al.(2019)Scialom, Lamprier, Piwowarski, and Staiano]scialom-etal-2019-answers Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. https://doi.org/10.18653/v1/D19-1320 Answers unite! unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3246–3256, Hong Kong, China. Association for Computational Linguistics.[Sellam et al.(2020)Sellam, Das, and Parikh]sellam-etal-2020-bleurt Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. https://doi.org/10.18653/v1/2020.acl-main.704 BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. 
Association for Computational Linguistics.[Shapiro and Wilk(1965)]eb32428d-e089-3d0c-8541-5f3e8f273532 S. S. Shapiro and M. B. Wilk. 1965. http://www.jstor.org/stable/2333709 An analysis of variance test for normality (complete samples). Biometrika, 52(3/4):591–611.[Snover et al.(2006)Snover, Dorr, Schwartz, Micciulla, and Makhoul]snover-etal-2006-study Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla, and John Makhoul. 2006. https://aclanthology.org/2006.amta-papers.25 A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, pages 223–231, Cambridge, Massachusetts, USA. Association for Machine Translation in the Americas.[Suhara et al.(2020)Suhara, Wang, Angelidis, and Tan]suhara-etal-2020-opiniondigest Yoshihiko Suhara, Xiaolan Wang, Stefanos Angelidis, and Wang-Chiew Tan. 2020. https://doi.org/10.18653/v1/2020.acl-main.513 OpinionDigest: A simple framework for opinion summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5789–5798, Online. Association for Computational Linguistics.[Tay et al.(2019)Tay, Joshi, Zhang, Karimi, and Wan]tay-etal-2019-red Wenyi Tay, Aditya Joshi, Xiuzhen Zhang, Sarvnaz Karimi, and Stephen Wan. 2019. https://aclanthology.org/U19-1008 Red-faced ROUGE: Examining the suitability of ROUGE for opinion summary evaluation. In Proceedings of the The 17th Annual Workshop of the Australasian Language Technology Association, pages 52–60, Sydney, Australia. Australasian Language Technology Association.[Thompson and Post(2020)]thompson-post-2020-automatic Brian Thompson and Matt Post. 2020. https://doi.org/10.18653/v1/2020.emnlp-main.8 Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90–121, Online. Association for Computational Linguistics.[Vasilyev et al.(2020)Vasilyev, Dharnidharka, and Bohannon]vasilyev-etal-2020-fill Oleg Vasilyev, Vedant Dharnidharka, and John Bohannon. 2020. https://doi.org/10.18653/v1/2020.eval4nlp-1.2 Fill in the BLANC: Human-free quality estimation of document summaries. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 11–20, Online. Association for Computational Linguistics.[Wang et al.(2023)Wang, Liang, Meng, Sun, Shi, Li, Xu, Qu, and Zhou]wang2023chatgpt Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. http://arxiv.org/abs/2303.04048 Is chatgpt a good nlg evaluator? a preliminary study.[Wang and Wan(2021)]wang-wan-2021-transsum Ke Wang and Xiaojun Wan. 2021. https://doi.org/10.18653/v1/2021.findings-acl.65 TransSum: Translating aspect and sentiment embeddings for self-supervised opinion summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 729–742, Online. Association for Computational Linguistics.[Wei et al.(2023)Wei, Wang, Schuurmans, Bosma, Ichter, Xia, Chi, Le, and Zhou]wei2023chainofthought Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. http://arxiv.org/abs/2201.11903 Chain-of-thought prompting elicits reasoning in large language models.[Yuan et al.(2023)Yuan, Zhang, Huang, and Huang]yuan2023revisiting Hongyi Yuan, Yaoyun Zhang, Fei Huang, and Songfang Huang. 2023. 
http://arxiv.org/abs/2303.10328 Revisiting automatic question summarization evaluation in the biomedical domain.[Yuan et al.(2021)Yuan, Neubig, and Liu]NEURIPS2021_e4d2b6e6 Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. https://proceedings.neurips.cc/paper_files/paper/2021/file/e4d2b6e6fdeca3e60e0f1a62fee3d9dd-Paper.pdf Bartscore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems, volume 34, pages 27263–27277. Curran Associates, Inc.[Zhang et al.(2020)Zhang, Zhao, Saleh, and Liu]pmlr-v119-zhang20ae Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. https://proceedings.mlr.press/v119/zhang20ae.html PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328–11339. PMLR.[Zhang* et al.(2020)Zhang*, Kishore*, Wu*, Weinberger, and Artzi]bert-score Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. https://openreview.net/forum?id=SkeHuCVFDr Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.[Zhao and Chaturvedi(2020)]Zhao_Chaturvedi_2020 Chao Zhao and Snigdha Chaturvedi. 2020. https://doi.org/10.1609/aaai.v34i05.6512 Weakly-supervised opinion summarization by leveraging external information. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9644–9651.[Zhao et al.(2019)Zhao, Peyrard, Liu, Gao, Meyer, and Eger]zhao-etal-2019-moverscore Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. https://doi.org/10.18653/v1/D19-1053 MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563–578, Hong Kong, China. Association for Computational Linguistics.§ A SURVEY OF AUTOMATIC METRICS IN OPINION SUMMARIZATION PAPERSWe surveyed 21 papers from 2018 to 2023 on opinion summarization published in top NLP/AI conferences and journals: ACL <cit.>, EMNLP <cit.>, NAACL <cit.>, EACL <cit.>, TACL <cit.>, ICML <cit.>, AAAI <cit.>. We find that the majority of papers report ROUGE-1/2/L results as the assessment of model performances, and only 4 papers <cit.> introduce new metrics (e.g., Perplexity, BERTScore, and QA-based metrics) in addition to ROUGE as alternative evaluation methods.§ LIST OF SELECTED MODELSWe introduce the 14 models we selected from 4 categories: statistically-based, task-agnostic, task-specific, and zero-shot.Statistically-Based ModelsLexRank^Ext <cit.> is an extractive summarizer based on a PageRank-alike algorithm. By constructing a network where sentences are treated as nodes, the model selects important reviews as the output summary. We use the implementation at <https://github.com/crabcamp/lexrank>.Opinosis^Ext <cit.> is a graph-based model that extracts salient reviews as the predicted summary. It connects sentences in a graph based on Part-Of-Speech (POS) tagging and selects reviews based on their redundancies[We use thetoolkit <cit.> for POS tagging and repeat the reviews 2-3 times to satisfy the requirement that input sentences should be ≥ 60.]. 
We use the implementation at <https://github.com/kavgan/opinosis-summarization>.BertCent^Ext <cit.> is a variant of the Centroid model <cit.> that uses BERT embeddings to summarize. We use the resources at <https://github.com/rktamplayo/PlanSum>.Task-Agnostic Models[All the models in this category are self-implemented.]BART^Abs <cit.> is a PLM that uses a denoising objective to recover the original texts from random masks. We choose BART-Large as the summarizer.T5^Abs <cit.> is trained in a unified framework where different tasks are united within a “text-to-text” objective. We choose T5-Base as our backbone.PEGASUS^Abs <cit.> is a PLM designed for abstractive summarization. Through sentence masking and reconstruction, it is sensitive to contexts and thus capable to generate informative summaries. We choose PEGASUS-Large as our summarization model.Task-Specific ModelsCOOP^Abs <cit.> is an aggregation framework inspired by convex optimization which learns to summarize by maximizing word overlaps between inputs and outputs. Specifically, we choosewith COOP as the summarizer due to its superior performance. We use the resources at <https://github.com/megagonlabs/coop>.CopyCat^Abs <cit.> is based on multi-layer variational auto-encoders and summarizes based on the latent encodings of reviews. We use the resources at <https://github.com/abrazinskas/Copycat-abstractive-opinion-summarizer>.DenoiseSum^Abs <cit.> disturbs the input reviews by introducing noises at the segment level and the document level, and learns to summarize from denoising. We use the resources at <https://github.com/rktamplayo/DenoiseSum>.MeanSum^Abs <cit.> is a model based on auto-encoders and learns to summarize by recovering the average encodings of reviews. We use the resources at <https://github.com/sosuperic/MeanSum>.OpinionDigest^Abs <cit.> is trained by reconstruction and can perform controllable summarization over aspects and sentiments. We use the resources at <https://github.com/megagonlabs/opiniondigest>.PlanSum^Abs <cit.> tackles the unsupervised challenge via content planning, which enhances relevance in the pseudo {reviews, summary} pairs to construct a better training set. We use the resources at <https://github.com/rktamplayo/PlanSum>.RecurSum^Abs <cit.> is based on variational auto-encoders where summaries are generated layer-wisely. We use the resources at <https://github.com/misonuma/recursum>.Zero-Shot ModelsGPT-3.5^Abs has shown competitive abilities to perform zero-shot opinion summarization <cit.>. We chooseas the backbone and set the temperature to 0 while keeping the other parameters as their default.§ LIST OF EVALUATION METRICSWe choose 26 metrics to evaluate their effectiveness in opinion summarization, and categorize them into non-GPT and GPT-based, depending on whether they are built upon GPTs. Non-GPT Metrics[For BLEU, METOR, EmbeddingAverage, VectorExtrema, and GreedyMatching, we use the implementation at <https://github.com/Maluuba/nlg-eval>.]ROUGE <cit.> measures the n-gram overlaps between the candidate and a set of references, and is popularly used in summarization tasks. We use the implementation at <https://github.com/Diego999/py-rouge>.BLEU <cit.> is the primary metric for machine translation. 
It focuses on precision and evaluates by computing n-gram overlaps between a candidate and a reference.METOR <cit.> measures the alignment between a candidate and a set of references by mapping unigrams.TER <cit.> is a metric that computes the ratio between the number of edits that convert the candidate into a reference and the average number of words in references. We use the implementation at <https://github.com/mjpost/sacrebleu>.ChrF <cit.> is a metric that measures the token-level n-gram overlaps between a candidate and a reference. We use the implementation at <https://github.com/m-popovic/chrF>.BERTScore <cit.> is a metric that evaluates a candidate and a reference with their similarity based on word-level BERT embeddings. We use the implementation at <https://github.com/Tiiiger/bert_score>.BARTScore <cit.> measures the quality of the target text by its generation probability conditioned on the source text. We use the implementation at <https://github.com/neulab/BARTScore>.BLANC <cit.> is a reference-free metric based on the assumption that summaries with quality are helpful for understanding the input documents, and evaluates by reconstructing the masked texts. We use the implementation at <https://github.com/PrimerAI/blanc>.BLEURT <cit.> is based on BERT and trained with scores from different metrics as the supervision signals for evaluation. We use the implementation at <https://github.com/google-research/bleurt>.InfoLM <cit.> generates distributions based on the masked word probability of texts and evaluates by calculating the similarity between the distributions of the candidate and the reference. We chooseas the metric due to its superior performance. We use the implementation at <https://github.com/PierreColombo/nlg_eval_via_simi_measures>.BaryScore <cit.> is a metric that measures the similarity between a candidate and a reference based on their Wasserstein distance. We use the implementation at <https://github.com/PierreColombo/nlg_eval_via_simi_measures>. MoverScore <cit.> measures the n-gram semantic distance between a candidate and a reference based on BERT embeddings. We use the implementation at <https://github.com/AIPHES/emnlp19-moverscore>.Sentence Mover’s Similarity <cit.> generalizes Word Mover's Distance <cit.> and evaluates the candidate with its distance to the reference. We consider two types of embeddings, namely, ELMo <cit.> and GLoVe <cit.>. We use the implementation at <https://github.com/eaclark07/sms>.EmbeddingAverage <cit.> computes the cosine similarity between the embeddings of the candidate and the reference, where the average embedding of words is treated as the sentence-level embedding.VectorExtrema <cit.> is a metric that computes similarities based on sentence-level embeddings, which is constructed by taking the extreme value at each dimension from the embeddings of the words in a sentence.GreedyMatching <cit.> calculates the similarity by comparing words from the candidate and the reference with a greedy matching algorithm.Perplexity-[] is a metric that uses a language model as the backbone to evaluate the generation likelihood of a sentence. We choose PEGASUS as our language model. We use the implementation at <https://huggingface.co/docs/transformers/perplexity>.Prism <cit.> is a measurement that evaluates the candidate sentence by paraphrasing. We use the implementation at <https://github.com/thompsonb/prism>.S^3 <cit.> is a model-based metric trained to aggregate scores from different metrics as the evaluation result. 
We use the implementation at <https://github.com/UKPLab/emnlp-ws-2017-s3>.SUPERT <cit.> is a reference-free metric that measures the semantic similarity between the candidate and a pseudo reference, which is comprised of salient sentences extracted from the source documents. We use the implementation at <https://github.com/yg211/acl20-ref-free-eval>.QAFactEval <cit.> is a QA-based metric focusing on evaluating factual consistency, which measures fine-grained answer overlap between the source and summary. We use the implementation at <https://github.com/salesforce/QAFactEval>.QuestEval <cit.> is a metric that views text evaluation as a QA task and generates questions from both the source document and the candidate itself. We use the implementation at <https://github.com/ThomasScialom/QuestEval>.SummaQA <cit.> is a QA-based metric that generates questions from source documents and treats the candidate sentence as the answer to evaluate its quality. We use the implementation at <https://github.com/ThomasScialom/summa-qa>.SummaC <cit.> is a lightweight metric that evaluates factual consistency using Natural Language Inference (NLI) models. We choose the SummaC_Conv model as the backbone and use the implementation at <https://github.com/tingofurro/summac>.GPT-Based Metrics[All the metrics in this category are self-implemented.]Perplexity-[] uses GPT-2 as the backbone to evaluate the generation likelihood of a sentence.ChatGPT <cit.> has shown great potential to perform human-alike evaluation. We chooseas our backbone and evaluate each summary independently. G-Eval <cit.> is a GPT-based metric that generates Chain of Thought (CoT) to improve its reasoning ability when evaluating texts, and there are two variants of it. G-Eval-[] weights a set of predefined scores with their generation probability conditioned on the instructions and CoT, and we useas the backbone model. We evaluate each dimension independently.G-Eval-[] directly gives integer scores based on the instructions and CoT. We chooseas the scoring model and rate each dimension independently.§ PROMPTS FOR CHATGPTThe prompt for ChatGPT is shown in Figure <ref>. § PROMPTS AND COT FOR G-EVAL The prompt for G-Eval and the generated CoTs conditioned on the prompt for the 4 dimensions are shown in Figure  <ref>.§ A DISCUSSION ON THE CHOICE OF DATASETWe show the statistics of available summaries on the two popularly used datasets in opinion summarization in Table <ref>, where “outputs-only”, “checkpoint-only”, and “both” stand for there are only outputs publicly available, only model checkpoint publicly available, and both outputs and model checkpoint publicly available. The Amazon <cit.> dataset is adapted from the Amazon product review dataset <cit.>, and contains 32 instances in its test set. Compared with the Amazon <cit.> dataset, we chose Yelp <cit.> based on the following two reasons: 1. the total number of available task-specific models on Yelp (7) is larger than that of Amazon (6); 2. the total number of available instances to be annotated on Yelp (100) is larger than that of Amazon (32), which matches the annotation sizes of previous works <cit.>.§ THE DETAILED ANNOTATION PROCESSThe annotation guideline is shown in Figure  <ref>. After reading the guideline, the annotators are asked to conduct pilot annotations to have a better understanding of the task and are encouraged to ask questions to gain feedback. 
We follow <cit.> to conduct agile annotation, where the annotation scheme evolves over time; thus, ensuring high annotation quality and early correction of potential mistakes. Specifically, after the i-th round of annotation is finished, we evaluate the annotation agreement of each batch using Cohen's κ, and batches with an agreement score less than 0.61 will later be annotated again in the i+1 round. After one round of annotation is finished, as the annotators become more experienced with the task, they are allowed to discuss issues related to the existing guideline and make potential refinements to it. During the entire annotation process, the annotators are promptly assisted by the authors, and are strictly forbidden to exchange ideas on giving which specific score to avoid false agreement. § INTERPRETATION OF COHEN'S Κ AND AGREEMENT UNDER OTHER MEASUREMENTSThe interpretation of Cohen's κ is shown in Table <ref>. The annotation agreement of the final annotations calculated with Fleiss' κ and Krippendorff’s α is shown in Table <ref>.§ A DISCUSSION ON PEARSON'S RAlthough <cit.> and <cit.> use Pearson's r to measure the correlations between automatic metrics and human annotations, we argue it does not apply in our case. Since Pearson's r assumes the two variables X and Y to be measured are normally distributed, we test the normality of different metrics and dimensions using , and report the results in Table <ref>. It is clear that only a few metrics and dimensions pass the test, which suggests that the correlations under Pearson's r only hold between certain metrics and dimensions; thus, we follow <cit.> and adopt Kendall's τ, which is a non-parametric method that does not make any assumptions on the distributions of variables. § EVALUATION RESULTS OF SOME METRICSThe system-level and summary-level evaluation results for EmbeddingAverage, VectorExtrema, GreedyMatching, Prism, and S^3 are shown in Table <ref>. § CASE STUDYDespite the success of task-agnostic PLMs and task-specific models, we observe that GPT-3.5 is consistently favored by annotators[GPT-3.5 is the best system in terms of BARTScore except for extractive models, which is because BARTScore (rev→ hyp) favors summaries containing sentences from the reviews.] across the 4 dimensions, which is similar to the findings of <cit.>. In the case study presented in Table <ref>, it is evident that CopyCat provides inaccurate recommendations, while BART exhibits self-contradiction. In comparison to LexRank, GPT-3.5 produces well-structured, concise summaries that cover a wider range of aspects. Based on these observations, we recommend that future research in opinion summarization consider the GPT family as a baseline, as their summaries tend to closely align with human evaluation across all dimensions.
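As a side note on the correlation analysis discussed above (in the appendix on Pearson's r), the summary-level computation reduces to a normality check followed by a rank correlation. The minimal SciPy sketch below illustrates this with placeholder score arrays rather than the paper's actual data.

```python
# Hypothetical sketch of the summary-level correlation analysis described above:
# one metric score and one human rating per summary; the arrays are illustrative only.
import numpy as np
from scipy.stats import kendalltau, shapiro

metric_scores = np.array([0.31, 0.42, 0.27, 0.55, 0.48, 0.36])  # e.g., BERTScore per summary
human_scores = np.array([3.0, 4.0, 2.5, 4.5, 4.0, 3.5])         # e.g., mean annotator rating

# Shapiro-Wilk normality check (motivates Kendall's tau over Pearson's r)
w_stat, p_value = shapiro(metric_scores)
print(f"Shapiro-Wilk: W={w_stat:.3f}, p={p_value:.3f}")

# Rank correlation between metric scores and human judgments
tau, tau_p = kendalltau(metric_scores, human_scores)
print(f"Kendall's tau={tau:.3f} (p={tau_p:.3f})")
```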
arXiv:2310.18122v2 [cs.CL]. Yuchen Shen and Xiaojun Wan (2023). OpinSummEval: Revisiting Automated Evaluation for Opinion Summarization.
Ran Wang^1 and Zhe Sage Chen^1,2,3

^1 Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
^2 Department of Neuroscience and Physiology, Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
^3 Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA

Recent advances in machine learning have made revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large-scale language models (LLMs) have recently achieved human-like intelligence thanks to BigData. With the help of self-supervised learning (SSL) and transfer learning, these models may potentially reshape the landscape of neuroscience research and make a significant impact on the future. Here we present a mini-review on recent advances in foundation models and generative AI models as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shift framework will open new avenues for many neuroscience research directions, and we discuss the accompanying challenges and opportunities.

Keywords: foundation model; generative AI; BigData; transformer; self-supervised learning; transfer learning; representation learning; embedding; brain-machine interface

§ INTRODUCTION

Advances in neurotechnology have allowed us to record large-scale, high-throughput neural data through in vivo electrophysiology and brain imaging. These BigData present a challenge for various neural data analyses, such as decoding and functional connectivity analysis, as well as for closed-loop brain-machine interface (BMI) applications in neuroscience experiments <cit.>. In parallel, machine learning research is also moving very fast. Rapid advances in deep learning and the development of large-scale foundation models and large language models (LLMs) have taken the whole world by storm, demonstrating remarkable and revolutionary findings in generating high-resolution synthetic images and yielding human-like natural language understanding and human-level creativity <cit.>. Without exaggeration, the past few years have witnessed a paradigm shift in AI towards foundation models in nearly every aspect of machine learning applications. How will these technological changes impact neuroscience, and what do they imply for the field? Answers to this question are part of our motivation for writing this review. Although the field is relatively new and the number of published studies on neuroscience applications based on foundation models or LLMs is still small, interest is rapidly growing, and many findings derived from this line of research may have a potentially significant impact on neuroscience.
Next, we will review recent applications of foundation models and generative AI in various neuroscience research areas, including but not limited to large-scale brain imaging data analysis, natural speech and language understanding, memory, emotion, mental state decoding, behavior, BMI, and data augmentation. Finally, we conclude the review with discussions and an outlook on future research opportunities and challenges.

§ FOUNDATION MODELS AND GENERATIVE AI

§.§ What are foundation models?

A foundation model is a "paradigm for building AI systems" in which a model trained on a large amount of unlabeled data can be adapted to many other applications. Foundation models are often trained using self-supervision with BigData and can be adapted to a wide range of tasks (e.g., text, images, speech, structured data, brain signals, and high-dimensional tensor data) (Fig. <ref>). One popular class of foundation models comprises LLMs (Table <ref>), which take language input and generate synthesized output. In general, foundation models work with multi-modal data types. In a recent group study conducted at Stanford University, it was concluded that "foundation models are scientifically interesting due to their impressive performance and capabilities, but what makes them critical to study is the fact that they are quickly being integrated into real-world deployments of AI systems with far-reaching consequences on people" <cit.>. At a very high level, there are two fundamental ideas behind LLMs and foundation models: (i) embedding, which aims to convert words or tokens into high-dimensional, statistically meaningful numbers; and (ii) SSL or contrastive learning.

§.§.§ Embedding

Embedding is a feature extraction technique that nonlinearly transforms the input signal into a representational vector that is easy to index, search, compute with, and visualize. In language processing applications, a word embedding projects words onto a meaningful space in which words that are "nearby in meaning" appear nearby in the embedding. Taking ChatGPT as an example, the dimensionality of the embedding space can be high (hundreds to thousands of dimensions, depending on the specific layer). Therefore, the embedding vectors, each containing a string of numbers, are located at coordinates in a "linguistic feature space". In deep neural networks, embedding layers enable us to learn the relationship between high-dimensional inputs and outputs more efficiently.

§.§.§ Self-supervised learning

In real life, humans and animals can learn efficiently from observation or from very few labeled examples, pointing to the limitation of BigData-based supervised learning. SSL is predictive learning in that it aims to predict missing parts of the input. In recent years, SSL techniques have achieved immense success in natural language processing (NLP) and computer vision by enabling models to learn from BigData at unprecedented scales <cit.>. Depending on the objective, SSL can take a generative, contrastive, or generative-contrastive (adversarial) form; a comprehensive survey of SSL is referred to elsewhere <cit.>. Under the SSL framework, fine-tuning the pre-trained models with a small percentage of labeled data can achieve results comparable with supervised training <cit.>. In NLP, pre-training methods like BERT (Bidirectional Encoder Representations from Transformers) have shown strong performance gains using SSL that masks individual words or subword units <cit.>.
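To make the masked-prediction objective concrete, the following minimal PyTorch-style sketch (our own illustration, not the BERT implementation) masks a fraction of tokens in toy sequences and trains a small encoder to recover them. All sizes and names are arbitrary, the 15% masking rate simply follows common practice, and positional embeddings are omitted for brevity.

```python
import torch
import torch.nn as nn

vocab_size, d_model, mask_id = 1000, 64, 0       # toy vocabulary; token id 0 reserved as [MASK]

embed = nn.Embedding(vocab_size, d_model)        # token embeddings (cf. the embedding subsection above)
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
to_vocab = nn.Linear(d_model, vocab_size)        # predicts the identity of each token

tokens = torch.randint(1, vocab_size, (8, 32))   # a batch of 8 sequences, 32 tokens each
mask = torch.rand(tokens.shape) < 0.15           # mask roughly 15% of positions, as in BERT
corrupted = tokens.masked_fill(mask, mask_id)    # replace masked positions with [MASK]

hidden = encoder(embed(corrupted))               # contextual representations
logits = to_vocab(hidden)                        # shape: (batch, sequence, vocab)

# self-supervised objective: recover the original tokens only at the masked positions
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
loss.backward()
```

Span-level masking (as in SpanBERT, discussed next) replaces the per-token mask with contiguous spans.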
Recently, <cit.> proposed an extended version of BERT known as SpanBERT, which masks contiguous random spans instead of random tokens and trains the span boundary representations to better predict the entire content of the masked span; by doing so, SpanBERT consistently outperforms BERT, with the largest gains on span selection tasks.

§.§ Transformer model

A transformer model is a deep neural network that learns context, and thus meaning, by tracking relationships in sequential data. Specifically, transformers were developed to solve the problem of sequence transduction, which transforms an input sequence into an output sequence, enabling end-to-end learning in machine translation, text generation, and sentiment analysis <cit.>. Transformers are the building blocks of many foundation models, such as BERT and GPT (Generative Pre-trained Transformer). Transformers are computationally efficient in simultaneous sequence processing since model training can be sped up through parallelization, a key feature missing in recurrent neural networks (RNNs) and long short-term memory (LSTM) networks; this feature has also made the creation of LLMs feasible. The transformer model has a seq2seq neural network architecture, consisting of encoding, decoding, and self-attention modules (Fig. <ref>a). Several concepts are fundamental to computations in the transformer:
* word embeddings: vector representations of words.
* positional embeddings: encoding the position of each token in a sequence and adding the positional information to the word embeddings.
* attention: understanding the context of a word by considering the words that come before or after it. In other words, if meaning is a result of relationships between things, then self-attention is a general way of learning relationships <cit.>.
* self-attention: weighing the importance of different parts of the input sequence against each other.
* multi-head attention: allowing the network to learn multiple ways of weighing the input sequence against itself.
In addition to NLP applications, the transformer architecture has been applied in other domains such as computer vision <cit.>, visual stimulus classification <cit.>, neural data analysis <cit.>, and reinforcement learning (RL) <cit.>.

§.§ Generative AI

Generative AI describes a class of algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. Several representative generative AI algorithms are summarized below.
* Variational Autoencoder (VAE): The VAE is a generative AI algorithm that uses deep learning to generate new content, detect anomalies, and remove noise <cit.>. A VAE consists of an encoder and a decoder, separated by the latent space (Fig. <ref>b). The latent space contains an abstract representation of the data that retains only the most meaningful information (i.e., dimensionality reduction). The model can learn the data distribution, so that a corresponding output can be reconstructed from a new sample input.
* Generative Adversarial Network (GAN): A GAN is a class of deep learning framework that uses two neural networks, a generator and a discriminator (Fig. <ref>c), to generate new and realistic synthetic data similar to the samples in the training set. Specifically, the generator network takes random noise as input and generates synthetic data; it aims to produce data that are indistinguishable from the real data in the training set. The generator tries to create realistic samples that follow the patterns present in the original dataset.
On the other hand, the discriminator network evaluates the data it receives and tries to distinguish between real data from the training set and the synthetic data produced by the generator. Its goal is to correctly classify whether the input data are real or generated by the generator. The discriminator provides feedback to the generator, helping it improve its generated samples. To date, the GAN and many of its variants have found numerous applications in image generation, image-to-image translation, super-resolution imaging, text-to-image synthesis, and video generation <cit.>.
* Generative Pre-trained Transformer (GPT): GPT refers specifically to a series of language models that use the transformer architecture to understand and generate coherent and contextually relevant text. Because of its powerful predictive ability, GPT is effective for a variety of NLP tasks, including text generation, translation, and summarization. The basic idea behind GPT is to apply SSL and train on large datasets containing a diverse range of text from various sources. Upon the completion of learning, the model takes the sequence of tokens that corresponds to the text seen so far, finds an embedding that represents them, and then generates a large number of values that are turned into probabilities for predicting possible next tokens <cit.>. The newer GPT developments, such as GPT-3 <cit.> and GPT-4, represent a landmark in this technology.
* Diffusion Model: Diffusion models refer to a class of latent generative models that model the distribution of data based on Markov chains and variational inference (Fig. <ref>d) <cit.>. These models are designed to capture the underlying data distribution by iteratively transforming a simple distribution into a complex one. Diffusion models offer a promising avenue for deep generative modeling owing to their robust expressive capacity and their ability to generate data via ancestral sampling without the prerequisite of a posterior distribution. Unlike other deep generative models such as the VAE and GAN, training diffusion models is relatively simple. To date, diffusion models have been used in image generation, NLP, and time series analysis.
* Latent Score-based Generative Model (LSGM): The LSGM generalizes the ideas of the VAE and the diffusion model; it maps the input onto a latent space and applies the diffusion model to the latent embeddings of the data (Fig. <ref>e) <cit.>. As an extension of score-based generative models <cit.>, the LSGM has several key computational advantages: synthesis speed, expressivity, and tailored encoders and decoders.
Foundation models can serve as a basis for generative AI. BERT and GPT models have already been used as the building blocks for developing more sophisticated generative AI models. For instance, <cit.> developed a self-supervised pre-trained foundation model on vision-language multi-modal input, which only requires weakly semantically correlated image-text training pairs; specifically, they demonstrated that the foundation model not only can generate high-level concepts and describe complicated scenes, but also has an ability to imagine, which represents a step towards artificial general intelligence (AGI). Furthermore, foundation models may provide a starting point for developing more advanced generative AI systems. Researchers and developers often fine-tune or extend foundation models to create specialized generative models tailored to specific tasks or domains.
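As a brief aside, the claim above that training diffusion models is relatively simple can be made concrete: a single DDPM-style training step only requires noising the data with a known schedule and regressing the injected noise. The sketch below is a schematic illustration with a toy noise-prediction network and an arbitrary linear schedule, not the implementation of any particular published model.

```python
import torch
import torch.nn as nn

T = 1000                                          # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)             # toy linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)     # cumulative products \bar{alpha}_t

# toy noise predictor: input is the noisy sample concatenated with a scaled timestep
eps_model = nn.Sequential(nn.Linear(17, 128), nn.ReLU(), nn.Linear(128, 16))

def training_step(x0):
    """One DDPM-style step: corrupt x0 into x_t, then regress the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))                         # random timestep for each sample
    eps = torch.randn_like(x0)                            # Gaussian noise to inject
    a_bar = alpha_bar[t].unsqueeze(1)                     # shape (b, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps  # forward (noising) process
    inp = torch.cat([x_t, t.float().unsqueeze(1) / T], dim=1)
    eps_hat = eps_model(inp)                              # predict the injected noise
    return nn.functional.mse_loss(eps_hat, eps)

loss = training_step(torch.randn(32, 16))                 # toy 16-dimensional "data"
loss.backward()
```

Sampling then runs the learned denoiser in reverse, from pure noise back to data, which corresponds to the ancestral sampling mentioned above.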
Foundation models may, moreover, facilitate transfer learning, which is vital for generative AI, as it allows models to leverage the knowledge and representations learned by foundation models to generate diverse and contextually appropriate content across different domains. One exciting application of generative AI is to decode brain signals and transform them into text or images, which may have a translational impact on the lives of individuals with traumatic brain injury (TBI) or severe paralysis who cannot communicate through speech, typing, or gestures <cit.>. Recently, GAN-based <cit.> and diffusion model-based <cit.> approaches have been developed to reconstruct human faces or visual images from fMRI recordings. See <cit.> for a short review on generative AI for brain imaging applications.

§ REPRESENTATION LEARNING AND TRANSFER LEARNING

§.§ Representation learning

Representation learning refers to a class of machine learning algorithms that extract meaningful patterns from raw data to create representations that are easily understood or processed <cit.>. During this process, dimensionality reduction, regularization, invariance, and sparsity play an important role. Current LLMs rely heavily on effective representation learning algorithms. Representation learning can be achieved by unsupervised, supervised, and self-supervised frameworks. For instance, as a special case of the SSL paradigm, contrastive learning can learn an embedding space such that similar instances have close representations while dissimilar instances stay far apart from each other. In addition to computer vision and NLP tasks, contrastive learning has been used to extract meaningful representations from neural data, including data from electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and other neuroscience modalities <cit.>. For instance, contrastive learning has enabled researchers to uncover patterns in brain connectivity data, providing insight into the organization and communication between different brain regions, or identifying connectivity-based biomarkers that distinguish healthy from pathological brains <cit.>. Contrastive learning can also learn representations in the latent feature space based on dimensionality reduction. One such example is contrastive PCA (cPCA), which can identify the dominant subspace that distinguishes two datasets collected under different conditions <cit.>. Additionally, the contrastive variational autoencoder (cVAE) <cit.>, as an extension of cPCA, offers a more flexible approach capable of modeling nonlinear relationships between the inputs and latent features. Finally, another contrastive learning paradigm, contrastive predictive coding (CPC) <cit.>, learns self-supervised representations by predicting the future in latent space using autoregressive models and a VAE; the model uses a probabilistic contrastive loss that induces the latent space to capture information that is maximally useful for predicting future data.

§.§ Transfer learning

Transfer learning represents a class of machine learning techniques where knowledge learned from one task is reused to boost performance on a related task or to generalize out-of-distribution via targeted re-training <cit.>.
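A minimal sketch of this idea, assuming a pre-trained encoder is available as a PyTorch module, is to freeze the backbone and re-train only a small task-specific head on the target data; the module, dimensions, and task below are purely illustrative.

```python
import torch
import torch.nn as nn

# stand-in for a pre-trained backbone (e.g., an image or EEG encoder); in practice
# this would be loaded from a checkpoint trained on a large source dataset
backbone = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))

for p in backbone.parameters():                  # freeze the knowledge learned on the source task
    p.requires_grad = False

head = nn.Linear(64, 3)                          # new head for a 3-class target task
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

def fine_tune_step(x, y):
    with torch.no_grad():                        # the backbone acts as a fixed feature extractor
        features = backbone(x)
    loss = nn.functional.cross_entropy(head(features), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss_value = fine_tune_step(torch.randn(16, 256), torch.randint(0, 3, (16,)))
```

When more labeled target data are available, a common next step is to unfreeze some or all backbone layers and continue training with a small learning rate.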
In deep learning models, transfer learning has been widely used in computer vision, image classification, and NLP tasks<cit.>.Transfer learning has found many applications in neuroscience.In neuroimaging data analysis, pre-trained models from NLP or computer vision domains, can be fine-tuned or used to extract features from raw neural data, facilitating out-of-domain tasks such as classification, segmentation, and decoding of neural activity. For instance, pre-trained models from related medical imaging tasks can be adapted to process and interpret neuroimaging data, leading to a more accurate and efficient analysis. Additionally, since the relationship between cognitive tasks is usually represented by similarity of neural representations or activated brain regions, transfer learning may perform better in task decoding with fMRI data if the source and the target cognitive tasks activate similar brain regions <cit.>. In BMI research, transfer learning can improve the performance and adaptability of BMI systems by leveraging knowledge from related tasks. Pre-trained models may help enhance the decoding of neural signals for controlling external devices or for interpreting brain activity associated with specific motor or cognitive tasks.Transfer learning can assist in the early detection and diagnosis of neurological or psychiatric disorders by leveraging knowledge from related medical domains. Pre-trained models from medical imaging or clinical data analysis can be adapted to identify biomarkersassociated with specific pathological conditions, aiding in early intervention and personalized treatment strategies. Notably, transfer learning can work well where the data sample size is small in neuroimaging-based prediction <cit.> and ECoG/EEG decoding analysis <cit.>.§ FOUNDATION MODELS AND GENERATIVE AIFOR NEUROSCIENCE APPLICATIONS§.§Context-dependent embedding mappingAs discussed earlier, representation learning can identify context-depending embeddings for a broad class of input signals. For instance, if the input is a speech signal, the embedding mapping for speech representation may be produced by “wave2vec” <cit.>, HuBERT <cit.>, and “data2vec” <cit.>. If the input is a neural time series such as EEG signal,the embedding mapping for EEG may include “EEG2vec”<cit.> or other representation learning methods <cit.>. Such methods have been demonstrated in neuroscience applications such as automatic sleep staging <cit.> and seizure detection <cit.>. In neural data analysis, embeddings have been widely adopted in unsupervised or supervised representation learning. For instance, automated neuron reconstruction and annotation of volume electron microscopy (VEM) datasets of three-dimensional images of brain tissue is computationally intensive and challenging. <cit.> first used unsupervised training to infer morphology embeddings (“neuron2vec”) of neuron reconstructions, and then trained cellular morphology neural networks (CMNs) to identify glia cells via supervised classification; they also demonstrated in using CMNs to identify subcellular compartments and the cell types of neuron reconstructions.Embeddings are useful for revealing low-dimensional neural dynamics and modeling naturalistic behaviors <cit.>. Although traditional latent variable modelshave been used for analyzing neural and behavioral data <cit.>, most of them are limited in encoding the context dependence. 
Incorporating task-relevant embedding vectors to form a context-relevant embedding would allow us to perform end-to-end learning efficiently. Recently, <cit.> have proposed a non-recurrent, BERT encoder-based neural data transformer (NDT) modelto explicitly model autonomous neural population activity and reported comparable performance between the NDT model and other RNN models. In their NDT model, inputs to transformer layers were first normalized and enriched through contextual information (“self-attention” blocks), and passed through a feedforward module.§.§ Brain imaging Human neuroimaging provides a window to examine a healthy and diseased brain, in terms of both structural and functional forms, including EEG, MEG, fMRI, diffusion tensor imaging (DTI),and positron emission tomography (PET). See <cit.> for a review of generative AI for brain imaging, coveringco-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and decoding analysis. Several lines of work have proposed generative AI approaches to reconstruct visual images based on fMRI data <cit.>. For instance, <cit.> first trained a VAEnetwork using a GANunsupervised procedure over a large dataset of celebrity faces, where the VAE latent space provided a topologically organized 1024-dimensional embedding of each image. Next, theypresentedthousands of face images to human subjects, and learned a linear mapping between multi-voxel fMRI activation patterns and latent embeddings. Finally, they applied this mapping to novel face images, translating fMRI patterns into reconstructed faces.<cit.> developed a self-supervised pre-trained image-text multi-modal foundation model which outperformed CLIP (Contrastive Language-Image Pre-Training) model even with a small percentage (∼3.75%) of training pairs. The image and text were first encoded individually by pre-trained uni-modal large-scale models, vision transformer (ViT) and BERT. The output of BERT was then projected toa trained mapping layer that aligns with ViT features. By comparing the encoded image encoding feature with fMRI imaging of the human visual cortex, their results showed that the proposed multi-modal model has higher prediction accuracy than the uni-modal image encoder.§.§ Natural language and speechSpeech and language understanding involvesa deep comprehension of their generation and processing (in both sound and text), enabling computers to perform tasks such as speech recognition, language translation, sentiment analysis, and text summarization. Representing human speech from brain signals (such as ECoG and fMRI) consists in decoding neural activityassociated with speech production, perception, or comprehension. It has been known that natural speech reveals a semantic map that tiles the human cerebral cortex<cit.>, and the semantic space iscontinuously distributed across the brain describing representations of thousands of object and action categories <cit.>.On the one hand,the rich features extracted from thefoundation models provide a new hypothesis when studying brain representations during specific speech and language tasks. For example, the ECoG activity in the superior temporal gyrus (STG) and inferior frontal gyrus (IFG) of the human brain was found to be correlated with features extracted by the GPT model <cit.>. 
Since predictive pre-training of the GPT model was capable of encoding contextual information, word onset, and word surprisal, this finding suggests that the human auditory cortex may encode speech in a similar manner. The contextual encoding phenomenon was also found when correlating neural representations in the human auditory cortex with the HuBERT model's embeddings <cit.>. On the other hand, a growing number of studies have focused on decoding human speech from invasive brain recordings, using either intracranial ECoG or intracortical spiking activity <cit.> (see the review of BMI applications below). Recently, <cit.> developed a contrastive learning approach to decode speech based on non-invasive magneto- or electro-encephalography (MEG/EEG). They first employed a large-scale pre-trained speech encoding model ("wave2vec 2.0" <cit.>) to extract semantic features from speech, and then trained a decoding model to extract features that converged to the speech features of the corresponding trial while diverging from the speech features of other trials. The model was capable of identifying the speech segment whose features best matched the decoded neural features. This work represents a large step forward for clinical practice without putting patients at the risk of brain surgery. Furthermore, EEG signals can be leveraged to augment multi-modal NLP models while using less training data <cit.>; in combination with EEG data, BERT embeddings have shown consistently improved performance on NLP tasks.

§.§ Memory and semantic reconstruction

In the traditional episodic memory paradigm, subjects are usually required to memorize arbitrary items (words or images), lacking the fundamental components of real-life naturalistic events occurring over a longer timescale. Multimedia stimuli such as music and film, however, may provide rich contextual and naturalistic memory behaviors <cit.>. In neuroscience experiments, recollection of short audiovisual segments from movies can be viewed as a proxy for real-life memory, which consists of a stream of continuous sensory experiences. In contrast to pure reconstruction of static images from brain imaging <cit.>, reconstructing high-quality images with correct semantics from brain recordings is more challenging due to the complex underlying representations of brain signals and the scarcity of data annotations. In the literature, neural decoders have been developed for semantic reconstruction of movie or visual experiences <cit.>. Extension of this framework using generative AI would represent a promising research direction. Recently, <cit.> proposed a conditional diffusion model with sparse masked modeling for human visual decoding. Inspired by sparse coding in the primary visual cortex, they first applied SSL and mask modeling in a large latent space for fMRI data; they then augmented a latent diffusion model (LDM) to reconstruct highly plausible images with semantically matching details from fMRI recordings using very few paired annotations.

§.§ Mental state and emotion

Decoding brain states and mental processes based on brain imaging data has been an active research area <cit.>. However, a common challenge is that the sample size is relatively small and the model is prone to overfitting. Recently, to decode mental states, <cit.> proposed to leverage publicly shared fMRI data (<https://openneuro.org/>) to pretrain a foundation model. Their procedure consisted of two steps.
In the first step, they performed self-supervised learning on fMRI time series using various modeling strategies: a seq-to-seq autoencoder, causal sequence modeling (similar to GPT-3), sequence-BERT, and network-BERT. In the second step, they applied a plug-in adaptation for decoding mental states. In so doing, the mental states can be viewed as a high-dimensional neural embedding, and the NLP-inspired architectures were able to learn useful representations of fMRI time series; more importantly, the pre-trained model also improved the decoding accuracy of mental states compared to several baseline models. Decoding emotions from brain activity is one fundamental task in human-computer interaction, yet most decoding methods are limited by the number of emotion categories or have ignored the discrepancy in emotion expression between the two brain hemispheres. Recently, <cit.> proposed a multi-view multi-label hybrid model for fine-grained emotion decoding: the generative component is a multi-view VAE that learns the brain activity of the left and right hemispheres, as well as their differences; the discriminative component is a multi-label classification network; furthermore, they used a label-aware module for emotion-specific neural representation learning and modeled the dependency of emotional states by masked self-attention mechanisms.

§.§ Naturalistic behavior

An important goal in neuroscience is to uncover the circuit mechanisms underlying cognitive processes and behavior, for which quantitative behavioral descriptions may play a vital role in linking brain activity and behavior <cit.>. Unlike constrained behaviors (such as head-fixed tasks or planar reach-and-grasp movements), naturalistic behavior refers to the behavior that animals have a tendency to exhibit under natural or realistic conditions, which is often pleasurable and beneficial to biological functioning. Given the success of sequence modeling in NLP, it is tempting to frame behavior analysis as a sequence modeling problem and apply this idea to context-relevant behavioral embedding and attention computation. Recently, <cit.> proposed a generalist agent (GATO) model for multi-modal, multi-task learning. Specifically, they encoded various modalities into a single vector space of "tokens" that can be ingested by a large sequence model such as a transformer; they also proposed various "tokenization" approaches to capture the large amount of multi-modal data that include standard vision and language datasets and some RL benchmarks.

§.§ Brain-machine interfaces

A BMI is a system that establishes a direct communication pathway between the brain's electrical activity and an external device, reading out encoded stimuli (e.g., speech, vision, location) or translating thought into action (i.e., neuroprosthetics) <cit.>. Such mind-reading devices can be used not only for translational applications <cit.>, but also for scientific inquiry into basic science questions <cit.>. Data sources in different BMIs have varying degrees of signal-to-noise ratio (SNR). For instance, while sharing the same temporal resolution, ECoG has a higher SNR than scalp EEG. On the other hand, calcium imaging or fMRI data have a much lower temporal resolution than ECoG or EEG. Because of this variability, directly mapping neural signals onto decoding targets (e.g., text, speech, and music) is not optimal.
Pre-trained foundation models can mitigate this by incorporating prior knowledge about the decoding targets, aligning them more closely with the neural signals.To date, LLMs have been incorporated into BMI systems to enhance text decoding. A wide range of machine learning techniqueshave been employed to increase the efficiency and accuracy of EEG-based spelling systems <cit.>. In practice, these language models can either auto-complete decoded words or be integrated into classifiers to refine the probability estimates of potential letters based on previously decoded ones. Leveraging language models has proven to significantly reduce word-error-rates, especially when decoding text from intracranial ECoG or Utah array during speech attempts <cit.>. A notable recent study <cit.> utilized a pre-trained GPT-2 model to interpret perceived speech from fMRI scans, converting neural patterns into text. This research, which involved over 16 hours of fMRI data from participants listening to stories, has showcased the potential of BMI in decoding imagined speech and even in cross-modal decoding, such as interpreting text representations of mental states during silent film viewing.Foundation models have also been instrumental in enhancing the performance of BMI systems, especially in decoding audio and visual signals <cit.>. For instance, <cit.> utilized a pre-trained speech generative model to decode clear speech from neural signals. Specifically, they used a sophisticated transformer-based speech encoding model (“HuBERT”) to learn a compact representation of speech, which was then transformed into high-quality speech using a pre-trained synthesizer. Beyond speech, music decoding has also seen progresses with the aid of generative AI. Multiple lines of recent research <cit.> have demonstrated the feasibility of decoding music from neural signals using deep learning, with pre-trained models such as musicLM <cit.>, to produce high-quality outputs. Similarly, image reconstruction from fMRI scans has achieved remarkable accuracy with the help of image generative models such as the VAE, GAN, and diffusion models <cit.>.In these studies, neural signals were first converted into latent representations, and then used to produce images through various generative models (Table <ref>). For instance, a two-stage scene reconstruction framework called “Brain-Diffuser" has been proposed: in the first stage, low-level image was first reconstructed via a very deep VAE, and in the second stage, a latent diffusion model conditioned on predicted multi-modal (text and visual) features was used to reconstruct high-quality images <cit.>.Remarkably, <cit.> developed an real-time visual decoding strategy from MEG recordings using a foundation model. The model consists of three modules: (i) pre-trained embedding obtained from images, (ii) an MEG module trained end-to-end, and (iii) a pre-trained image generator. Furthermore, the brain-to-image readout was decoded witha foundational image model known as DINOv2. The authors reported that MEG-based decoding can recover high-level visual features compared to fMRI-based decoding, offering a real-time BMI paradigm (∼250 ms delay) for the human brain. To date, most of brain decoding applications have been reported in human research since data format and acquisitionare relatively universal, which may not be the case in animal studies. 
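To illustrate how the language-model priors mentioned earlier in this subsection can be folded into a letter- or word-level decoder, the sketch below rescores hypothetical candidate sentences (e.g., hypotheses produced by an ECoG classifier) by combining each candidate's decoder score with a GPT-2 log-likelihood computed via the Hugging Face transformers library; the candidate strings, decoder scores, and weighting are invented for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_log_likelihood(sentence: str) -> float:
    """Approximate total log-probability of a sentence under GPT-2 (mean token loss times length)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    return -out.loss.item() * ids.shape[1]

# hypothetical decoder hypotheses and their (made-up) neural-decoder log-scores
candidates = {"i want some water": -4.1, "i want sum water": -3.9, "eye want some water": -3.8}
alpha = 0.5                                      # weight of the language-model prior

rescored = {s: score + alpha * lm_log_likelihood(s) for s, score in candidates.items()}
print(max(rescored, key=rescored.get))           # candidate favored after combining both scores
```

In practice, the weighting between decoder evidence and the language-model prior would be tuned on held-out data.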
Recently, built upon a foundation model known as Perceiver IO <cit.>, <cit.> developed a new framework called POYO (Pre-training On manY NeurONs) for large-scale training transformer models end-to-end on multi-session and across-individual electrophysiology datasets. POYO introducesinnovative spike-based tokenization strategies and used pre-trained models (with possible fine tuning) for neural population decoding; with a transformer architecture, POYO applies both cross-attention and self-attentionin the latent space after latent embeddings of neural events. Their work demonstrates that the power of transfer learning and transformer to achieve rapid and scalable neural decoding. §.§ Data augmentationMachine learning-driven data augmentation techniques are beneficial to alleviate the sample imbalance or insufficiency problem <cit.>. This is particularly important for improving the generalization ability of deep learning. Recently, data-centric deep learning or generative AI strategies (e.g., data regeneration and synthetic data generation) have been proposed to improve the consistency between the existing and augmented data, especially in clinical applications where labeled samples may be scarceor the data privacyis a concern <cit.>. For instance, combiningRNN and GAN may help construct generative models of synthetic time series and impute missing sequences <cit.>. In one example, combined GAN and VAE models utilized three-dimensional convolution to model high-dimensional fMRI sensors with structured spatial correlations and thesynthesized datasets were then used to augment classifiers designed to predict cognitive and behavioral outcomes <cit.>.In another example,an auxiliary classifier GAN (AC-GAN) was used to generate synthetic interictal epileptiform discharges (IED) from EEG recordings of epileptic patients <cit.>. <cit.> employed an LLM (based on GPT-2) to augment the EEG/MEG dataset for a classification task. After initial training, the GPT model was used to generate realistic synthetic neural signal given corresponding classification labels as the augmented data;a marginal improvement was reported in classification performance.Recently, a text data augmentation approach based on ChatGPT (named AugGPT) <cit.>, has been developed to overcome the challenge of limited sample sizes in NLP tasks <cit.>.Specifically, sentences in the training set were rephrased into conceptually similar variations as the augmented data with the same label of the original sample.The results showed that data augmentation based on such a large-scale pre-trained model increased the classification accuracy by a big margin in comparison withstandard data augmentation methods. However, more research is still needed to see whether similar techniques can apply to neural data augmentation. § DISCUSSION AND CONCLUSION §.§ Crosstalk between AI and neuroscienceAI and neuroscience have been driving each other forward. Not only neuroscience has inspired the development of deep learning and AI technologies<cit.>,explainable AI and deep learning have also generated opportunities for in-depth neuroscience investigations <cit.>. For instance, biologically constrained CNN models have enabled neuroscientists to directly compare data in the visual cortex and uncover the underlying computational principles <cit.>. Recently, <cit.> proposed a contrastive learning-based neural network model for jointly modeling neural and behavioral dynamics. 
The SSL algorithm, known as CEBRA, combines ideas from nonlinear independent component analysis (ICA) with contrastive learning and may identify interpretable and consistent neural embeddings of high-dimensional neural recordings using auxiliary variables (such as time or behavioral measures). Importantly, it can generate embeddings across multiple subjects and cope with distribution shifts among experimental sessions, subjects, and recording modalities. In another example, <cit.> applied deep language algorithms (based on GPT-2) to predict nearby words and discovered that the activations of language models linearly map onto the brain responses to speech, and that these predictions are organized hierarchically in frontoparietal and temporal cortices. These findings illustrate that the synergy between neuroscience and AI can largely improve our understanding of human cognition. It is also worth mentioning that current AI technologies have relied on oversimplified models of neural systems. First and foremost, the standard artificial neurons in deep neural networks are "point neurons" that focus on somatic computation, whereas the importance of nonlinear dendritic computation has been ignored. However, it has been known that the dendrite also plays an important role in neuronal computations and biological learning, such as enhancing the expressivity of single neurons, improving neuronal resources and generalization abilities, utilizing internal learning signals, and enabling continual learning, contextual representation, and predictive coding <cit.>. Deep learning models have the potential to reproduce the computational complexity of biologically realistic neurons' I/O properties <cit.>. Second, brain oscillations are important hallmarks of neural dynamics across a wide range of tasks in cognition, attention, memory, decision-making, and sensorimotor integration. Future development of next-generation neuroAI models and biologically plausible learning algorithms remains a central research direction for transforming a "black-box" model into a "glass-box" model while achieving a good trade-off between performance and interpretability.

§.§ Outlook and outstanding questions

Looking ahead, foundation models and generative AI are expected to see rapid research growth in method development and applications, especially in brain imaging and large-scale neural and behavioral data analyses. In clinical applications, foundation models and generative AI may have a translational impact on personalized medicine. A growing number of chatbots, such as ChatGPT and Bard, can play an active role in mitigating the worldwide crisis in mental health <cit.>. In multi-modal BMI systems, generative AI will help combine speech, vision, and motor modalities to improve functionality and decoding accuracy. Future developments of brain-to-content neurotechnologies may have promising applications in immersive virtual reality, video games, marketing, and personalized education. Finally, we present several outstanding questions that might motivate future research at the intersection of AI and neuroscience.
* Since the majority of foundation models have been trained on single-modal data, it is unclear whether a model would benefit from training on multi-modal or cross-modal data when the decoding domain involves only a single modality.
For instance, in simultaneous EEG-fMRI recordings, can we train a foundation model based on their joint measurements, and then apply the pre-trained model in EEG-alone or fMRI-alone decoding analysis?Whilethe prior knowledge of the cross-modal relationship may be beneficial, the variability in SNR and spatiotemporal resolution between two modalitiesmay create practical barriers. Furthermore, it remains an open question howwe should apply SSL to identify an optimal analysis pipeline for multi-modal neuroimaging data.* Representation learning and foundation models have great potentials in RL, including end-to-end policy learning<cit.> and multi-agent communications <cit.>. However, it remains unclear how well the foundation models and learned embedding representations can generalize across tasks in RL. For instance, RL algorithms havebeen developed in BMI applications, enabling individuals with motor disabilities to control external devices using neural signals. It still needs to be thoroughly tested whether the pre-trained policy can generalize across subjects, tasks, and environments. Identifying common as well as individualized decision-making or control policy under the new representation learning paradigm will continue to be an active research topic. * While ChatGPT can be used as an interface between users and external systems serving as a bridge between individuals with limited mobility and the external world,it is vital to revolutionize communication capabilities of BMIs by translating thoughts into text-based information andrefining the dynamics of human-machine interaction. However, it remains unclear howChatGPT or GPT-like models can be optimally integrated into the BMI systems.Furthermore, can we adapt these models or generative AI to interpret and produce text that syncs flawlessly with a user's intentions while abiding by ethical and privacy mandates? The recurrent engagement of users with ChatGPT offers prospects to transform the lives of those with disabilities and to develop personalized and adaptable BMI systems, escalating user gratification and optimizing system outputs.* Ongoing research has continued producing new frontiers in foundation models and generative AI, such as the new autonomous AI agent tools (AutoGPT, MetaGPT and AutoGen) (see a compiled list at <https://github.com/steven2358/awesome-generative-ai>). Integration of these emerging AI technologies into neuroscience applicationspresents more challenges and opportunities. In conclusion, many research areas in neuroscience have greatly benefited from BigData-empowered machine learning. Exploitation of large-scale foundation models, generative AI, and transfer learning tools will enable us to potentially probe neuroscience questions and brain-to-content technology in new dimensions. The landscape of neuroscience research is rapidly changing, and our imagination is only the limit for unlimited creativity.We hope this mini-review will inspiremore exciting workin the near future. § FUNDING The work was partially supported by grants MH118928, DA056394, NS123928, NS121776, MH132642, and NS135170 from the US National Institutes of Health.§ DECLARATION OF COMPETING INTEREST The authors declare no competing interests.§ DATA AVAILABILITY No data was used for the research described in the current article.§ ACKNOWLEDGMENTS The authors thank Dr. Ryota Kobayashi and Dr. 
Ken Nakae for the invitation to participate in a Special Session at the Annual Japanese Neuroscience Meeting held in Sendai, Japan on August 3, 2023, which motivated the writing of this review.
http://arxiv.org/abs/2310.18377v1
{ "authors": [ "Ran Wang", "Zhe Sage Chen" ], "categories": [ "q-bio.NC", "cs.AI", "cs.HC", "cs.LG", "cs.MM" ], "primary_category": "q-bio.NC", "published": "20231027004440", "title": "Large-scale Foundation Models and Generative AI for BigData Neuroscience" }
Departamento de Matemática y Estadística, Universidad de La Frontera. Temuco, [email protected] supported by Project Fondecyt 1230001 [2010] Primary 57M60, 57M10 We prove the existence of finite groups of orientation-preserving homeomorphisms of some closed orientable surface S that act freely and which extend as a group of homeomorphisms of some compact orientable 3-manifold with boundary S, but which cannot extend to a handlebody. Extending finite free actions of surfaces Rubén A. Hidalgo January 14, 2024 ========================================= § INTRODUCTION Every closed orientable surface S of genus g is null-cobordant, that is, it can be seen as the boundary of some compact orientable 3-manifold (for instance, as the boundary of a handlebody of genus g). Let G be a finite group of homeomorphisms of S. One says that G extends if it is possible to find some compact orientable 3-manifold M^3 with boundary S admitting a group of homeomorphisms, isomorphic to G, whose restriction to S coincides with G. In the case that M^3 can be chosen to be a handlebody, we say that G extends to a handlebody. A natural question is whether every extendable action necessarily extends to a handlebody (it seems this question is due to B. Zimmermann). If g=0, then G always extends to the closed 3-ball (a handlebody of genus zero). Let us assume g ≥ 1. A Schottky system of loops for G is a collection ℱ of pairwise disjoint essential simple loops on S such that: (i) ℱ is G-invariant and (ii) S ∖ℱ consists of planar surfaces. As a consequence of the equivariant loop theorem <cit.>, if G extends to a handlebody, then there exists a Schottky system of loops for G. The converse is also true, that is, G extends to a handlebody if and only if a Schottky system of loops exists for G (see <cit.> for a proof of this fact in terms of Kleinian groups). If G consists only of orientation-preserving homeomorphisms, then the following is known (see, for instance, <cit.>): (i) if S/G has genus zero and exactly three cone points, then it cannot be extended to a handlebody, (ii) if G acts freely (i.e., the G-stabilizer of every point of S is trivial) and S/G has genus one, then it extends to a handlebody, (iii) if G acts freely and it is isomorphic to either an abelian group or one of the Platonic symmetry groups, then it extends to a handlebody, (iv) if G is a dihedral group, then it extends to a handlebody. In <cit.>, the case when G is a cyclic group generated by an orientation-reversing homeomorphism is studied. In <cit.>, it was proved that there are Hurwitz actions of G= PSL_2(q) (i.e., G consists of orientation-preserving homeomorphisms and the quotient orbifold S/G has genus zero and exactly three cone points of orders 2, 3 and 7) which extend. As these actions cannot extend to a handlebody (as noted above), they provide examples that answer the above question negatively in the presence of fixed points. In this paper, for the case of free actions, we observe that there are free actions that extend but not to a handlebody (Theorem <ref>). The main ideas are described below. Let us assume that G acts freely by orientation-preserving homeomorphisms on S of genus g ≥ 2 and let R=S/G be of genus γ≥ 2. The free action of G on S is defined by a surjective homomorphism θ: F → G, where F=π_1(R).
In <cit.>, Dominguez and Segovia proved that if G is either the alternating or the symmetric group, then it always extends. In <cit.>, Samperton proved that: (i) if B_0(G)=0 (where B_0(G) denotes the Bogomolov multiplier of G), then G extends, and (ii) if B_0(G) ≠ 0 and G does not contain a dihedral subgroup, then it admits a free action that does not extend (for instance, G= SmallGroup(3^5,28) in the GAP library <cit.>). In particular, if B_0(G)=0 and there is no Schottky system of loops for a given free action of G, then that free action extends but not to a handlebody. One may use the package HAP implemented in GAP to check if the Bogomolov multiplier of G is zero. For the case when G is either of odd order or of even order but with a unique element of order two, the existence or non-existence of a Schottky system of loops for G can be read from θ (see Theorem <ref> for γ=2 and Theorem <ref> for γ≥ 3). If γ=2, in which case F=⟨ x_1,y_1,x_2,y_2: [x_1,y_1][x_2,y_2]=1 ⟩, then this existence result reads as follows. If ℭ is the orbit of the commutator [x_1,y_1] under a certain (explicit) subgroup Out_0^+(F) of Aut^+(F) (see Section <ref>), then G admits a Schottky system of loops (i.e., it extends to a handlebody) if and only if (θ) ∩ℭ≠∅. The collection ℭ is infinite, but, as F contains a finite number of subgroups of a fixed finite index, there is a finite subcollection ℭ_G of ℭ such that (θ) ∩ℭ≠∅ if and only if (θ) ∩ℭ_G≠∅. Let us make the extra assumptions that (i) [x_1,y_1] ∉(θ) (there are plenty of examples with this property; examples are provided in Section <ref>) and (ii) G has odd order. If N is the intersection of all the Aut^+(F)-images of (θ) (a finite index normal subgroup of F), then (see Lemma <ref>) N ∩ℭ=∅. So, by Theorem <ref>, this normal subgroup N provides a free action of G_N=F/N as a group of orientation-preserving homeomorphisms of a closed orientable surface S_N such that S_N/G_N=R and which does not extend to a handlebody (note that there is a normal subgroup H<G_N with S=S_N/H and G=G_N/H). Two situations may happen: either (i) G_N does not extend (so providing more examples as in <cit.>) or (ii) G_N extends but does not extend to a handlebody (providing a negative answer to Zimmermann's question). We then interpret this construction as a fiber product to see that, if all of the Sylow subgroups of G are abelian, then B_0(G_N)=0 (Lemma <ref>), providing in this way examples as required. The author was informed privately by E. Samperton that, together with M. Boggi and C. Segovia, they can obtain examples of free actions that extend but not to a handlebody by using different methods <cit.>. § PRELIMINARIES §.§ Actions on handlebodies and Schottky groups A Schottky group Γ of rank g is a purely loxodromic Kleinian group, with a non-empty region of discontinuity Ω⊂ℂ, isomorphic to the free group of rank g. It is known that Ω is connected (equal to ℂ if g=0, to ℂ minus two points if g=1, and to the complement of a Cantor set if g ≥ 2). The quotient Ω/Γ is a closed Riemann surface of genus g and (ℍ^3∪Ω)/Γ is a handlebody whose interior carries a complete hyperbolic metric (with injectivity radius bounded away from zero) whose conformal boundary is Ω/Γ. Koebe's retrosection theorem asserts that every closed Riemann surface can be obtained, up to biholomorphisms, in this way <cit.>. Let G be a finite group of homeomorphisms of a closed orientable surface S of genus g≥ 2.
By the Nielsen realization theorem <cit.>, we may provide S with a Riemann surface structure making G a group of conformal/anticonformal automorphisms of it. Let us fix one such Riemann surface structure on S. If M^3 is a handlebody whose boundary is S, then the given Riemann surface structure on S induces on the interior of M^3 a complete hyperbolic structure (with S as its conformal boundary). This is equivalent to having a Schottky group Γ of rank g, with region of discontinuity Ω⊂ℂ, such that S=Ω/Γ and M^3=(ℍ^3∪Ω)/Γ. To say that G extends to the handlebody M^3 is, in this setting, equivalent to the existence of a Kleinian group K containing Γ as a finite index normal subgroup such that the action of G is represented by the quotient group K/Γ. In this case, one says that K is a virtual (extended) Schottky group (if G does not contain orientation-reversing homeomorphisms, then we say that K is a virtual Schottky group). The finite index condition asserts that K and Γ both have the same region of discontinuity. A geometrical picture of these types of groups, in terms of the Klein-Maskit combination theorems <cit.>, was provided in <cit.>. If G acts freely on S and contains no dihedral subgroups, such a picture is quite simple and is given as follows. [<cit.>] Let K be a virtual (extended) Schottky group containing as a finite index normal subgroup a Schottky group Γ of rank g ≥ 2. Assume that the group K/Γ does not contain dihedral subgroups and that it acts freely on Ω/Γ, where Ω is the region of discontinuity. Then K is the free product, in the sense of the Klein-Maskit combination theorem, of (i) α≥ 0 cyclic groups generated by loxodromic elements A_j, (ii) α' ≥ 0 cyclic groups generated by pseudo-reflection elements B_j, (iii) β≥ 0 abelian groups, each one generated by an elliptic element E_j (of some finite order n_j≥ 2) together with a loxodromic element C_j such that C_jE_jC_j^-1=E_j, and (iv) β' ≥ 0 groups, each one generated by an elliptic element F_j (of some finite order m_j≥ 2) together with a pseudo-reflection D_j such that D_jF_jD_j^-1=F_j^-1. We say that K has signature (α,α',β,β'). If the virtual (extended) Schottky group K has signature (α,α',β,β'), then Ω/K is a closed surface, the connected sum of α+β tori and 2(α'+β') real projective planes (K is a virtual Schottky group if and only if α'=β'=0). If Γ is a Schottky group of rank g, being a finite index normal subgroup of K, then the free action of G=K/Γ on S=Ω/Γ extends to the handlebody M^3=(ℍ^3∪Ω)/Γ, and M^3/G=(ℍ^3∪Ω)/K is topologically a handlebody whose conical locus (in its interior) consists of β+β' pairwise disjoint (unlinked) simple loops (their cone orders being n_1,…,n_β, m_1,…,m_β'). §.§ The Bogomolov multiplier If G is a finite group, then its Schur multiplier is the abelian group M(G):= H^2(G,ℚ/ℤ) ≅ H_2(G,ℤ) and its Bogomolov multiplier (see <cit.>) is the abelian group B_0(G):= ker[ H^2(G,ℚ/ℤ) →⊕_A ⊂ G H^2(A,ℚ/ℤ)], where A runs over all abelian subgroups of G. If M_0(G) is the subgroup of H_2(G,ℤ) generated by the toral classes of M(G), then B_0(G) ≅ H_2(G,ℤ)/M_0(G).
In <cit.>, Hopf proved thatM(G) ≅[F_n,F_n] ∩ N/[F_n,N],where [F_n,F_n] is the derived subgroup of F_n and [F_n,N] is the normal subgroup generated by the elements of the form aba^-1b^-1, a ∈ F_n and b ∈ N, and later, in <cit.> (see also <cit.>), Moravec obtained a Hopf-type formula for computing B_0(G)B_0(G) ≅[F_n,F_n] ∩ N/⟨ K(F_n) ∩ N⟩,whereK(F_n) is the set of all the commutators of F_n. The HAP package (http:// hamilton.nuigalway.ie/Hap/www/), implemented in GAP <cit.>, permits to compute the Bogomolov multiplier of a group G of small order (using the command BogomolovMultiplier(G)). The following facts can be found in, for instance, <cit.>. * B_0(G)=0 if G is one of the followings: * a symmetric group;* a simple group;* a p-group of order at most p^4;* an abelian-by-cyclic groups (i.e., G contains an abelian group A as a normal subgroup such that G/A is a cyclic group);* a primitive supersolvable group;* an extraspecial p-group (i.e., the center Z_G is a cyclic group of order p and G/Z_G≅ℤ_p^2n). * B_0(G_1× G_2) ≅ B_0(G_1) × B_0(G_2).* B_0(N ⋊ K) ≅ B_0(N)^K× B_0(K) when gcd(|N|,|K|)=1.The following result, which will be needed for some of our examples, is a consequence of <cit.> (see also <cit.>).Let G be a finite group such that all of its Sylow subgroups have zero Bogomolov multiplier. Then B_0(G)=0. §.§ Samperton's theoremEach finite group G can be realized as a group of orientation-preserving homeomorphisms of some closed orientable surface that acts freely.If the free action of G on S extends to a compact manifold M, then it might happen that in such extension the action is no longer free. In that case, if the G-stabilizer of every point is cyclic, then one says that the extension is non-singular (and that G extends non-singularly). In <cit.>, Samperton observed that B_0(G) provides an obstruction for the extendability of (all possible) free actions of G. Let us denote by D_n, where n ≥ 2, the dihedral group of order 2n, by 𝒜_4 and 𝒜_5 the alternating groups of orders 12 and 60, respectively, and by𝒮_4 the symmetric group in 4 letters.[Samperton <cit.>] Let G be a finite group. Then * Every free action of G on a closed orientable surface extends non-singularly if and only if B_0(G)=0.* If B_0(G) ≠ 0 and G does not contain a subgroup isomorphic to either D_n, n ≥ 2, 𝒜_4, 𝒜_5 or 𝒮_4, then G affords free actions on some closed orientable surface that do not extend.A consequence, of part (1) of the above theorem, is the following fact (this, in particular, provides the results in <cit.>) Every free action of a group G, of either type (a)-(f) in Section <ref>, extends. (i) The condition that Gdoes not contain a subgroup isomorphic to either D_n, n ≥ 2, 𝒜_4, 𝒜_5 or 𝒮_4, in part (2) of the above theorem, is equivalent for it to have at most one element of order two(if a finite group has two different elements of order two, then these two generate a dihedral group). In particular, this holds if G has odd order. (ii) If B_0(G) ≠ 0, then there might be both, extendable and non-extendable, free actions of G on surfaces. In <cit.>, there is observed that the group G= SmallGroup(3^5,28) ≅ (ℤ_9⋊ℤ_9) ⋊ℤ_3 satisfies (2) in Theorem <ref> (M(G) ≅ℤ_9 and B_0(G) ≅ℤ_3), so there exists a free action of it that cannot extend. In Section <ref>, we consider some free actions of this group and observe that some of them extend to a handlebody. 
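Both computational checks described in the Introduction — the vanishing of the Bogomolov multiplier via HAP, and deciding whether a given word of F lies in the kernel of θ — can be sketched directly in GAP. The following minimal script is illustrative only: it assumes a GAP installation with the HAP package, and the permutation group of order 21 in the second part is an arbitrary small example (not one of the groups studied in this paper), chosen so that the four chosen images satisfy the genus-two surface relation.
# Part 1: Bogomolov multipliers via the HAP package.
LoadPackage("HAP");;
Print(BogomolovMultiplier(SmallGroup(3^5, 28)), "\n");   # expected nontrivial: B_0 isomorphic to Z_3 (as recalled above)
Print(BogomolovMultiplier(SmallGroup(3^5, 65)), "\n");   # expected trivial (see the Examples section)
# Part 2: testing whether a word of F lies in the kernel of a homomorphism onto a finite group.
F := FreeGroup("x1", "y1", "x2", "y2");;
x1 := F.1;; y1 := F.2;; x2 := F.3;; y2 := F.4;;
a := (1,2,3,4,5,6,7);;  b := (2,3,5)(4,7,6);;    # illustrative: b has order 3 and normalizes the 7-cycle
G21 := Group(a, b);;                             # nonabelian of order 21
c := b^-1;;  d := b*a^-1;;                       # chosen so that [a,b][c,d] = 1 in G21
theta := GroupHomomorphismByImages(F, G21, [x1, y1, x2, y2], [a, b, c, d]);;
rel := x1*y1*x1^-1*y1^-1 * x2*y2*x2^-1*y2^-1;;
Print(IsOne(Image(theta, rel)), "\n");           # true: theta factors through the genus-two surface group
Print(Size(Image(theta)) = Size(G21), "\n");     # true: theta is surjective
Print(IsOne(Image(theta, x1*y1*x1^-1*y1^-1)), "\n");   # false: the commutator [x_1,y_1] is not in the kernel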
§.§ Examples of groups which extend to handlebodies As observed in the previous section, if the group G is either (i) abelian, (ii) dihedral, (iii) alternating, (iv) symmetric, or (v) abelian-by-cyclic, then its free actions always extend. In the case of abelian and dihedral groups, the free actions always extend to a handlebody. Below (see Proposition <ref>), we observe that the same situation happens for abelian-by-cyclic groups. Let us first recall that there is exactly one topological free action of a finite cyclic group on a closed orientable surface of a given genus. Let h:S → S be an order q orientation-preserving homeomorphism of a closed orientable surface S of genus g ≥ 2 and H=⟨ h ⟩≅ℤ_q acting freely. Let P:S → R=S/H be a Galois cover with deck group H. Let δ⊂ R be an essential dividing simple loop such that one of the components T of R ∖{δ} is of genus one. As δ is a commutator of the fundamental group of R, the collection P^-1(δ) consists of q essential simple loops, each one with a trivial H-stabilizer. If α is one of the loops in P^-1(δ), then either: (i) it is the only boundary of some component of S ∖ P^-1(δ) (in particular a commutator) or (ii) it is one of the q boundaries of the component of S ∖ P^-1(δ) of genus one (so a product of commutators). Let G be a finite group, admitting a normal subgroup A such that C=G/A is a cyclic group. If G acts freely as a group of orientation-preserving homeomorphisms of a closed orientable surface S of genus g ≥ 2, then it extends to a handlebody. Let π:S → R=S/G be a Galois covering with deck group G and let g_R≥ 2 be the genus of R. The surface X=S/A is a closed orientable surface on which the induced group of orientation-preserving homeomorphisms C=G/A ≅ℤ_q is acting freely and X/C=R. Let P:S → X and Q:X → R be Galois coverings, with respective deck groups A and C, such that π=Q ∘ P. Let δ^1,…,δ^g_R⊂ R be a collection of pairwise disjoint essential dividing simple loops, each δ^j cutting R into a surface T_j of genus one (and another of genus g_R-1). As Q has cyclic deck group, the collection Q^-1(δ^j) ⊂ X consists of exactly q pairwise disjoint essential simple loops, δ^j_1,…,δ^j_q, each one being represented by a product of commutator elements of the fundamental group of X. Moreover, the collection of loops ∪_j=1^g_R Q^-1(δ^j) divides X into genus one and genus zero surfaces. Now, as P is an abelian Galois cover, with deck group the abelian group A (and as each δ^j_i is in the commutator subgroup), it can be seen that P^-1(δ^j_i) consists of exactly |A| simple loops (each with trivial G-stabilizer). Moreover, the collection of simple loops 𝒢:=∪_j=1^g_R∪_i=1^q P^-1(δ^j_i) divides S into genus one and genus zero surfaces. We choose a non-dividing simple loop for each surface T_j⊂ R. If we adjoin the π-liftings of all of these loops to the collection 𝒢, then we get a collection ℱ of loops which is G-invariant and which divides S into planar surfaces. This collection of loops provides a handlebody for which G extends. §.§ A special set of loops In the proof of Proposition <ref>, we obtained a very particular Schottky system of loops. In the next result, we observe that this is always the case for a finite group G of orientation-preserving homeomorphisms acting freely on a closed orientable surface S. Let G be a finite group of orientation-preserving homeomorphisms of a closed orientable surface S which acts freely and such that R=S/G has genus h ≥ 2. Let h_0=1 if h=2 and h_0=h if h ≥ 3.
Then * G extends to a handlebody if there is a collection 𝒜⊂ R consisting of h_0 pairwise disjoint essential simple dividing loops on R, each one dividing R into a genus one surface and other of genus h-1, such that each loop in the collection 𝒢⊂ S, obtained by lifting 𝒜 to S, has trivial G-stabilizer. * If G is either of odd order or it has a unique element of order two (for instance, a direct product ℤ_2×Ĝ where Ĝ has odd order), then the above condition is an “if and only if" statement. (1) Let π_G:S → R be a Galois covering with deck group G, and let 𝒜={δ_1,…,δ_h_0}⊂ R the collection of dividing simple loops as in the hypothesis. In this case, R ∖𝒜 consists of h surfaces, sayT_1,…,T_h, each one of genus one with one boundary component (T_j having as boundary the loop δ_j), and (for h ≥ 3) one planar surface T_0 with h boundary loops (all the loops in 𝒜). As each loop in π_G^-1(δ_j)has trivial G-stabilizer, it follows that each connected component X_j,k of π_G^-1(T_j) (for j=1,…,h) is of genus one (with exactly |G| boundary components) and (for h ≥ 3) each connected component Y of π_G^-1(T_0) (for j=1,…,h) is of genus zero and π_G:Y → T_0 is a homeomorphism.Let η_j⊂ X_j be a non-dividing simple loop (so X_j∖η_j is a planar surface with three boundary components). Then the liftings, under π_G, of η_j will cut-off each of the surfaces X_j,k into planar surfaces.By adding to 𝒢=π_G^-1(𝒜) all of these lifted loops will provide a collection ℱ of pairwise disjoint simple loops which is G-invariant and such that S ∖ℱ consists of planar surfaces, so G extends to a handlebody.(2) Assume G extends to a handlebody. By the Nielsen realization theorem <cit.>, we may assume that S is a closed Riemann surface and that G is a group of its conformal automorphisms. In this case, R=S/G is a closed Riemann surface of genus two and we may think of F as a Fuchsian group uniformizing R, that is, R=ℍ^2/F. If F_S denotes the kernel of θ, then S=ℍ^2/F_S and G=F/F_S.The extension of G to a handlebody is equivalent to the existence of a Schottky group Γ of rank g, with a region of discontinuity Ω, and a Galois covering map P:Ω→ S, with deck group Γ, for which G lifts. This means that, for every element ϕ∈ G, there is a Möbius transformation M_ϕ∈ PSL_2(ℂ) such that P ∘ M_ϕ = ϕ∘ P. The group obtained by the liftings of all elements of G is a Kleinian group K which contains Γ as a normal subgroup and K/Γ=G. This extension property asserts the existence of a surjective homomorphism ρ:F → K and asurjective homomorphism ω:K → G, whose kernel is Γ, such that θ=ω∘ρ. By Theorem <ref>,K is the free product, in the sense of the Klein-Maskit combination theorems <cit.>,of α≥ 0 cyclic groups generated by loxodromic transformations, and β≥ 0 groups, each one generated by a loxodromic transformation and an elliptic transformation, both of them commuting, such that α+β=h.In any of these situations, there are simple loops in Ω over which the free product is done. Such a collection of loop projects to R=Ω/K as a collection of simple loop 𝒜 as desired. [Proposition <ref> in the presence of orientation-reversing elements] Note from the above proof (and by Theorem <ref>) that a similar result is still valid for the case that G contains orientation-reversing elements. In this more general situation, the system of loops 𝒜 on S/G has the property that each loop cut-off R into two surfaces, one of them being either a torus or a Klein bottle, and the lift of any of the loops has trivial G-stabilizer. 
In the case h=2, Proposition <ref> can be stated as follows. Let G be a finite group of orientation-preserving homeomorphisms of a closed orientable surface S which acts freely and S/G has genus two. Then* G extends to a handlebody if there is an essential simple loop γ⊂ S satisfying the following properties: * the G-stabilizer of γ is trivial;* for every a ∈ G ∖{1}, a(γ) ∩γ = ∅;* if 𝒢={a(γ): a ∈ G}, then each connected componet of S ∖𝒢 has genus one. * If G is either of odd order or it has a unique element of order two (for instance, a direct product ℤ_2×Ĝ where Ĝ has odd order), then the above condition is an “if and only if" statement. As seen in the proof of part (1) of Proposition <ref>, for h=2, we may assume that the loop γ⊂ S (as in Lemma <ref>) projection on S/G to an essential dividing simple loop. §.§ Automorphism group of the genus two fundamental groupThe fundamental group of a closed orientable surface R of genus two has a presentation as follows:F=⟨ x_1,x_2,y_1,y_2: [x_1,y_1][x_2,y_2]=1⟩,where [a,b]=aba^-1b^-1. Figure <ref> shows the generators of F seen as simple loops in R. By identifying F with π_1(R) (Figure <ref>), the Nielsen theorem <cit.> asserts thatevery automorphism of F is induced by a homeomorphism of R. We denote by Aut^+(F) the subgroup consisting of those induced by orientation-preserving ones and by Out^+(F)= Aut^+(F)/ Inn(F). The group Out^+(F) can be identified with the mapping class group MCG(R)= Hom^+(R)/ Hom_0(R) of R.Some of elements of Aut^+(R) are given by the automorphisms σ_j, which are described as Dehn-twits along simple loops w_j, where (up to isotopy) w_1=y_1, w_2=x_1, w_3=x_2^-1y_1, w_4=y_2 and w_5=x_2. In terms of the given generators of F, these automorphisms are the following ones:[σ_1:(x_1,y_1,x_2,y_2) ↦ (x_1y_1,y_1,x_2,y_2),; σ_2:(x_1,y_1,x_2,y_2) ↦ (x_1,y_1x_1^-1,x_2,y_2),; σ_3:(x_1,y_1,x_2,y_2) ↦ (x_2^-1y_1x_1,y_1,x_2,x_2^-1y_1y_2),; σ_4:(x_1,y_1,x_2,y_2) ↦ (x_1,y_1,x_2y_2^-1,y_2),; σ_5:(x_1,y_1,x_2,y_2) ↦ (x_1,y_1,x_2,y_2x_2^-1). ] If we set σ_5+j=σ_j^-1, then[ σ_6:(x_1,y_1,x_2,y_2) ↦ (x_1y_1^-1,y_1,x_2,y_2),;σ_7:(x_1,y_1,x_2,y_2) ↦ (x_1,y_1x_1,x_2,y_2),; σ_8:(x_1,y_1,x_2,y_2) ↦ (y_1^-1x_2x_1,y_1,x_2,y_1^-1x_2y_2),;σ_9:(x_1,y_1,x_2,y_2) ↦ (x_1,y_1,x_2y_2,y_2),; σ_10:(x_1,y_1,x_2,y_2) ↦ (x_1,y_1,x_2,y_2x_2). ] Let us denote by Out_0^+(F) the subgroup of Aut^+(F) generated by σ_1,…,σ_5. The elements σ_1,…,σ_5 project to a set of generators of Out^+(F). This permitted to observe (see <cit.>) that Out^+(F) is a homomorphic image of the Artin-braid group B_6=⟨σ_1,…,σ_5: σ_iσ_j=σ_jσ_i, |i-j| ≥ 2,σ_iσ_i+1σ_i=σ_i+1σ_iσ_i+1, i=1,2,3,4⟩. §.§ The collection ℭLet us denote by ℭ⊂ F the (infinite) collection of all of the images under Out_0^+(F) of the commutator [x_1,y_1]. The following identities:σ_j([x_1,y_1])={[ [x_1,y_1], j ∈{1,2,4,5,6,7,9,10},; (x_2^-1y_1)[x_1,y_1](x_2y_1^-1), j=3,; (y_1^-1x_2)[x_1,y_1](y_1x_2^-1), j=8, ].permit to see thatℭ consists of the elements [x_1,y_1],(x_2^-1y_1)[x_1,y_1](x_2y_1^-1), (y_1^-1x_2)[x_1,y_1](y_1x_2^-1),together those of the formσ_i_n∘⋯∘σ_i_1∘σ_l([x_1,y_1]),l ∈{3,8}, n ≥ 1, i_j∈{1,…,10}.Let θ:F → G be a surjective homomorphism and set a=θ(x_1), b=θ(y_1), c=θ(x_2) and d=θ(y_2). If w ∈ℭ∩(θ), then it produces suitable u_w,v_w∈ G such that[a,b]=[u_w,v_w]. 
For instance,(i) if w=σ_3([x_1,y_1]) ∈(θ), then [a,b]=[b^-1,c], (ii) if w=σ_8([x_1,y_1]) ∈(θ), then [a,b]=[c^-1,b], (iii) if w=σ_2(σ_3([x_1,y_1])) ∈(θ), then [a,b]=[ab^-1,c], (iv) if w=σ_3(σ_3([x_1,y_1])) ∈(θ), then [a,b]=[b^-1,cb^-1c], (v) if w=σ_4(σ_3([x_1,y_1])) ∈(θ), then [a,b]=[b^-1,cd^-1], (vi) if w=σ_1(σ_2(σ_3([x_1,y_1]))) ∈(θ), then [a,b]=[a,c].§ A SIMPLE CONDITION FOR EXTENDABILITY OF FREE ACTIONS TO HANDLEBODIESLet us consider a free action of a finite group G on a closed orientable surface S such that R=S/G has genus γ≥ 2. Such an action corresponds to asurjective homomorphism θ:F^γ→ G, where F^γ=⟨ x_1,y_1,…,x_γ,y_γ: ∏_j=1^γ[x_j,y_j]=1⟩ (which we identify with the genus γ fundamental group of R). In this section, we discuss the existence of a Schottky system of loops for G in terms of θ.§.§ The case γ=2In this case, F^2=F. If a=θ(x_1), b=θ(y_1), c=θ(x_2) and d=θ(y_2), then G is generated by a,b,c,d and it has as one of the relations the following [a,b][c,d]=1.Let ℭ⊂ F as defined in Section <ref>. Let G be a finite group, either of odd order or with a unique element of order two, which can be generated by four elements a,b,c,d such that [a,b][c,d]=1. * A free action of G on a surface S, induced by a surjective homomorphism θ:F → G, extends to a handlebody if and only if there is some ϕ∈ Out_0^+(F), such that ϕ([x_1,y_1]) ∈(θ), i.e., if and only ifℭ∩(θ) ≠∅.* There exists a free action of G on some surface S such that S/G has genus two and which extends to a handlebody if and only ifG can be generated by four elements s_1, s_2, s_3, s_4∈ G such that [s_1,s_2]=1=[s_3,s_4].Let us fix a finite group G, realized as a group of orientation-preserving homeomorphisms of an orientable surface S, acting freely and with quotient R=S/G of genus two (so S has genus g=g_G=1+|G|). The free action is defined by a surjective homomorphism θ:F → G (unique up to post-composition with an automorphism of G and pre-composition by an automorphism of F).(1) By Lemma <ref> (and Remark <ref>), the free action of G extends to a handlebody if and only if there is an essential dividing simple loop δ⊂ Rthat lifts to exactly |G| simple loops on S. For any two essential simple diving loops on R, say δ_1, δ_2, there is an orientation-preserving homeomorphism of R carrying δ_1 to δ_2. By Section <ref>, this provides the desired result for Aut^+(F) instead of Out_0^+(F). But, as inner automorphisms keep invariantthe kernel of θ, we may replace Aut^+(F) by Out_0^+(F). The last part follows from Section <ref>.(2) If G extends to a handlebody, then (by providing a suitable Riemann surface on S making G a group of conformal automorphisms), there is a Kleinian group K (as in the last part of the previous proof) associated with some pair (α,β) ∈{(2,0),(1,1), (0,2)}. The geometrical structure of K asserts thatK=⟨ A_1, B_1: B_1^n_1=[A_1,B_1]=1 ⟩ * ⟨ A_2, B_2: B_1^n_1=[A_1,B_1]=1 ⟩≅ (ℤ×ℤ_n_1) * (ℤ×ℤ_n_2),where: (i) A_1 and A_2 are loxodromic elements;(ii) if (α,β)=(2,0), then n_1=n_2=1, i.e., B_1=B_2=I;(iii) if (α,β)=(1,1), then n_1=1, n_2≥ 2, i.e., B_1=I;(iv) if (α,β)=(0,2), then n_1≥ 2, n_2≥ 2. The surjective homomorphism ω:F → K (described in the above proof) asserts that* G is generated by ω(A_1)=s_1, ω(B_1)=s_2, ω(A_2)=s_3, ω(B_2)=s_4∈ G, with [s_1,s_2]=1=[s_3,s_4], and * there are elements S_1,S_2,S_3,S_4∈Γ_R such that: * ρ(S_1)=A_1, ρ(S_2)=B_1, ρ(S_3)=A_2, ρ(S_4)=B_2, and * the commutator [S_1,S_2] represents an essential simple dividing loop on R. 
Conversely, if G admits a set of generators, s_1,s_2,s_3,s_4, such that [s_1,s_2]=1=[s_3,s_4], then we may consider a Kleinian group K as in (<ref>), where n_1 is the order of s_2 and n_2 is the order of s_4. In this case, we may consider the surjective homomorphism ω:K → G defined by ω(A_1)=s_1, ω(B_1)=s_2, ω(A_2)=s_3 and ω(B_2)=s_4. The kernel of ω is a Schottky group Γ of rank g such that S=Ω/Γ admits the group G as a group of automorphisms acting freely and S/G of genus two (but it might be that this topological action of G in this surface is not topologically conjugated to the one we started on S).As a consequence of part (1) of Theorem <ref>, if N is a normal subgroup of F, of a finite odd index, such that N ∩ℭ = ∅, then the canonical projectionθ:F → G=F/N provides a free action of G that does not extend to a handlebody. Given an element w ∈ℭ, one may use GAP <cit.> to check if it belongs to ℭ∩ N (but ℭ is infinite).Fortunately, in F there are only a finite number of subgroups of a fixed index d=|G|. This means that the Aut^+(F)-orbit of N is finite, that is, there is a suitable finite subcollection ℭ_G of ℭ such that the above free action of G does not extend to a handlebody if and only if ℭ_G∩ N =∅. Part (1) of Theorem <ref> can be restated as follows. The free action of G on S extends to a handlebody if and only ifthere is a set of generators {S_1, S_2, S_3, S_4} of F such that: (i) each S_1, S_2, S_3, S_4, [S_1,S_2] represents an essential simple loop,(ii) [S_1,S_2][S_3,S_4]=1, and (iii) [S_1,S_2]∈(θ). A known algorithm to detect elements of F representing simple loops is provided in <cit.> but it seems to be hard to check.Let θ:F → G be a surjective homomorphism, where G is a finite group of odd order, such that [x_1,y_1] ∉(θ). Let N be the (finite) intersection of all the Aut^+(F)-images of (θ). Then the free action of the group G_N=F/N (of odd order) induced by the natural projection θ_N:F → F/N cannot extend to a handlebody. If moreover, B_0(G_N)=0, then this is an extendable free action that does not extend to a handlebody. Let us denote by N_1=(θ), …, N_r the Aut^+(F)-orbit of (θ) and let N:=N_1∩⋯∩ N_r. As seen in Lemma <ref>, N ∩ℭ = ∅. Now, we may consider the surjective homomorphism θ_N:F → G_N=F/N. We note that G_N has odd order (G_N is a subgroup of G^r). As a consequence of Theorem <ref>, the corresponding free action of G_N cannot extend to a handlebody. In the above result, we have two possibilities for the free action of G_N. Either (i) G_N does not extend or (ii) G_N extends but not to a handlebody. Also, if K ≤ N is a finite index subgroup, which is also normal in F, then F/K provides a free action on a closed orientable surface that does not extend to a handlebody.As a consequence of Part (2) of Theorem <ref>, we may observe the following. Let G a finite group of odd order which: (i) it can be generated by elements a,b,c,d such that [a,b][c,d]=1, and (ii) it does not have a set of generators s_1,s_2,s_3,s_4 such that[s_1,s_2]=1=[s_3,s_4]. Then every free action of G on a surface of genus g_G=1+|G| never extends to a handlebody. In particular, if moreover B_0(G) =0, then every free action of G in genus g_G is extendable, but not to a handlebody. Let θ:F → G, defined by θ(x_1)=a, θ(y_1)=b, θ(x_2)=c and θ(y_2)=d. The kernel of θ provides a free action of G on a closed orientable surface S of genus g_G=|G|+1 such that S/G has genus two.Since B_0(G)=0, it follows (from Samperton's theorem <ref>) that G extends. 
Theorem <ref> assert that this action cannot extend to a handlebody. Two groups of orientation-preserving homeomorphisms of a surface S are topologically equivalent if an orientation-preserving homeomorphism of S conjugates one into the other. We say that G is topologically rigid in genus 1+|G| if any two free actions of G in that genus are topologically equivalent. Let G be a finite group of odd order.If the topological free action of G in genus 1+|G| is rigid, then the non-extendability of G to a handlebody is equivalent to the non-existence of a set of generators s_1,s_2,s_3,s_4 of G with [s_1,s_2]=1=[s_3,s_4]. §.§ The case γ≥ 3Let ℭ^γ be the Aut^+(F^γ)-orbit of the set {[x_1,y_1],…,[x_γ,y_γ]}, where Aut^+(F^γ) denotes the group of automorphisms of F^γ induced by orientation-preserving homeomorphisms of R.Let G be a finite group, either of odd order or with a unique element of order two, which can be generated by 2γ elements a_1,b_1…,a_γ,b_γ such that [a_1,b_1] ⋯ [a_γ,b_γ]=1, where γ≥ 3.A free action of G on a surface S, induced by a surjective homomorphism θ:F^γ→ G, extends to a handlebody if and only if there is some ϕ∈ Aut^+(F^γ) such that ℭ∩(θ) ≠∅.The proof follows from similar arguments as for the proof of Theorem <ref> (by using Proposition <ref>). It is also possible to write down an equivalent result for the case that G admits orientation-reversing elements (see Remark <ref>)§ FINITE GROUPS ACTING FREELY WHICH EXTEND BUT NOT TO HANDLEBODIESLet N_1 be a normal subgroup of F of finite index d.As F has a finite number of subgroups of index d, the Aut^+(F)-orbit of N_1 consists of a finite collection N_1, …, N_r of normal subgroups of F. Let us consider the intersection N=N_1∩⋯∩ N_r; a finite index normal subgroup which is invariant under Aut^+(F). For each j ∈{1,…,r}, the natural projection θ_j:F → G_j=F/N_j induces a free action of G ≅ G_j on a closed orientable surface S_j such that S_j/G_j=R (as before, R represents a fixed closed orientable surface of genus two). Let π_j:S_j→ R be a Galois covering with deck group G_j associated with the free action induced by θ_j. The fiber product of these r pairs (S_1,π_1), …, (S_r,π_r),Ŝ={(s_1,…,s_r) ∈ S_1×⋯× S_r: π_1(s_1)=⋯=π_r(s_r)}⊂ S_1×⋯× S_r,is a (possibly non-connected) closed orientable surface. This surface admits the group G^r = G_1×⋯× G_r as a group of orientation-preserving homeomorphisms, acting freely and with Ŝ/G^r=R. Moreover, the map π:Ŝ→ R defined by π(s_1,…,s_r)=π_1(s_1)(=π_j(s_j)) is a Galois covering with deck group G^r. The natural projections P_j:Ŝ→ S_j, defined by P_j(s_1,…,s_r)=s_j are Galois coverings, with deck group G^r-1 (obtained from G^r by eliminating the factor G_j), satisfying π=π_j∘ P_j.The surface Ŝ might not be connected, but any two connected components of Ŝ are necessarily homeomorphic <cit.>. Let S_N be one of these connected components and let G_N be its G^r-stabilizer (note that, G_N has odd order if G has odd order). Then G_N is a group of orientation-preserving homeomorphisms of S_N which acts freely andsuch that S_N/G_N=R. If [x_1,y_1] ∉ N, then N ∩ℭ=∅, that is, G_N cannot extend to a handlebody. Let us assume, by contradiction, there is some w ∈N ∩ℭ. Let ϕ_j∈ Out_0^+(F) be such that N_j=ϕ_j(N_1). Then there is some k ∈{1,…,r} and there is some ϕ∈ Aut^+(F) with ϕ(N_1)=N_1 such that w=ϕ_k(ϕ([x_1,y_1])). So, as w ∈ N < N_k, it holds that ϕ_k(ϕ([x_1,y_1])) ∈ N_k, that is, [x_1,y_1] ∈ϕ^-1(ϕ_k^-1(N_k))=N_1=(θ), a contradiction. If all Sylow subgroups of G are abelian groups, then B_0(G_N)=0. 
We first note that a p-Sylow subgroup of G^r is a direct product of r copies of p-Sylow subgroups of G (which are abelian by our hypothesis). It follows that the p-subgroups of the subgroup G_N are also abelian. Now, the result follows from Lemma <ref>. As a consequence of Lemma <ref> and Lemma <ref>, together with Theorem <ref>, we can answer Zimmermann's question in the negative. There are finite groups of orientation-preserving homeomorphisms of a closed orientable surface that act freely and such that they extend but not to a handlebody. We may consider G ≅ℤ_p⋊ℤ_q as in Example <ref> in Section <ref>, together with the surjective homomorphism θ:F → G defined there (for which [x_1,y_1] ∉ N_1=(θ)). Let us consider the above fiber product construction to obtain G_N. By Lemma <ref>, together with Theorem <ref>, the free action of G_N cannot extend to a handlebody. As the Sylow subgroups of G are either p-Sylow subgroups (isomorphic to ℤ_p) or q-Sylow subgroups (isomorphic to ℤ_q), hence abelian, it follows from Lemma <ref> (and Samperton's theorem) that G_N extends. § EXAMPLES This section discusses some examples of free actions of a finite group G of orientation-preserving homeomorphisms of a closed orientable surface S such that R=S/G has genus two. As seen in Theorem <ref>, such an action is induced by a surjective homomorphism θ:F → G, where F=⟨ x_1,y_1,x_2,y_2: [x_1,y_1][x_2,y_2]=1⟩. Theorem <ref> asserts that the action extends to a handlebody if and only if ℭ∩(θ) ≠∅. The collection ℭ is infinite, but we know there is a suitable finite subcollection of it (depending on G) over which we need to search for such elements. For making explicit computations, we consider the subcollection ℭ_0 (of cardinality 13,446) consisting of those elements of ℭ of the form [x_1,y_1], (x_2^-1y_1)[x_1,y_1](x_2y_1^-1), together with those of the form σ_i_n∘⋯∘σ_i_1∘σ_3([x_1,y_1]), n ∈{1,2,3,4,5,6,7,8,9}, i_j∈{1,…,5}. §.§ Example 1 Let (i) 3 ≤ q < p be prime integers, (ii) 2 ≤ r<p, and (iii) r^q≡ 1 (mod p), and consider the semi-direct product G=⟨ a,b: a^p=1=b^q, bab^-1=a^r⟩≅ℤ_p⋊ℤ_q. As p and q are relatively prime, B_0(G)=0 (see <cit.> and Section <ref>). Note that [a,b]=a^1-r and [b^-1,ba^-1]=a^r-1. Let us consider the surjective homomorphism θ:F → G defined by θ(x_1)=a, θ(y_1)=b, θ(x_2)=c=b^-1, θ(y_2)=d=ba^-1. The homomorphism θ induces a free action of G on a closed orientable surface S such that S/G has genus two. As B_0(G)=0, this action always extends. By Proposition <ref>, it also extends to a handlebody, so (by part (1) of Theorem <ref>) this means that for each tuple [p,q,r] there is some w ∈ℭ∩(θ). We observe that [x_1,y_1] ∉(θ). The element w:=σ_5(σ_4(σ_3([x_1,y_1])))=y_2x_2^-2y_1 [x_1,y_1] x_2^2y_2^-1y_1^-1 belongs to (θ) if and only if r^3≡ 1 (mod p). As r^q≡ 1 (mod p) and r ∈{2,…,p-1}, we obtain that this is equivalent to having q=3 (as q is a prime integer) and r^3≡ 1 (mod p). This asserts that, for tuples of the form [p,3,r], one has that w ∈(θ). If q >3, then we may use GAP to find an element w ∈ℭ∩(θ). For instance, let us consider those tuples of the form [p,q,r], where 5 ≤ q <p ≤ 23. In Table <ref>, we describe one element w ∈ℭ_0∩(θ) for each of these triples. §.§ Example 2 Let G be the group generated by α,β,γ,δ, together with the following relations: α^3,β^3,γ^3,δ^3, [α,β]^3, [α,δ], [γ,δ], [[α,β],α], [[α,β],β], γ^-1αγβ^-1α^-1β, γ^-1βγα^-1β^-1α, δ^-1βδα^-1β^-1α.
G = ⟨α,β⟩⋊⟨γ,δ⟩≅(ℤ_3^2⋊ℤ_3) ⋊ℤ_3^2. If we set a=α, b=γ, c=β and d=δ, then [a,b][c,d]=1. In the GAP library, this group corresponds to SmallGroup(3^5,65) and (using GAP) B_0(G)=0, so every free action of G always extends. Next, we observe that those actions in genus 244 necessarily extend to handlebodies. Every free action of the group SmallGroup(3^5,65) in genus 244 extends to a handlebody. Let θ:F → G be any surjective homomorphism and set u=θ(x_1) and v=θ(y_1). If [u,v]=1, then (by part (1) of Theorem <ref>) the free action of G induced by θ extends to a handlebody. Let us assume now that [u,v]≠ 1. By computations with GAP, we may observe that, up to Aut(G), there is only one pair (u,v) ∈ G × G such that [u,v] ≠ 1. So, up to post-composition by an automorphism of G, we only need to consider those θ such that θ(x_1)=a and θ(y_1)=b. If we set r=θ(x_2) and t=θ(y_2), then the pair (r,t) satisfies the following: * [a,b][r,t]=1, * G=⟨ a,b,r,t⟩. As the elements in the set {x_1x_2, x_1y_1^± 1, x_1y_2^-1, x_2y_2^± 1, x_2y_1^-1, y_2y_1}⊂ F represent essential simple loops, if 1 ∈{ur, uv^± 1, ut^-1, rt^± 1, rv^-1, tv}, then the free action of G induced by θ again extends to a handlebody. Let us only consider those pairs (r,t) such that 1 ∉{ur, uv^± 1, ut^-1, rt^± 1, rv^-1, tv}. Such a collection has cardinality 12,312. Using GAP, for each of these pairs (r,t) we can find some w ∈ℭ_0∩(θ) and, by part (1) of Theorem <ref>, such a free action extends to a handlebody. §.§ Example 3 Let us consider the group G= SmallGroup(3^5,28) which, by <cit.>, admits some free action in genus 244 which does not extend. In this case, the list of pairs (u,v) ∈ G × G with the property that [u,v] ≠ 1 has cardinality 54,432. Up to Aut(G), there are 96 such pairs. We fix a particular pair (a,b). For such a pair, we obtain 3,132 other pairs (r,t) such that (i) [a,b][r,t]=1 and (ii) G=⟨ a,b,r,t⟩. For each (r,t), we consider the surjective homomorphism θ_a,b,r,t:F → G, defined by θ_a,b,r,t(x_1)=a, θ_a,b,r,t(y_1)=b, θ_a,b,r,t(x_2)=r and θ_a,b,r,t(y_2)=t. For each of them, we use GAP to compute ℭ_0∩(θ_a,b,r,t). We obtain that only 1,188 of them have non-empty intersection (so they extend to a handlebody). §.§ Example 4 Let p ≥ 3 be a prime and consider the Heisenberg group (of order p^3) G:=⟨ x,y: x^p=y^p=[x,y]^p=[x,[x,y]]=[y,[x,y]]=1⟩. As G is an extraspecial group, B_0(G)=0, so every free action of G extends. Let us consider the case p=3. In this case, the list of pairs (u,v) ∈ G × G with the property that [u,v] ≠ 1 has cardinality 432 and, up to Aut(G), there is only one. One of these pairs is (a,b)=(x,y). For such a pair, we obtain 189 other pairs (r,t) such that (i) [a,b][r,t]=1 and (ii) G=⟨ a,b,r,t⟩. For each (r,t), we consider the surjective homomorphism θ_a,b,r,t:F → G, defined by θ_a,b,r,t(x_1)=a, θ_a,b,r,t(y_1)=b, θ_a,b,r,t(x_2)=r and θ_a,b,r,t(y_2)=t. For each of them, we use GAP to check that ℭ_0∩(θ_a,b,r,t) ≠∅. So, all of these free actions of G extend to a handlebody. §.§ Acknowledgements The author is indebted to E. Samperton and A. Carocca for valuable discussions on preliminary versions of this paper. 99 BM P. Bergau and J. Mennicke. Über topologische Abbildungen der Brezelfläche vom Geschlecht 2. Math. Z. 74 (1960), 414–435. BS Joan S. Birman and C. Series. Dehn's algorithm revisited, with applications to simple curves on surfaces. Ann. of Math. Stud. 111, Princeton University Press, Princeton, NJ, 1987, 451–478. BSS M. Boggi, E. Samperton and C. Segovia. Private communication. Bogomolov F. A. Bogomolov.
The Brauer group of quotient spaces by linear group actions. Izv. Akad. Nauk SSSR Ser. Mat. 51 (1987) 485–516; English transl. in Math. USSR Izv. 30 (1988) 455–485. BMP F. A. Bogomolov, J. Maciel and T. Petrov. Unramified Brauer groups of finite simple groups of Lie type A_ℓ. Amer. J. of Math. 126 No.4 (2004), 935–949. CH A. F. Costa and R. A. Hidalgo.Anticonformal automorphisms and Schottky coverings.Ann. Acad. Scie. Fenn. 26 (2001), 489–508. DS J. E. Dominguez and C. Segovia. Extending free actions of finite groups on surfaces. Topology and its Applications 305 (2022), 107898. GZ M. Gradolato and B. Zimmermann. Extending finite group actions on surfaces to hyperbolic 3-manifolds. Math. Proc. Cambridge Philos. Soc. 117 (1995), 137–151.Sumana S. Hatui. An exact sequence and triviality of Bogomolov multiplier of groups. Journal of Algebra 619 (2023), 199–220. H1 R. A. Hidalgo. On Schottky groups with automorphisms. Ann. Acad. Scie. Fenn. 19 (1994), 259–289.H2 R. A. Hidalgo. Schottky uniformizations of closed Riemann surfaces with abelian groups of conformal automorphisms. Glasgow Math. J. 36 (1994), 17–32.H4 R.A. Hidalgo. On the 12(g-1) Bound. C.R. Math. Rep. Acad. Sci. Canada (1996), 39–42.H5 R.A. Hidalgo. Automorphism groups of Schottky type.Ann. Acad. Scie. Fenn. 30 (2005), 183–204.H3 R. A. Hidalgo. Geometric description of virtual Schottky groups. Bull. London Math. Soc. 52 (2020), 530–545. H-M R. A. Hidalgo and B. Maskit.A Note on the Lifting of Automorphisms.In Geometry of Riemann Surfaces. Lecture Notes of the London Mathematics Society 368, 2009.Edited by Fred Gehring, Gabino Gonzalez and Christos Kourouniotis. ISBN: 978-0-521-73307-6; doi.org/10.1017/cbo9781139194266.013HRV R. A. Hidalgo, S. Reyes-Carocca and A. Vega. Fiber product of Riemann surfaces. Contemporary Mathematics 776 (2022), 161–175.Hopf H. Hopf. Fundamentalgruppe und zweite Bettische Gruppe.Comment. Math. Helv. 14 (1942), 257–309. Kang Ming-chang Kang. Bogomolov multipliers and retract rationality for semidirect products. Journal of Algebra 397 (2014), 407–425.Kang1 Ming-chang Kang. The Bogomolov multipliers of rigid finite groups Archiv der Mathematik 102 (2014), 209–218.Kerckhoff S. P. Kerckhoff. The Nielsen realization problem. Bull. of the Amer. Math. Soc. New Series 2 (1980), 452–454.Koebe P. Koebe. Über die Uniformisierung der Algebraischen Kurven II. Math. Ann. 69 (1910) 1–81.Kuy B. E. Kunyavskiǐ. The Bogomolov multiplier of finite simple groups.In: “Cohomological and geometric approaches to rationality problems”,Progr. Math. 282, Birkhäuser, Boston, MA, 2010, pp. 209–217.Maskit:Comb B. Maskit. On Klein's combination theorem III. Advances in the Theory of Riemann Surfaces (Proc. Conf., Stony Brook, N.Y., 1969), Ann. of Math. Studies 66 (1971),Princeton Univ. Press, 297-316; doi.org/10.2307/j.ctt9qh044Maskit:Comb4 B. Maskit. On Klein's combination theorem. IV.Trans. Amer. Math. Soc. 336 (1993), 265–294; doi.org/10.2307/2154347M-Y W. H. Meeks, III and S.-T. Yau. The equivariant loop theorem for three-dimensional manifolds. The Smith Conjecture, Academic Press, New York, 1984, pp. 153–163. Michailov1 I. M. Michailov. Bogomolov multipliers for unitriangular groups. Comptes rendus de l'Acade'mie bulgare des Sciences 68 (2015), 689–696. Moravec P. Moravec. Unramified Brauer groups of finite and infinite groups. Amer. Math. J. of Math. 134 No. 6 (2012), 1679–1704. Nielsen J. Nielsen. Untersuchungen zur Topologie der geschlossenen zweiseitigen Flächen. Acta Math. 50 (1927), 189–358.RZ M. Reni and B. Zimmermann. 
Extending finite group actions from surfaces to handlebodies. Proc. Amer. Math. Soc. 124 (1996), 2877–2887. Saltman D. J. Saltman.Noether’s problem over an algebraically closed field. Invent. Math. 77 (1984), 71–84.Samperton E. Samperton. Free actions on surfaces that do not extend to arbitrary actions on 3-manifolds. Comptes Rendus Mathematátique 360 (2022), 161–167.GAP The GAP Group, “GAP–Groups, Algorithms, and Programming”, 2021,Version 4.11.1, https://www.gap-system.org.
http://arxiv.org/abs/2310.18124v1
{ "authors": [ "Rubén A. Hidalgo" ], "categories": [ "math.GT", "57M60, 57M10" ], "primary_category": "math.GT", "published": "20231027131030", "title": "Extending finite free actions of surfaces" }
Instituto de Física Gleb Wataghin - Universidade Estadual de Campinas (UNICAMP), 13083-859, Campinas SP, Brazil Departamento de Física Teórica y del Cosmos, Universidad de Granada, Campus de Fuentenueva, E–18071 Granada, Spain Department of Physics, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan Instituto de Física Gleb Wataghin - Universidade Estadual de Campinas (UNICAMP), 13083-859, Campinas SP, Brazil Instituto de Física Gleb Wataghin - Universidade Estadual de Campinas (UNICAMP), 13083-859, Campinas SP, Brazil Instituto de Física Gleb Wataghin - Universidade Estadual de Campinas (UNICAMP), 13083-859, Campinas SP, Brazil Instituto de Física Corpuscular, Universitat de València, E-46980, Valencia, Spain Instituto de Física Gleb Wataghin - Universidade Estadual de Campinas (UNICAMP), 13083-859, Campinas SP, Brazil Since neutrino oscillation was observed, several experiments have been built to measure its parameters.NOνA and T2K are two long-baseline experiments dedicated to measuring mainly the mixing angle θ_23, the charge-parity conjugation phase δ_ CP, and the mass ordering. However,there is a tension in current data. The T2K allowed region is almostexcluded by the NOνA result at the 90% confidence level. We proposea non-standard interaction (NSI) in neutrino production to relieve this tension.The NSI is computed through quantum field theory (QFT) formalism,where we derive perturbative analytical formulae considering NSI in the pion decay. Within this new approach, we can alleviate NOνA and T2K tension for a NSI complex parameters of order 10^-3. We show the new phase has a degeneracy to the Dirac CP phase of the form δ_ CP±ϕ= 1.5π being a possible source of violation of charge-parity symmetry.14.60.Pq,14.60.St,13.15.+g Alleviating the present tension between T2K and NOνA with neutrino New Physics at source E. S. SouzaJanuary 14, 2024 ========================================================================================Introduction.— The neutrino oscillation phenomenon provides evidence of physics beyond the StandardModel. Since its discovery, several experiments have measured neutrino oscillationparameters <cit.>.One not yet measured is the charge-parity (CP) conjugation phaseδ_ CP that quantifies the asymmetry between particle and anti-particle. The two long-baseline acceleratorexperiments, NOνA and T2K, were designed to measure this parameter.NOνA and T2K have recently released new data, revealing a tension within theallowed parameter regions <cit.>. In the standard three-neutrino oscillation scenario, the preferred region at a 90%confidence level for T2K data is nearly excluded by the NOνA data in the sin^2 θ_23 vs. δ_ CP parameter space.These results could indicate physics beyond the Standard Model. Numerous studies have been dedicated to explaining this tension, exploring various newphysics scenarios <cit.>. 
We propose a novel approach that includes non-standard interactions in neutrino production, specifically via pion decay. By adopting an effective field theory approach <cit.>, we can straightforwardly modify the rate of pion decay to include these non-standard interactions during production. We have derived for the first time a perturbative analytical expression for neutrino oscillation in matter considering this new interaction at the source. The new coupling constant may be complex, which introduces a new charge-parity violation phase. We investigate the interplay between the two phases: one originating from the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) neutrino mixing matrix <cit.> and the other from the effects of the new interaction. In this Letter, we demonstrate that the tension is alleviated even if only one new complex parameter is non-zero. We have determined that the absolute value of the new interaction parameter is of order 10^-3. New physics in the neutrino sector from an EFT perspective.— We consider the non-standard interactions in neutrino production essentially following the formalism introduced in <cit.>. The new physics is described by the Wilson coefficients of four-fermion effective interactions between neutrinos (ν_β), charged leptons (ℓ_α) and quarks (q_i), ∼q_i Γ_A^ijq_jℓ̅_αΓ_A'^αβ P_Lν_β, where i,j=u,d,c,s, … and α,β=e,μ,τ. The index A corresponds to the Lorentz indices of the interaction. All possible combinations of vertex structure are encoded in Γ, Γ'. Typically, neutrinos are produced via pion decay, and only vector, axial, and pseudo-scalar couplings with q_i = u, q_j = d contribute. The most interesting case corresponds to the latter, which is given by <cit.> ℒ_ P⊃√(2)G_F V_ud^ CKMϵ_αβ(u̅γ^5 d )(ℓ̅_αP_L ν_β) +h.c.  , where V^ CKM is the Cabibbo-Kobayashi-Maskawa (CKM) matrix <cit.> and G_F is the Fermi constant. Furthermore, ϵ_αβ are complex Wilson coefficients describing the new interaction's magnitude relative to the weak interaction. The new interaction in Eq. (<ref>) creates another vertex for neutrino production beyond the traditional one <cit.>. With the new vertex, the total matrix element is a combination of the standard model amplitude (𝒜_L^ S) and the new physics amplitude (𝒜_P^ S), ℳ_α k^S=U_α k^* 𝒜_L^S+ [ ϵU ]_α k^*𝒜_P^S   . The upper index S, for source, indicates that the process occurs only in production, since typically there is no pseudo-scalar detection process <cit.>. Therefore, there are no relevant effects of the new interactions on the detection. It should be emphasized that the neutrino mass eigenstates are exclusively encoded in the PMNS mixing matrices <cit.>, so that the amplitudes 𝒜_L/P^ S depend solely on the neutrino flavor. Notice that off-diagonal terms of ϵ_αβ violate lepton flavor number. Neutrino event rate in the QFT formalism.— The event rate is the physical observable in neutrino oscillation experiments. In the formalism of Quantum Field Theory (QFT), neutrino production, propagation, and detection are treated as a single process. Therefore, the neutrino oscillation is quantified by a single tree diagram, as illustrated in Figure (<ref>) by the decay π^+→μ^+ + ν (production) followed by the detection ν + n → p + e^-. The time direction is from bottom to top. In the production and detection processes, the initial states are the pion and the neutron.
The detected particles (e.g., charged leptons and protons) are regarded as final states <cit.>. The neutrino participates in the process as an intermediate state, where the uncertainties of the initial state result in the superposition of neutrino mass eigenstates <cit.>. In this formalism, the neutrino event rate, including NSI in production, is <cit.> R_αβ^ NSI =κ∑_k j e^ -i Δ_kj𝒰_β k𝒰_β j^*×∫dΠ_Sℳ_α k^S (ℳ_α j^S)^*×∫dΠ_D|𝒜_L^D|^2  , where α and β denote produced and detected flavor states, respectively, κ is a constant that includes the kinematical factors and target size, Δ_kj≡Δ m_kj^2 L/2E_ν, with E_ν being the neutrino energy, L the source-detector distance, and Δ m_kj^2 ≡ m_k^2 - m_j^2 the neutrino mass squared difference; the amplitude ℳ_α k^S is given in Eq. (<ref>). The integrals are over the phase space elements for source (S) and detection (D). We denote by 𝒰 the PMNS mixing matrix <cit.> in constant matter <cit.>. The event rate in Eq. (<ref>) is associated to the oscillation probability through the definition P_αβ^ NSI≡R_αβ^ NSI/ϕ_α^ SMσ_β^ SM, corresponding to the transition ν_α→ν_β. It is conveniently written as P_αβ^ NSI=∑_k je^ -iΔ_kj[( 1 - p_αϵ)𝒰]_α k^*[( 1 -p_αϵ)𝒰]_α j𝒰_β k𝒰_β j^*  , where p_α= m_π^2/(m_α(m_u+m_d)), e.g., p_μ∼ 27 and p_e∼ 5500, which represent a chiral enhancement compared with the standard model rate <cit.>. In the end, the effect of NSI consists of substituting the matrix 𝒰_α i by [(1 - p_αϵ)𝒰]_α i. Although we have named Eq. (<ref>) the probability for the sake of resemblance to the traditional form, the presence of NSI makes the expression effectively unitarity-violating. In order to analyze the impact of individual NSI parameters on the oscillation probability, we consider two scenarios corresponding to a new source for muon or electron neutrinos. In the EFT formalism, they are implemented by allowing only one non-zero Wilson coefficient at a time, ϵ_μ e or ϵ_e μ, respectively. For the experimental analyses of interest, the parameter ϵ_μ e will modify the signal, and ϵ_e μ will affect the background. In the following, we will discuss the ϵ_μ e scenario to exemplify the perturbative formalism. Because the neutrino produced in pion decay is a muon neutrino, we need to calculate the probability P_μβ. We write for the first time an analytical formula for Eq. (<ref>) in terms of the evolution operator S^ OSC≡e^-i H t for the neutrino Hamiltonian, defined in a standard oscillation scenario. Therefore, P_μβ^ NSI = | S_βμ^ OSC-p_μ ϵ_μ e^* S_β e^ OSC|^2, where the complex coefficient is written explicitly as ϵ_μ e≡ |ϵ_μ e| e^iϕ_μ e. The advantage of writing the probability in this form is that the analytical expression for S^ OSC with matter effects is available in the literature <cit.>. It can also be straightforwardly generalized to other NSI scenarios and to other conversion/survival rates. The most important equation of this paper is the ν_μ→ν_e probability with NSI, using an analytical expression in matter. We have derived it by employing a perturbative approach <cit.>, where the leading terms are given by P_μ e^ NSI = 4s_13^2 s_23^2/ (1 - r_a)^2 sin^2((1 - r_a) Δ L / 2) +8J_r r_Δ/ r_a (1 - r_a) cos(δ_ CP+Δ L / 2 ) sin( r_aΔ L / 2 ) sin( (1 - r_a) Δ L / 2 )+ p_μ^2 |ϵ_μ e|^2+ 4 p_μ |ϵ_μ e|s_13s_23/1-r_asin((1-r_a)Δ L/2)sin(δ_ CP - ϕ_μ e + (1-r_a)Δ L/2)+O(r_Δ,s_13^2)   .
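As a rough numerical check of the size of the chiral enhancement entering this expression (the mass values used here are indicative only: m_π ≃ 139.6 MeV, m_μ ≃ 105.7 MeV, m_e ≃ 0.511 MeV and m_u+m_d ≃ 6.8 MeV), one finds p_μ ≃ (139.6)^2/(105.7 × 6.8) ≃ 27 and p_e ≃ (139.6)^2/(0.511 × 6.8) ≃ 5.6 × 10^3, consistent with the values quoted above.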
From the phenomenological nature of the parametersr_Δ ≡ Δ m_21^2/Δ m_31^2≃ζ and sinθ_13≃√(ζ) , withζ∼𝒪(10^-2).We also define Δ = Δ m_31^2/2E_ν,L is the distance between the source and detector,r_a = a/Δwith a=√(2)G_F N_e being the matter potential and the Jarskolg factor <cit.> J_r = c_12s_12c_23s_23s_13 in shorthand notation s_ij=sinθ_ij and c_ij=cosθ_ij. The probability for the antineutrino retains the form of Eq. (<ref>) with the replacements δ_ CP→ - δ_ CP,ϕ_μ e→ - ϕ_μ eanda → -a   . The analytical formulae are very useful to identify sources of CP violation. In the standard oscillation scenario, we recall that the survival probability (P_αα=|S_αα^ OSC|^2 for neutrinos of flavor α) is a CP-conserving quantity. Thus, CP-violating effects can only come from processes involving the conversion of flavor between neutrinos (given by P_βα=|S_αβ^ OSC|^2 with β≠α).In the presence of NSI, this reasoning does not hold, as can be easily checked by considering β=μ inEq. (<ref>). The case with β=e is even more instructive. First, it follows directly from Eq. (<ref>) that terms quadratic dependent on |ϵ_μ e| will not depend on δ_ CP. Secondly, the leading order terms given in Eq. (<ref>) show that the presence of NSI induces a CP-violation in terms of the difference of phases (δ_ CP-ϕ_μ e). Since the ratio between the standard term in the first line of Eq. (<ref>) to the last term is of order ζ, for p_μ|ϵ_μ e|∼ 27|ϵ_μ e|>ζ∼𝒪(10^-2), the NSI term may dominate, implying that the experiment may be more sensible to the difference (δ_ CP-ϕ_μ e) rather than the standard CP phase itself.We will show this tendency when presenting our numerical results.Finally, the perturbative formula is in good agreement with the exact one.Indeed, most of the energy range of the experiments discussed here exhibits an error of less than one percent  <cit.>, including the region of interest for the NOνA and T2K experiments.Experimental and simulation details.— We analyze the effects of NSI in neutrino production by pion decay through two long-baseline experiments: NOνA (NuMI Off-axis ν_e Appearance) and T2K (Tokai-to-Kamioka).The NOνA experiment <cit.> measures muonic neutrino disappearance and electronic neutrino appearance. Its beam is located in the Fermilab laboratory in United Statesand it travels 810 Km to the detector in Minnesota. Neutrinos go through a matter density ofρ_ NOν A = 2.84 g/cm^3. We adopt the configuration of 13.6× 10^20 protons on target (POT) for neutrinosand 12.5× 10^20 POT for antineutrinos. The mass of the target detector is 14 kt and the neutrino energy range is from 1 up to 5 GeV, with energy spectra peaked at 2.1 GeV.The T2K experiment <cit.> also measures muonic neutrino disappearance and electronic neutrino appearance.The beam is produced at J-PARC lab in Japan and travels 295 Km to the Super-Kamiokande detector.The matter density in this experiment is ρ_ T2K =2.6 g/cm^3.The T2K flux has 14.7 × 10^20 POT for neutrino mode and 16.4 ×10^20 POT for antineutrino mode.The detector has a target mass of 22.5 kt, and the neutrino energy range is from 0.1 up to 1.25 GeV, with energy spectra peaked at 0.6 GeV. We use GLoBES <cit.> to simulate thenumber of detected events, according to theEq. (<ref>) and to perform thestatistical analysis.We fix the solar parameters to their best-fit values <cit.> Δ m_21^2 =7.53× 10^-5 eV^2, andsin^2θ_12 = 0.307,minimizing the χ^2 function over all theother relevant parameters. 
We put a Gaussian prior on the reactor angle sin^22θ_13 = 0.083±0.0031 because it is well measured by other experiments <cit.>. We then present in the followingsections, a quantitative analysis of our model, andthe allowed region for oscillation and NSIparameters, for NOνA and T2K individually as well as combined.Alleviating the T2K and NOνA tension.— The NSI changes the neutrino oscillation probability, as seen in Eq. (<ref>). In particular, it modifies the dependence on CP-violation parameters. In the standard oscillation scenario, a common way to illustrate the impact of thestill unknown δ_ CP parameter is to consider the bi-probability idea <cit.>, the plane antineutrino - neutrino probability. We adopt the same idea here, but for the NSI scenario.In Figure <ref> we illustrate the influence of the complex NSI, by showing the bi-probability plotfor the conversion ν_μ→ν_e for neutrinos by the one for antineutrinos. The ellipses are generatingvarying the value of the CP phase,with the remaining parameters being the combined best-fit values for NOvA and T2K. We consider the two possible mass ordering, the so-called normal ordering (NO) and inverted ordering (IO)  <cit.>.In the left (right) panel we use the L and E_ν typical parametersfor the NOνA (T2K) experiment. We also show as dots the best fit valuefor δ_ CP.In the standard oscillation scenario, the best-fit parameters are sin^2θ_23=0.56, Δ m_31^2 = 2.49 (-2.38)× 10^-3 eV^2, and the CP phase δ_ CP/π=1.22 (1.50), for NO (IO). In the presence of NSI thesebest-fit parameters become sin^2θ_23=0.47, Δ m_31^2 = 2.50 (-2.38)× 10^-3 eV^2 and the CP phase δ_ CP/π = 1.23 (1.54),ϵ_μ e/10^-3 = 2.13 (1.22)and the CP phase of NSI parameterϕ_μ e/π = 1.58 (1.54), for NO (IO). The estimated values of the probabilities, with uncertainties, isrepresented by the black cross. For the best-fit values of theNSI parameters, we notice theellipses change appreciablyeven though ϵ_μ e is of order 10^-3.The noticeable changes are due to the chiral enhancement term presented in the pion decay p_μ∼ 27, which is always multiplying by |ϵ_μ e|, see Eq. (<ref>).In addition, the phase ϕ_μ e introduces a new source of CP violation. The main message from Figure <ref> is: for both NOνA and T2K, the presence of NSI allows the best-fit values (solid circles) to be closer to the experimental result, in particular for NO. As we now discuss, this will be essential to alleviate the tension between these two experiments.Data from NOνA and T2K, neutrino and antineutrino appearance, disagree when consideringthe standard neutrino oscillation model.Each experiment individually prefers NO, but when combined, the preference is for IO. As our results will show, by combining both experiments, T2K dominates over NOνA. In Figure <ref>, we show the allowed region with NSI in δ_ CP and sin^2θ_23 parameter space for NOνA (T2K) in blue (pink) with 90% of C.L., for NO and also the combined analysis in black lines. On the left-hand side we show the standard oscillation scenario. In the middle panel, we have the effects of NSI considering only the parameter ϵ_eμ and in the right-hand side only ϵ_μ e. In both cases, the regions overlap completely for NO with 90% of C.L., alleviating the tension between the experiments. 
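All of the fits behind these allowed regions include the reactor-angle prior introduced above as a Gaussian pull term added to the statistical χ^2 before minimization. The schematic sketch below shows how such a penalty can be profiled; `chi2_stat` stands in for the GLoBES-computed statistical χ^2 and is not part of the actual analysis code.

```python
from scipy.optimize import minimize_scalar

PRIOR_MEAN, PRIOR_SIGMA = 0.083, 0.0031   # Gaussian prior on sin^2(2*theta_13)

def chi2_with_prior(chi2_stat, s22th13):
    """Total chi^2 = statistical term + Gaussian pull for the reactor angle."""
    return chi2_stat(s22th13) + ((s22th13 - PRIOR_MEAN) / PRIOR_SIGMA) ** 2

def profile_reactor_angle(chi2_stat):
    """Minimize over sin^2(2*theta_13), as done for all nuisance parameters."""
    res = minimize_scalar(lambda s: chi2_with_prior(chi2_stat, s),
                          bounds=(0.05, 0.12), method="bounded")
    return res.fun, res.x
```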
Indeed, our analyses were quantified using the GLoBES software <cit.>, whose results are summarized in Table <ref>.A fair estimate for the compatibility of a given model for different data sets is given by the parameter goodness of fit <cit.>.The parameter goodness of fit (PG) is defined as χ^2_ PG≡χ^2_min-∑_k (χ^2_k)_min, whereχ^2_min and(χ^2_k)_min are the global minimum and the local minimum. It is illustrative to notice the p-values of the different scenarios in Table <ref>.If the NSI contribution is absent, the p-value for NO is only 4%, which allows to exclude at 95% C.L. the standard hypothesis for NO.On the other hand, IO is strongly favoured in this case.It clearly shows the nature of the present tension among the T2K and NOνA experiments, since each of them, individually, prefer NO.By including the NSI parameter the p-value for both NO and IO is close to each other with a slight preference for IO.Although it is not possible to define a preference for the neutrino mass ordering based on the combination of both experiments, the tension is lifted since NO is not disfavoured anymore.As seen in Table <ref>, the best-fit for the combined analysis has δ_ CP different than zero and the NSI phase. It is then natural to ask how sensitive are the experiments to claim that CP is violated in the leptonic sector. In Figure <ref>, we show the allowed regions with 68 and 90 % C.L. in the parameter space of phases for normal ordering. The left (right) panel corresponds to the parameter space δ_ CPvs. ϕ_eμ (δ_ CPvs. ϕ_μ e). As anticipated from Eq. (<ref>), for |ϵ_μ e|∼𝒪(10^-3) the conversion probability has a dependence on the phase difference δ_ CP-ϕ_μ e, which explains the tendency seen on right panel of Figure <ref>.For ϵ_e μ, the left panel ofFigure <ref>, there is a dependence on the sum of phases, which is now much more evident.We should emphasize again that ϵ_e μ modifies the survival probability, which, in the standard case, does not induce any CP violation.Once the NSI in production is taken into account, it is possible to observe CP violation in the scenario ϵ_eμ at 90 % C.L. for the standard phase δ_ CP but also for the sum δ_ CP + ϕ_e μ. For the other scenario, it is not possible to claim CP violation on the leptonic sector at 90 % C.L., only at 68 % C.L.Finally, we contrast the parameter region allowed by NOνA and T2K data against constraints from other experiments. The same Lagrangian shown inEq. (<ref>) can induce changes in the pion leptonic decay rate, which is one the best-measured values <cit.>.We show in Figure <ref> the allowed region in the real vs. imaginary part of the NSI parameter space, for ϵ_eμ in the left and ϵ_μ e in the right panel, in blue. We also show in pink the region allowed by the constraints on pion decay, which is the process with the most stringent bounds to our scenario with NSI <cit.>. The allowed region from neutrino experiments alone is dramatically reduced for the case ϵ_eμ. Nevertheless, we should emphasize that the standard oscillation scenario (ϵ_eμ=0) is excluded at 90% C.L. For the case ϵ_μ e, the main effect is to constrain the real part of the NSI parameter. Including data from the neutrino experiments reduces the allowed region in the imaginary axis from pion decay experiments alone. Previous constraints were obtained from neutrino oscillation experiments to be ϵ_μ e< 4× 10^-3 <cit.>and ϵ_μ e< 2.6× 10^-3 <cit.> and our bounds are more stringent. Discussion & Conclusion.—Neutrino oscillation is a unique probe to BSM interactions. 
Long-baseline neutrino oscillation experiments are particularly sensitive to non-standard neutrino interaction (NSI). We showed that a new pseudo-scalar four-fermion interaction between quarks and leptons modifies the neutrino production. In this scenario there is new source of CP violation from the complex NSI parameter, ϵ_μ e or ϵ_eμ. It impacts the T2K and NOνA analyses, being compatible with constraints from other experiments.The new interaction at the source (NSI) contains an extra source of CP violation through a CP violation phase. We have found for the first time an analytical formula for neutrino propagation in matter that is completely in agreement with the numerical solution.For the scenario with ϵ_eμ, we show that it is possible not only to alleviate the T2K-NOνA tension as shown in Figure <ref> by 2.5 σ C.L. but alsoto claim CP violation in the leptonic sector at 90 %C.L. We also predicted the correlation for the sumof the phases δ_ CP +ϕ_eμ =1.5π.For the scenario with ϵ_μ e, there is also a correlation between the new phase ϕ_μ e with thestandard Dirac CP phase roughly as δ_ CP -ϕ_μ e= 1.5π, which is predictedby the analytical formula in Eq. (<ref>). Our allowed region for the NSI parameters as shown in Figure <ref> is compatible with bounds of very precise measurement of the pion decay rate and indicate a non-zero parameter for NSI. We have found that the NSI parameter is non-null at 3.0σ (1.5σ) for ϵ_eμ (ϵ_μ e). The non-zero value of the NSI parameter opens a new window to understand the source of CP violation and it can be tested in future neutrino oscillation experiments.A.C. acknowledges support from National Council for Scientific and Technological Development – CNPq through projects 1665232020-8 and 2010132022-3. P.S.P. acknowledges support by the National Natural Science Foundation of China (12375101, 12090060 and 12090064) and the SJTU Double First Class start-up fund (WF220442604). P.S.P. also acknowledges support by the Grant-in-Aid for Innovative Areas No. 19H05810. O.L.G.P. acknowledges support for FAPESP funding Grant 201419164-6 and 202208954-2 and National Council for Scientific and Technological Development – CNPq grant 3065652019-6 and 3064052022-9. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. E. S. S. acknowledges support from National Council for Scientific and Technological Development – CNPq through project 1404842023-0.apsrev4-1Supplemental Material This Supplemental material contains a more detailed description of the analytical probabilities derived in this work, their comparison against the numerical method ,and additional plots regarding the χ^2 analysis of the NSI parameters.§PROBABILITIES WITH NSI AT SOURCE The transition amplitudein the presence of a pseudo-scalar interaction √(2)G_F V_ud^ CKMϵ_αβ(u̅γ^5 d )(ℓ̅_αP_L ν_β) changes thestandard oscillation amplitude S_βα^ OSC≡⟨ν_β | e^-i H L |ν_α⟩ by combining it with the ϵ matrix, S_βα^ OSC→S_βα^ NSI=( δ_αα' - p_αϵ_αα'^* )S_βα' ^ OSC.Then, the oscillation probability P_αβ^ NSI≡ |S_βα^ NSI|^2 for the twotransitions become P_μ e^ NSI = | S_e μ^ OSC-p_μ ϵ_μ e^* S_e e^ OSC|^2, P_e e^ NSI = | S_e e^ OSC-p_e ϵ_e μ^* S_e μ^ OSC|^2   . We write the NSI complex parameters as ϵ_μ e≡ |ϵ_μ e| e^iϕ_μ e andϵ_eμ = |ϵ_eμ|e^iϕ_eμ. 
The amplitudes S_e μ^ OSC and S_e e^ OSC were obtained analytically in <cit.>.Since S_e e^ OSC and S_e μ^ OSC enterin both oscillation probabilities, we can derive a transformation F, acting on the elements of ϵ matrix in such a way that P_μ e^ NSI P_e e^ NSI,F: p_μϵ_μ e^*→ 1 / p_e ϵ_e μ^*, by the relationshipP_e e^ NSI = p_e^2 |ϵ_e μ^*|^2 P_μ e^ NSI( [ F : p_μϵ_μ e^*] )   .We also obtain the perturbative formulas for the oscillation probabilities for the ν_μ→ν_e transition P_μ e^ NSI (ϵ_μ e≠ 0) and survival P_e e^ NSI (ϵ_eμ≠ 0), with matter effects included for the NSI at source scenario.The expansion uses the established hierarchy between the oscillation parameters <cit.>, r_Δ ≡ Δ m_21^2/Δ m_31^2≃ζ  , and sinθ_13≃√(ζ)  , where ζ∼ 0.01 is the perturbative expansion parameter.The advantages of developing a perturbative method lie primarily in separating the different orders of contribution <cit.>. This separation allows us to describe analytical solutions and understand which terms are predominant. Then we can write the evolution matrix and therefore the neutrino transition matrix as a series expansion in powers of ζ,S_βα=[S_βα]^(0)+[S_βα]^(1/2)+[S_βα]^(1)+[S_βα]^(3/2)+⋯where [S_βα]^(r) with the index r=0, 1/2,1, 3/2 denotes the power law dependency of ζ in the expansion.The expansion Eq. (<ref>) translates in an expansion in P_αβ. The explicit expression we obtain for the probability is P_μ e^ NSI= [P_μ e^ NSI]^(0) + [P_μ e^ NSI]^(1/2) + [P_μ e^ NSI]^(1) +[P_μ e^ NSI]^(3/2) + ⋯[P_μ e^ NSI]^(0) =p_μ^2 |ϵ_μ e|^2  ,[P_μ e^ NSI]^(1/2) =4 p_μ|ϵ_μ e|s_13s_23/1-r_asin((1-r_a)Δ L/2)sin(δ_ CP- ϕ_μ e + (1-r_a)Δ L/2)  ,[P_μ e^ NSI]^(1) = 4 (s_23^2-p_μ^2|ϵ_μ e|^2) s^2_13/ (1 - r_a)^2 sin^2 ( (1 - r_a) Δ L / 2 )-4 p_μ|ϵ_μ e| s_12c_12c_23r_Δ/r_asin(r_aΔ L/2)sin( r_aΔ L /2+ϕ_μ e)  , [P_μ e^ NSI]^(3/2) =8 J_r r_Δ/ r_a (1 - r_a) cos(δ_ CP+Δ L / 2 ) sin(r_aΔ L / 2 ) sin( (1 - r_a) Δ L / 2 ) +p_μ|ϵ_μ e| s_13s_23/(1-r_a)^3[ 2(1 - r_a)Δ L sin(δ_ CP- ϕ_μ e + (1-r_a)Δ L )(2 r_a s_13^2 -(1 - r_a) r_Δ s_12^2 )+[ -cos (δ_ CP -ϕ_μ e ) +cos( δ_ CP -ϕ_μ e +(1-r_a)Δ L )][ (3 + r_a (2 + r_a)) s_13^2 - 2(1 - r_a) r_a r_Δ s_12^2 ] +2 s_13^2[ cos ( δ_ CP - ϕ_μ e -(1-r_a) Δ L )- cos (δ_ CP-ϕ_μ e) ] ]   ,andP_e e^ NSI= [P_e e^ NSI]^(0) + [P_e e^ NSI]^(1/2) + [P_e e^ NSI]^(1) +[P_e e^ NSI]^(3/2)+ ⋯[P_e e^ NSI]^(0) =1   ,[P_e e^ NSI]^(1/2) = 4p_e|ϵ_eμ|s_13s_23/1-r_asin((1-r_a)Δ L/2)sin( δ_ CP+ϕ_eμ + (1-r_a)Δ L/2)   ,[P_e e^ NSI]^(1) =-4 (1-p_e^2|ϵ_e μ|^2s_23^2) s^2_13/ (1 - r_a)^2 sin^2((1-r_a)Δ L/2) -4p_e|ϵ_eμ|s_12c_12c_23r_Δ/r_asin(r_aΔ L/2)sin(r_aΔ L/2 -ϕ_eμ)  , [P_e e^ NSI]^(3/2) = 8 J_r p_e^2 |ϵ_eμ|^2r_Δ/ (1-r_a)r_acos( Δ L/2+δ_ CP) sin(r_aΔ L /2) sin((1-r_a) Δ L/2)+ p_e |ϵ_eμ| s_23s_13/(1-r_a)^3[ 2(1-r_a)Δ L sin ( δ_ CP + ϕ_eμ + (1-r_a)Δ L) ( 2r_a s_13^2- (1-r_a)r_Δ s_12^2 ) + [ - cos ( δ_ CP+ϕ_eμ )+ cos ( δ_ CP+ϕ_eμ+(1-r_a)Δ L) ] [ (3+r_a(2+r_a))s_13^2 -2(1-r_a)r_a r_Δ s_12^2 ]+ 2 s_13^2[ cos ( δ_ CP + ϕ_eμ - (1-r_a) Δ L )- cos ( δ_ CP + ϕ_e μ ) ]]   ,whereΔ = Δ m_31^2/2E_ν,L is the distance between the source and detector,r_a = a/Δwith a=√(2)G_F N_e being the matter potential and the Jarskolg factor <cit.> J_r = c_12s_12c_23s_23s_13 and the cosine and sine of the mixing angles are given in shorthand notation s_ij=sinθ_ij and c_ij=cosθ_ij. 
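The order-by-order pieces listed here can be transcribed directly. Below is a sketch of the ν_μ→ν_e probability truncated at 𝒪(1) for the ϵ_μe scenario; the 𝒪(3/2) terms and the ϵ_eμ survival case follow the same pattern. Function names, the uniform-matter treatment and the unit conversions are ours, while the expressions mirror the expansion above.

```python
import numpy as np

HBARC_EV_M = 1.9732698e-7  # hbar*c in eV*m

def p_mue_nsi_truncated(E_GeV, L_km, th12, th13, th23, dcp, dm21, dm31,
                        rho_gcc, eps_abs, phi_mue, p_mu=27.0, Ye=0.5):
    """[P]^(0) + [P]^(1/2) + [P]^(1) of P(nu_mu -> nu_e) with NSI at the source."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13 = np.sin(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    Delta = dm31 / (2.0 * E_GeV * 1e9)           # eV
    a = 7.63e-14 * rho_gcc * Ye                  # sqrt(2) G_F N_e in eV
    r_a, r_D = a / Delta, dm21 / dm31
    DL = Delta * L_km * 1e3 / HBARC_EV_M         # dimensionless Delta * L
    x = (1.0 - r_a) * DL / 2.0
    pmu_eps = p_mu * eps_abs
    P0 = pmu_eps**2
    P12 = 4 * pmu_eps * s13 * s23 / (1 - r_a) * np.sin(x) * np.sin(dcp - phi_mue + x)
    P1 = (4 * (s23**2 - pmu_eps**2) * s13**2 / (1 - r_a)**2 * np.sin(x)**2
          - 4 * pmu_eps * s12 * c12 * c23 * r_D / r_a
          * np.sin(r_a * DL / 2) * np.sin(r_a * DL / 2 + phi_mue))
    return P0 + P12 + P1
```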
The oscillation probability for antineutrinos is obtained by performing the replacements δ_CP → -δ_CP, ϕ_μ e → -ϕ_μ e, ϕ_e μ → -ϕ_e μ and a → -a. We highlight a few points about the NSI probabilities:
* The transformation given by Eq. (<ref>) between the probabilities at each expansion order k in Eq. (<ref>) is summarized as [P_ee^NSI]^(k) = p_e^2 |ϵ_eμ|^2 [P_μ e^NSI(p_μ |ϵ_μ e| → 1/p_e |ϵ_eμ|; ϕ_μ e → -ϕ_e μ)]^(k);
* In the usual neutrino oscillation probability, the CP-violating phase δ_CP first appears at order [P_μ e^OSC]^(3/2) of the perturbative expansion. In the NSI scenario, a CP-violating term already appears in [P_α e^NSI]^(1/2) for α = e, μ, as can be seen in Eq. (<ref>) and Eq. (<ref>);
* The lowest-order terms with CP-violating effects are governed by the phase combinations δ_CP - ϕ_μ e and δ_CP + ϕ_e μ, for Eq. (<ref>) and Eq. (<ref>) respectively. This behavior is apparent in Figure 4 of the main paper, where the allowed region follows this dependence;
* The survival probability in the NSI scenario depends on the CP phase of the PMNS matrix, δ_CP, whereas in the standard neutrino oscillation scenario the survival probability is independent of this parameter.
§ COMPARISON BETWEEN ANALYTICAL AND EXACT FORMULA
In order to ensure the validity of the perturbative formulas derived in the article, we cross-checked the analytical expressions against numerical results obtained with GLoBES <cit.>. In the case of non-zero ϵ_μ e, the ν_μ→ν_e transition is given by Eqs. (<ref>)-(<ref>). We define the error as the ratio of the difference between the numerical and analytical conversion probabilities to their average value, Error [%] = 100 · |(P_μ e^Num - P_μ e^Ana) / ((P_μ e^Num + P_μ e^Ana)/2)|, where P_μ e^Num and P_μ e^Ana correspond to the neutrino conversion probability in the numerical and analytical cases, respectively. In Supplemental Figure <ref>, we show the relative error as given by Eq. (<ref>) for the neutrino conversion rate in the energy ranges of the NOνA and T2K experiments for the case of normal ordering (NO) and with NSI. As shown in Eq. (<ref>), the conversion probability can be decomposed into a series of terms, P_μ e^NSI ≡ ∑_n [P_μ e^NSI]^(n) with n truncated at the corresponding order 𝒪(n), which are plotted as different lines in Supplemental Figure <ref>. Note that the expansion truncated at order 𝒪(2) presents relative errors of less than 1% at 2.1 GeV for NOνA and 0.6 GeV for T2K (denoted by vertical lines in Supplemental Figure <ref>), which are the typical peak energies of the two experiments' spectra. Similar studies were performed for the antineutrino conversion probability, with analogous conclusions.
§ ALLOWED VALUES FOR THE PARAMETERS IN THE NSI AT SOURCE SCENARIO
We now show the allowed regions for the moduli of the different NSI parameters. We follow the same methodology described in the section Alleviating the T2K and NOνA tension. By employing GLoBES <cit.>, a χ^2 function was built for the two experiments, NOνA and T2K, as well as for their combination. By marginalizing over the standard oscillation parameters, as well as the phase of the NSI, we obtain the functions χ^2(|ϵ_eμ|) and χ^2(|ϵ_μ e|). As usual, we adopt the function √(Δχ^2) ≡ √(χ^2 - χ_BF^2), where χ_BF^2 is the value of the global minimum of χ^2. The translation to confidence levels (C.L.) is straightforward: x σ C.L. is given by √(Δχ^2) < x. From Supplemental Figure <ref> we obtain at 3σ C.L.
that 0.03 < |ϵ_e μ|/10^-3 < 2.36 for the ϵ_e μ scenario, while at 2σ C.L. we have |ϵ_μ e|/10^-3 < 5 for the ϵ_μ e scenario.
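The cross-check between the truncated expansion and the exact result, using the Error[%] definition from the comparison subsection above, can be reproduced with a few lines of code. In the sketch below, `exact_fn` and `truncated_fn` are placeholders for any exact (e.g. S-matrix or GLoBES) and truncated-series evaluations of the conversion probability as a function of energy; the function names are ours.

```python
import numpy as np

def relative_error_percent(P_num, P_ana):
    """Error[%] = 100 * |(P_num - P_ana) / ((P_num + P_ana)/2)|, as defined above."""
    P_num, P_ana = np.asarray(P_num, float), np.asarray(P_ana, float)
    return 100.0 * np.abs((P_num - P_ana) / (0.5 * (P_num + P_ana)))

def scan_energy(exact_fn, truncated_fn, e_min_GeV, e_max_GeV, n=200):
    """Scan an experiment's energy range and report the worst-case relative error."""
    energies = np.linspace(e_min_GeV, e_max_GeV, n)
    err = relative_error_percent([exact_fn(E) for E in energies],
                                 [truncated_fn(E) for E in energies])
    return energies, err, float(err.max())

# e.g. for NOvA: scan_energy(P_exact, P_order2, 1.0, 5.0); the text reports <1% near 2.1 GeV.
```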
http://arxiv.org/abs/2310.18401v1
{ "authors": [ "Adriano Cherchiglia", "Pedro Pasquini", "O. L. G. Peres", "F. F. Rodrigues", "R. R. Rossi", "E. S. Souza" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20231027180005", "title": "Alleviating the present tension between T2K and NO$ν$A with neutrino New Physics at source" }
Constraining the growth rate on linear scales by combining SKAO and DESI surveys Muhammad Bilal1^,2^,* Dinis Martinho3 Reiner Sim4 Adnan Qayyum5^,6 Hunaid Vohra7 Massimo Caputo7 Taofeek Akinosho2 Sofiat Abioye2 Zaheer Khan2 Waleed Niaz2 Junaid Qadir5 January 14, 2024 ===============================================================================================================================================================================Sentiment analysis is a well-established natural language processing task, with sentiment polarity classification being one of its most popular and representative tasks. However, despite the success of pre-trained language models in this area, they often fall short of capturing the broader complexities of sentiment analysis. To address this issue, we propose a new task called Sentiment and Opinion Understanding of Language (SOUL). SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG).RC seeks to validate statements that focus on subjective information based on a review text, while JG requires models to provide explanations for their sentiment predictions. To enable comprehensive evaluation, we annotate a new dataset comprising 15,028 statements from 3,638 reviews. Experimental results indicate that SOUL is a challenging task for both small and large language models, with a performance gap of up to 27% when compared to human performance. Furthermore, evaluations conducted with both human experts and GPT-4 highlight the limitations of the small language model in generating reasoning-based justifications. These findings underscore the challenging nature of the SOUL task for existing models, emphasizing the need for further advancements in sentiment analysis to address its complexities. The new dataset and code are available at <https://github.com/DAMO-NLP-SG/SOUL>. § INTRODUCTION Sentiment analysis, a well-established natural language processing task, aims to analyze and understand subjective information from text <cit.>. One of its most popular and representative tasks is sentiment classification (SC), which involves classifying a given text like customer review to a pre-defined sentiment label, such as positive, negative, or neutral <cit.>. With the advent of pre-trained language models, especially the recent large language models (LLMs), remarkable performance has been achieved on SC which sometimes even surpasses human performance <cit.>. This leads to a common belief that SC, and sentiment analysis in general, has reached its saturation.However, SC is not equivalent to the broader field of sentiment analysis as it does not require a deep understanding of the underlying sentiments and opinions expressed in the text. To determine the overall sentiment orientation, a model can simply rely on superficial textual features, such as the presence of specific words or phrases indicating positivity or negativity <cit.>. Therefore, even if a model demonstrates satisfactory performance in sentiment classification, it may not fully capture the subtle nuances of sentiment in languages, such as mixed sentiments towards different aspects, motivation of the expressed opinions, and possible outcomes of such sentiments, etc. 
In order to assess whether a model can truly comprehend the sentiment and accurately interpret intricate emotions, it is essential to adopt a more comprehensive approach that extends beyond merely predicting the polarity of sentiment.To this end, we introduce a new sentiment analysis task, namely Sentiment and Opinion Understanding of Language (SOUL). Our inspiration comes from reading comprehension tasks, which assess human understanding of a passage by asking to judge the validity of a statement. Similarly, we adopt the form of verifying comprehension statements regarding an opinionated review text. We also generate justifications for such predictions as a means of testing the sentiment understanding capability of models. As shown in Figure <ref>, given a review text, as well as statements that focus on subjective information discussed in the review, SOUL features two novel subtasks: Review Comprehension (RC) and Justification Generation (JG).Specifically, the RC task aims to determine if the given statement is , , orbased on the review, answering the question of what the sentiment is.While this task still involves a classification format, it can cover a broad range of sentiment phenomena with the flexibility to create statements focusing on diverse subjective aspects of the text. This flexibility breaks the restriction of SC purely focusing on sentiment polarity and allows for the introduction of more complex sentiment problems.In Figure <ref>, the reviewer's sentiment towards the raptor graphics lacks specific reasons, making it difficult for a simple pattern matching model to accurately predict the first statement aswithout contextual understanding. The second statement in Figure <ref> also presents a challenge for models in detecting sarcasm. The JG task, on the other hand, seeks to provide an explanation for the rationale behind the model's interpretation of sentiment, answering the question of why the sentiment is as predicted.By generating justifications for its predicted label, the model is forced to consider the context and nuances of the input text, rather than relying solely on superficial features such as individual words or phrases.For example, the second justification in Figure <ref> explains why the statement isand identifies the sarcastic meaning conveyed by the reviews. To facilitate such an investigation, we carefully annotate a new dataset based on common review corpora. In total, it consists of 15,028 statements across 3,638 reviews. Each statement is also annotated with a label and the corresponding justification. We extensively benchmark SOUL with both small language models (SLMs) trained with the complete training set and also LLMs under the zero-shot setting. Our experimental results indicate that SOUL is a challenging task that demands a deep understanding of sentiment, with a performance gap of up to 27% when compared to human performance. In addition, based on comprehensive evaluations conducted by both human experts and the GPT-4 model, it has been observed that SLMs have demonstrated proficiency in validating statements but struggle with generating reasoning-based justifications, indicating significant potential for enhancement in their comprehension of sentiment.In comparison, ChatGPT's strength lies in producing well-reasoned justifications, showcasing its powerful sentiment-understanding ability. However, there is still room for improvement regarding the overall accuracy, originality, and conciseness of ChatGPT's responses. 
Overall, we believe SOUL will advance sentiment analysis and encourage the creation of models capable of understanding sentiments at a human-like proficiency.§ SOUL §.§ Task FormulationLet t be an opinionated text item (e.g., a product review); s be a textual statement about the subjective information in the text; l ∈{, , } be the label of s; j be the justification for l; f be a model. Review Comprehension The objective of RC is to determine the validity l of the statement s in relation to review t.This involves classifying the statement s as either , , or :f(t, s) → lTo accomplish this task effectively, a model must fully comprehend the subjective information presented in both the review and the statement, and subsequently judge the validity. Justification Generation JG aims to generate predictions l and justifications j jointly: f(t, s) → l, jThe purpose is to enable the model to generate a justification that explains its predicted label, thereby helping us to examine whether the model has truly understood the sentiment. §.§ Dataset Construction Data Collection We utilize review texts from two corpora: Yelp <cit.> and IMDb <cit.>. The Yelp dataset is a collection of business reviews from the Yelp website, while the IMDb corpus consists of movie and TV show reviews from the IMDb website. These two datasets cover various review types and are widely used in existing sentiment analysis research, e.g., classifying the sentiment polarity of a given review. Therefore, we also take them as our data source for constructing subjective statements.Statement and Justification AnnotationFirstly, we instruct annotators to propose several statements focusing on various subjective information given a review. To achieve this goal, we request annotators to focus on multiple crucial sentiment elements, including the sentiment of opinion, sentiment target, opinion holder, the reason for the opinion, customer intent, etc <cit.>.Annotators are instructed to delve beyond the surface-level content and generate more challenging statements that require a deeper level of sentiment and opinion understanding ability.For instance, simply describing the user does not like the product is discouraged, but statements focusing on mixed sentiments towards various aspects, or the underlying reasons behind opinions are encouraged. In the meantime, the label of each statement is annotated. Unlike traditional natural language inference (NLI) tasks, the primary objective of statement annotation is to extract and label subjective information rather than establish logical connections or entailment between different texts. Besides, we request annotators to provide justifications for their proposed statements.These justifications provide the rationale behind the statement categorization.By treating them as the target in the JG task, we can gain valuable insight into the model's prediction processes and verify whether the model possesses real sentiment understanding ability.Data Validation and ProcessingAfter the initial construction phase, a separate group of annotators classifies each proposed statement without access to the original labels, aiming to evaluate the quality of the constructed statements. In cases of conflicting classifications, an expert annotator is consulted to resolve the discrepancies and assign a final label. In addition, annotators are instructed to categorize statements as simple, medium, or hard to determine their difficulty level. 
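The RC mapping f(t, s) → l formalized above is naturally cast as a text-to-text problem for the sequence-to-sequence models evaluated later. The sketch below shows one possible zero-shot encoding of an RC instance; the prompt template and label strings are our own illustration, not the exact format used for the paper's models, and fine-tuning the SLMs on the training split would simply wrap the same encoding in a standard seq2seq training loop.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical prompt template for a single RC instance.
TEMPLATE = ("Review: {review}\n"
            "Statement: {statement}\n"
            "Based on the review, is the statement true, false, or not-given?")

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

def classify_statement(review: str, statement: str) -> str:
    inputs = tokenizer(TEMPLATE.format(review=review, statement=statement),
                       return_tensors="pt", truncation=True, max_length=1024)
    output = model.generate(**inputs, max_new_tokens=8)
    return tokenizer.decode(output[0], skip_special_tokens=True).strip().lower()
```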
Reviews containing only simple statements are excluded to maintain an appropriate level of challenge. Dataset Statistics The SOUL dataset comprises 15,028 statements related to 3,638 reviews, resulting in an average of 4.13 statements per review.To create training, development, and test sets, we split the reviews in a ratio of 6:1:3, respectively. Detailed statistics can be found in Table <ref>. § EXPERIMENTS §.§ Setup Models We benchmark SOUL with several widely used Small Language Models with the complete training set, including Roberta <cit.>, T5 <cit.>, and Flan-T5 <cit.>.We adopt theversion for each model type. In addition, we extend our analysis to two representative LLMs from the Flan and GPT model families, namely Flan-T5_XXL (13B) <cit.> and ChatGPT[We conducted the experiments using the May 24th version of ChatGPT.], respectively. We evaluate these LLMs under a zero-shot setting.To reduce variance, we report the average results with three random seeds. The detailed setup can be found in Appendix <ref>.Evaluation Metrics For the RC task, we report f1 scores for each class and the overall accuracy. For the JG task, we use different evaluation metrics for predictions l and justifications j. We measure statement predictions l using overall accuracy. For justifications j, we employ commonly used text generation metrics, including BLEU <cit.>, ROUGE(1/2/L) <cit.>, and BERTScore <cit.> to calculate their similarity with the annotated justifications. §.§ Main Results Review Comprehension The results of the RC task are presented in Table <ref>.We can make the following observations: 1) All models exhibit limited sentiment ability, resulting in a performance gap of 17% to 27% compared to human performance. This shows the difficulty of the RC task, and there is still much room for improvement in developing models that can accurately comprehend sentiment and opinion. The challenges may arise from the complexity and diversity of statements that incorporate mixed sentiments, underlying reasons of opinions, and other aspects. 2) Among SLMs, Flan-T5 achieves the best performance, surpassing T5 with the same model size by 1.41%, possibly due to the effectiveness of instruction tuning during its training process. 3) LLMs demonstrate effective zero-shot ability, with Flan-T5_XXL achieving the best results even without any training data. In particular, ChatGPT appears to have difficulty with theclass, due to its overconfidence to misclassify theclass as . This failure shows the challenges posed by SOUL and emphasizes that a large model size alone is not sufficient to ensure comprehensive sentiment capabilities. Justification Generation We exclude Roberta from the JG task as it is a discriminative model and not well-suited for text generation tasks. The results for the JG task are presented in Table <ref>. We report commonly used text generation metrics as similarity evaluation and overall accuracy for reference. When it comes to accuracy, it appears that SLMs and Flan-T5_XXL show either minimal improvement or even a negative impact.Instead, ChatGPT stands out with a notable improvement of approximately 6% in validating subjective statements, The incorporation of justifications likely facilitated ChatGPT a more thorough comprehension of the sentiment conveyed, thereby enhancing its performance.However, this may require a strong reasoning ability, which is not observed in these SLMs. Therefore, attempting to perform justification generation could result in a decrease in accuracy performance. 
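The metrics referenced in these result tables (accuracy and per-class F1 for RC; ROUGE and BERTScore for JG) can be computed with standard libraries. A hedged sketch follows, assuming the Hugging Face `evaluate` package and scikit-learn are available; BLEU can be added analogously via `evaluate.load("sacrebleu")`.

```python
import evaluate
from sklearn.metrics import accuracy_score, f1_score

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

def score_rc(y_true, y_pred, labels=("true", "false", "not-given")):
    """Overall accuracy and per-class F1 for the RC task."""
    per_class = f1_score(y_true, y_pred, labels=list(labels), average=None)
    return {"accuracy": accuracy_score(y_true, y_pred),
            "f1": dict(zip(labels, per_class))}

def score_jg(generated, references):
    """Similarity metrics for generated justifications against annotated ones."""
    r = rouge.compute(predictions=generated, references=references)
    b = bertscore.compute(predictions=generated, references=references, lang="en")
    return {"rouge1": r["rouge1"], "rouge2": r["rouge2"], "rougeL": r["rougeL"],
            "bertscore_f1": sum(b["f1"]) / len(b["f1"])}
```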
Regarding similarity evaluation, it can be inferred that Flan-T5 is capable of generating justifications that closely resemble the annotated justifications,whereas Flan-T5_XXL exhibits the weakest performance in this respect. Nevertheless, the results obtained from the similarity evaluation contradict the overall accuracy, indicating a need for a more robust evaluation method.§.§ Comprehensive EvaluationThere is a variation in accuracy between these two tasks, and there are conflicting evaluations of accuracy and similarity within the JG task. To perform a thorough analysis, we aim to assess the generated justifications using the following criteria, rated on a scale of 1 (poor) to 3 (excellent): 1) Correct: whether it is sensible and logical when compared to the gold label; 2) Align: whether it aligns with its generated label; 3) Relevant: whether it is relevant to the statement; 4) Concise: whether it is brief and concise; 5) Original: whether it demonstrates innovation and uniqueness. We sample 50 instances and utilize both human evaluators and the GPT-4 model <cit.> for assessment. See Appendix <ref> for the GPT-4 evaluation prompt.The evaluation results are shown in Figure <ref>. We can see that while SLMs and Flan-T5_XXL have satisfactory performance in the RC task, their justifications in the JG task lack originality, which means that they often rely on copied reviews without providing additional insights. This prediction process, without proper reasoning, potentially reduces its overall accuracy and creates inconsistencies between the two tasks. Conversely, ChatGPT exhibits promising performance across various criteria, indicating its robust sentiment understanding capability. Nevertheless, there is still room for improvement in terms of overall accuracy, as well as enhancing originality and conciseness in the JG task. We include examples of justifications generated by these models in Appendix <ref> for detailed illustration. Moreover, the high agreement between human evaluators and GPT-4 suggests that automated evaluation using GPT-4 is a more viable approach than similarity evaluation.§.§ Comparison with NLIFurthermore, we conduct an inference on SOUL test set using a widely used NLI model, namely the NLI-RoBERTa model[<https://huggingface.co/cross-encoder/nli-roberta-base>], trained on the SNLI <cit.> and MNLI <cit.> datasets, to demonstrate the focus of SOUL on subjective information rather than logical connections. As presented in Table <ref>, the NLI-RoBERTa model achieves an accuracy of only 55.02%, which is significantly lower compared to the RoBERTa model trained on the SOUL dataset. This outcome emphasizes the distinction between the objectives of SOUL and traditional NLI tasks. While they may share some similarities, the primary goal of SOUL is to extract and label subjective information, rather than establishing logical connections or entailment between different texts. § CONCLUSIONThis paper introduces a novel task called Sentiment and Opinion Understanding of Language (SOUL), including two subtasks: review comprehension and justification generation.Our experimental results show that SOUL is a challenging task that demands a deep understanding of sentiment, with a performance gap of up to 27% when compared to human performance. Moreover, evaluations conducted with both human experts and GPT-4 demonstrate the weakness of SLMs in generating reasoning-based justifications while showcasing ChatGPT's powerful sentiment understanding ability. 
Nevertheless, there is still scope for enhancing the overall accuracy, originality, and conciseness of ChatGPT's responses.
§ LIMITATION
The newly proposed SOUL dataset utilizes customer reviews as the main source for constructing subjective statements. However, incorporating more opinionated texts, such as social media posts and dialogues, could potentially enable the assessment of models on a wider variety of text types. Also, SOUL currently features two tasks, review comprehension and justification generation, to evaluate a model's sentiment understanding abilities. More task formats can be designed to more comprehensively characterize the model's capabilities and limitations.
§ ACKNOWLEDGEMENTS
Y. Deng is supported by Alibaba Group through the Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore. Sinno J. Pan acknowledges support from the HK Global STEM Professorship and the JC STEM Lab of Machine Learning and Symbolic Reasoning.
§ APPENDIX
§.§ Detailed Setup
We perform a grid search on the development set to find the best hyper-parameters for fine-tuning SLMs. Specifically, we search the learning rate among {1e-6, 5e-6, 1e-5, 5e-5, 1e-4}, the batch size among {2, 4, 8}, and the number of epochs among {4, 8}. For LLMs, we utilize their APIs to perform zero-shot inference.
§.§ GPT-4 Prompt for Evaluation
We adopt the following prompt to evaluate the justifications generated by the different models:
§.§ Case Study
Table <ref> presents examples of justifications generated by various models. In this particular sample, all models except Flan-T5_XXL made the correct prediction. However, when it comes to justifications, both T5 and Flan-T5 simply copied text from the review without any reasoning. On the other hand, ChatGPT demonstrated a strong ability to understand sentiment by providing reasonable justifications based on the original review text, which led to the correct prediction.
http://arxiv.org/abs/2310.17924v1
{ "authors": [ "Yue Deng", "Wenxuan Zhang", "Sinno Jialin Pan", "Lidong Bing" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231027064848", "title": "SOUL: Towards Sentiment and Opinion Understanding of Language" }
0000-0002-7825-1526]Indrani Pal Indian Institute of Astrophysics, Bangalore, 560034, Karnataka, India Pondicherry University, R.V. Nagar, Kalapet, 605014, Puducherry, India Department of Physical Sciences, Indian Institute of Science Education And Research Kolkata, Mohanpur, Nadia-741246, West Bengal, India Indian Institute of Astrophysics, Bangalore, 560034, Karnataka, India Department of Physics, Faculty of Natural Sciences, University of Haifa, Mount Carmel, Haifa 3498838, Israel Cochin University of Science and Technology, South Kalamassery, Kochi, Kerala, 682022, India Indian Institute of Astrophysics, Bangalore, 560034, Karnataka, India Instituto de Estudios Astrofísicos, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Avenida Ejercito Libertador 441, Santiago, Chile Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, China0000-0001-5544-0749]S. Marchesi Dipartimento di Fisica e Astronomia (DIFA), Università di Bologna, via Gobetti 93/2, I-40129 Bologna, Italy Department of Physics and Astronomy, Clemson University, Kinard Lab of Physics, Clemson, SC 29634, USA INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Piero Gobetti, 93/3, 40129, Bologna, Italy We carried out a uniform and systematic analysis of a sample of 112 nearby bright Seyfert 1 type AGN, the observations of which were carried out by the Nuclear Spectroscopic Telescope Array (NuSTAR) between August 2013 and May 2022. The main goal of this analysis is to investigate the nature of the X-ray corona in Seyfert 1 galaxies. From the physical model that fits the NuSTARspectra, we could constrain the high energy cut-off (E_cut) for 73 sources in our sample. For those 73 sources, we fitted the Comptonization model to estimate the temperature (kT_e) of their corona. kT_ecould be constrainedin 42 sources. We investigated for possible correlations between various properties of the corona obtained from physical model fits to the observed spectra and between various coronal parameters and physical properties of the sources such as Eddington ratio and black hole mass. We found (a) a strong correlation between E_cut and the photon index and (b) a significant negative correlation between kT_e and the optical depth.§ INTRODUCTIONMost massive galaxies host supermassive black holes (SMBHs) at their centres with masses (M_BH) of the order of 10^5 to 10^10 M_⊙. These SMBHs power active galactic nuclei (AGN) by accretion of matter from their surroundings <cit.>. The observed optical, ultra-violet (UV) radiation from these accretion-powered systems is believed to be thermal emission from the standard optically thick, geometrically thin accretion disk <cit.> that surrounds the SMBHs.These AGN are also sources of intense X-ray emission <cit.>. The X-ray emission in the radio-quiet category ofAGN is believed to originate from a compact region that contains hot electrons (T_e ∼ 10^8-9 K) called the corona situated close to the vicinity of the SMBH. Observations indicate that the corona is physically compact with size scales of the order of 3 - 10 R_G <cit.>, where R_G is the gravitational radius defined as R_G = GM_BH/c^2, here, G is the gravitational constant and c is the speed of light.The hot electrons in the corona, inverse Compton scatter the optical or UV thermal photons from the geometrically thin, optically thick accretion disk, thereby producing X-ray emission <cit.>. 
The emergent X-ray spectrum follows a power law of the form N(E) ∝ E^-Γ, whereΓ is the power law photon index with a high energy cutoff (E_cut; ). In this paradigm, expecting a connection between the accretion disk and the X-ray-emitting corona is natural. One piece of observational evidence for this accretion disk corona connection is the observed positive correlation <cit.> between Γ and the mass-normalized accretion rate usually represented by the Eddington ratio (λ_Edd = L_Bol/L_Edd). Here, L_Bol is the bolometric luminosity and L_Edd is the Eddington luminosity defined as L_Edd = 1.3 × 10^38 M_BH/M_⊙ erg s^-1. A possible explanation for this observed correlation is that at a higher λ_Edd the increased optical, UV photons from the accretion disk can lead to a more effective cooling of the corona, thereby leading to a decrease in the temperature of the corona (kT_e) and larger Γ or softening of the X-ray spectrum. Recently, <cit.> proposed that another explanation for this is the pair thermostat, due to the changes in temperature across the compactness-temperature (l-θ) plane, and they could successfully reproduce the slope of the Γ-λ_Edd correlation.According to Comptonization models, for a corona with slab geometry, E_cut is related to the temperature of the corona as E_cut = 2-3 kT_e for optically thin and thick plasma respectively <cit.>. However, according to <cit.>, the relation betweenE_cut and kT_e cannot be simple in the case of the non-static corona. Also, <cit.> have shown that the relation of E_cut = 2-3 kT_e is only valid for low values of kT_e and τ. Recently, for the source MR 2251-178, <cit.> found E_cut = 4.84 ± 0.11 kT_e, which deviates from the generally considered relation betweenE_cut and kT_e <cit.>. Also, Γ is expected to depend on various parameters of the corona, such as its temperature kT_e, the optical depth (τ) as well as the seed photon temperature.To understand the properties of AGN, it is important to have better constraints on the corona of AGN that characterise the X-ray emission, such as Γ and kT_e.Earlier studies on the determination of the temperature of the corona in Seyfert galaxies used data from high-energy instruments such as theCGRO <cit.>, BeppoSAX <cit.>, INTEGRAL <cit.>, Swift-BAT <cit.> and Suzaku <cit.>. These studies have found that in Seyfert galaxies, the coronal temperature shows a wide range, with the values of E_cut ranging from 50-500 keV. These less sensitive observations were, however, limited to nearby bright Seyfert galaxies.Increased interest in studieson the hard X-ray spectra of AGN, as well as the determination of its coronal temperature, happened after the launch of the Nuclear Spectroscopic Telescope Array (NuSTAR; ) in the year 2012, due to its wide spectral coverage of 3-79 keV and its high sensitivity beyond 10 keV. Since its launch, values of the temperature of the corona are known for many AGN, but most of those studies are restricted to the determination of E_cut. Also, data from NuSTAR have led to the finding of the variation inkT_e <cit.> as well as E_cut <cit.>. In recent years, there have been a few studies on characterising the temperature of the corona (E_cut or kT_e) in samples of AGN <cit.>. Most of these studies were focussed on the determination of E_cut from phenomenological model fits to the observed X-ray spectra. 
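The basic quantities used throughout the rest of the paper follow directly from the definitions given in this introduction (L_Edd = 1.3 × 10^38 M_BH/M_⊙ erg s^-1, λ_Edd = L_Bol/L_Edd and R_G = GM_BH/c^2). For reference, a small sketch is given below; the example values are purely illustrative.

```python
def eddington_luminosity(m_bh_msun):
    """L_Edd = 1.3e38 (M_BH / M_sun) erg/s, as adopted in the text."""
    return 1.3e38 * m_bh_msun

def eddington_ratio(l_bol_erg_s, m_bh_msun):
    """lambda_Edd = L_bol / L_Edd."""
    return l_bol_erg_s / eddington_luminosity(m_bh_msun)

def gravitational_radius_cm(m_bh_msun):
    """R_G = G M_BH / c^2 in cm (cgs constants)."""
    G, c, msun = 6.674e-8, 2.998e10, 1.989e33
    return G * m_bh_msun * msun / c**2

# e.g. a 10^8 M_sun black hole radiating at 10^45 erg/s:
# eddington_ratio(1e45, 1e8) ~ 0.08, and 10 R_G ~ 1.5e14 cm
```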
Though E_cut can serve as a good proxy for kT_e, the recent findings of deviation from the E_cut = 2-3 kT_e in few sources, have necessitated the determination of kT_e in AGN based on physical model fits to the observed X-ray spectra. In this work, we carried out an analysis of 112 Seyfert 1 type AGN to determine E_cut based on physical model fits to the NuSTAR data. Of these 112 sources, we could constrain E_cut in 73 sources. Further, physical model fits were carried out on these 73 sources to constrain kT_e. We could put constraints on kT_e in 42 sources. We investigated the correlation between different physical parameters obtained from the physical model fits. The selection of our sample of sources and data reduction are given in Section 2. We describe in Section 3 the model fits carried out on the data; the results are given in Sections 4 and 5, a comparison of our findings on E_cut and kT_e with those found from the literature are given in Section 6, followed by the Discussion and the Summary in the final two sections. In this work we adopted the cosmological parameters of H_0 = 70 km sec^-1 Mpc-1, Ω_M = 0.3 and Ω_λ = 0.7. All the quoted uncertainties in the derived parameters were calculated at the 90 per cent confidence level. § SAMPLE SELECTION AND DATA REDUCTION §.§ Sample selectionOur sample of sources for this study was selected from theNuSTAR Master Catalog[https://heasarc.gsfc.nasa.gov/W3Browse/nustar/numaster.htmlhttps://heasarc.gsfc.nasa.gov/W3Browse/nustar/numaster.html]. From this catalogue,we looked into the publicly available data for Seyfert galaxies between August 2013 and May 2022. We found a total of 850 Seyfert galaxies. We selected only Seyfert 1 galaxies, with a classification of Sy1-Sy1.9 following the Osterbrock classification system <cit.>, with a net count rate greater than0.1 counts/sec in the 3-79 keV band to have a sufficientlygood signal-to-noise ratio spectrum for model fitting.Adopting the above-mentioned criteria, we arrived at afinal sample of 130 Seyfert 1 galaxies spanning the redshift interval of 0.002 < z <0.692. Of these 130 sources, around 90 per cent of the sources were studied in <cit.>. Based on the value of the line of sight column densities (N_H) required in the absorption power-law fit, 18 sources were classified as obscured AGN (10^22 ≤ (N_H cm^-2)< 10^24) in <cit.>. Therefore, for the remaining 10 per cent of the sample, we carried out a spectral fit with absorbed power law to determine the hydrogen column densities and found N_H < 10^22 atoms cm^-2 in all of them. Thus, for this study, we selected the 112 unobscured nearby AGNs with a median redshift of 0.035.We show in Fig. <ref> the redshift distribution for our sample of sources. The redshifts are taken from SIMBAD [http://simbad.cds.unistra.fr/simbad/http://simbad.cds.unistra.fr/simbad/]. The full list of the Seyfert 1 galaxies and their NuSTAR observational details are given in Table <ref>. §.§ Data reductionFor the 112 sources, we carried out the reduction of the raw event data taken from the HEASARC archive [https://heasarc.gsfc.nasa.gov/db- perl/W3Browse/w3browse.plhttps://heasarc.gsfc.nasa.gov/db- perl/W3Browse/w3browse.pl], using the standard NuSTAR data reduction software NuSTARDAS [https://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar swguide.pdfhttps://heasarc.gsfc.nasa.gov/docs/nustar/analysis/nustar swguide.pdf] v1.9.3 distributed by HEASARC within HEASoft v6.26.1. 
We generated the calibrated and cleaned event files using the nupipeline task and the instrument responses taken from the NuSTAR calibration database (CALDB release 20190607). To exclude the periods of elevated background, we selected the filtering options SAACALC=2, SAAMODE=OPTIMIZED and TENTACLE=YES to consider the passage of the satellite through the South Atlantic Anomaly (SAA). The source regions for the 112 Seyferts were extracted using circular radii between 30” - 70”, depending on the source. Similarly, we selected the same circular area on the same chip to extract the background counts. All the science products, including energy spectra, response matrix files (RMFs) and auxiliary response files (ARFs), were generated using the task nuproducts for both the focal plane modules FPMA and FPMB. For spectral analysis, we fitted the background subtracted spectra from FPMA and FPMB simultaneously using XSPEC version 12.10.1 <cit.>, allowing the cross normalization factor to vary freely during spectral fits. The spectra were binned to have minimum counts of 20 per spectral energy bin. To get an estimate of the model parameters that best describe the observed data, we used the chi-square (χ^2) statistics, and for calculating the errors in the model parameters, we used the χ^2 = 2.71 criterion, which is equivalent to the 90 per cent confidence range in XSPEC.§ SPECTRAL ANALYSISWe carried out a detailed spectral analysis of the NuSTAR data in the energy range of 3-79 keV for the 112 sources, a few of which also have soft X-ray observations. Since these are unobscured Seyfert 1 galaxies, we do not have degeneracies between N_H and continuum parameters that have been found in obscured Seyfert 2 galaxies <cit.>. Therefore, we chose to fit the NuSTAR data alone, and we do not expect our results to be significantly affected by the lack of information at energies < 3 keV. In the past, too, a similar approach has been followed in several studies <cit.> aimed at characterising the corona. For the completeness of our study, we compared our findings with those found in the literature where E_cut/kT_e were obtained with and without the soft X-ray coverage (see Table <ref>).We used the following two models * Model-1: const × TBabs × zTBabs × (xillver/relxill/(relxill+xillver)) * Model-2: const × TBabs × zTBabs × (xillverCP/relxillCP/(relxillCP+xillverCP))In both the models,const represents thecalibration constant between the NuSTAR focal plane modules, FPMA and FPMB.TBabs was used to model the Milky Way Galactic hydrogen column density,which was taken from <cit.> for each source. The component zTBabs represents the hydrogen column density (N_H^INT) of thehost galaxy. During the modelling of the source spectrum, the value ofN_H^INT was allowed to vary freely.xillver/relxill <cit.> was used to model the spectra with an absorbed cut-off power law along with the reflection features present in it. In XSPEC Model-1 took the following forms, * Model-1a: const × TBabs × zTBabs × (xillver)* Model-1b: const × TBabs × zTBabs × (relxill)* Model-1c: const × TBabs × zTBabs × (relxill+xillver) During the fit using Model-1a, the parameters that were kept free were Γ, E_cut, R and the normalization (N_xillver) of the xillver component. The reflector was considered neutral; therefore, we fixed the ionization parameter (logξ) to 0.0. 
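The joint FPMA/FPMB fits described in this section can be scripted with PyXspec. The sketch below sets up Model-1a with the parameter treatment described here and continued below (free cross-calibration constant, neutral reflector, frozen Galactic column, inclination fixed to 30 degrees); it assumes the relxill/xillver table models are installed locally, and the file names, Galactic N_H, redshift and the parameter index passed to the error command are placeholders rather than values from this work.

```python
from xspec import AllData, AllModels, Fit, Model

# FPMA and FPMB spectra go into two data groups so that all parameters are tied
# except the cross-normalisation constant.
AllData("1:1 nu_FPMA_sr.pha 2:2 nu_FPMB_sr.pha")
AllData.ignore("**-3.0 79.0-**")              # restrict the fit to 3-79 keV

m1 = Model("constant*TBabs*zTBabs*xillver")   # Model-1a
m1.constant.factor = 1.0
m1.constant.factor.frozen = True              # reference instrument (FPMA)
m1.TBabs.nH = 0.04                            # Galactic column (10^22 cm^-2), frozen
m1.TBabs.nH.frozen = True
m1.zTBabs.Redshift = 0.035                    # source redshift, frozen
m1.zTBabs.Redshift.frozen = True
m1.xillver.logxi = 0.0                        # neutral reflector
m1.xillver.logxi.frozen = True
m1.xillver.Afe = 1.0                          # solar iron abundance
m1.xillver.Afe.frozen = True
m1.xillver.Incl = 30.0                        # inclination fixed to 30 deg
m1.xillver.Incl.frozen = True
# Gamma, Ecut, the reflection fraction, the norm and the intrinsic zTBabs.nH stay free.

m2 = AllModels(2)                             # FPMB copy of the model
m2.constant.factor.link = ""                  # untie and free the calibration constant
m2.constant.factor.frozen = False

Fit.statMethod = "chi"
Fit.query = "yes"
Fit.perform()
Fit.error("2.706 5")                          # 90% confidence (delta chi^2 = 2.71), e.g. on Gamma
```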
The values of AF_e and the inclination angle were fixed to the solar value (=1.0) and 30^∘ respectively.In Model-1b, we replaced xillver with relxill to take care of the relativistic smeared Comptonization spectrum for a few sources. In addition to the parameters described in Model-1a, there are a few more parameters, such as the inner and outer emissivity indices (β1 and β2 respectively), inner and outer radii of the accretion disk (r_in and r_out respectively), break radius (r_br) between r_in and r_out and the spin of the black-hole (a_*). We tied β1 and β2 together during the fit and kept them as free parameters. r_br, r_in and r_out were kept frozen to their default values of 15r_g, 3r_g and 400r_g respectively. We considered a highly spinning SMBH and fixed a_* to 0.998 <cit.>. AF_e was frozen to the solar value. The inclination angle was fixed to 30^∘. The other parameters that were kept free during the fit were Γ, E_cut, R, logξ and the normalization (N_relxill) of the relxill model. The spectra of a few sources could not be well-fitted using either xillver or relxill. In those sources, where significant narrow Fe-Kα emission lines were detected, we used Model-1c, in which we fitted relxill and xillver together. Between these two components Γ, kT_e and AF_e were tied together and kept as free parameters during the fitting. The other parameters were treated similarly as described earlier in Model-1a and Model-1b. We could constrain E_cut for 73 sources from the model fits. The summary of the spectral analysis from this model fits to the spectra given in Table <ref>. Out of 112 sources, we used Model-1a in 86 sources to estimate different coronal parameters. In 20 out of the remaining 26 Seyferts, the presence of a broad emission line was confirmed. To take care of the relativistic broadening of the Fe-Kα line, we fitted the spectra of those sourceswith Model-1b. In the other six sources (ARK 564, MCG-06-30-15, Mrk 1044, Mrk 279, NGC 3783 and NGC 4051), we used a xillver component in addition to relxill (Model-1c), since one model alone could not fit the reflection spectra properly. The distributions of Γ and E_cut as found from the Model-1 fits are given in Fig. <ref>. The median value of Γ as obtained from the analysis using Model-1 was found to be 1.79±0.18, which is consistent with the median values of Γ as found from the broad-band analysis of the unobscured sources by <cit.>. Using only the constrained E_cut, a median of 104±72 keV is obtained. The broad-band spectral fit using Model-1c with the data to model residue for the source Mrk 279 is presented in Fig. <ref>. cccccccccResults of the correlation analysis between different parameters. Provided are the Spearman's rank correlation coefficient (ρ) and the probability (p) for the null hypothesis (no correlation). If p is larger than 0.01, then we fail to reject the null hypothesis. Here, Method I indicates the correlation study between two parameters having only the uncensored values. The correlation analysis between two parameters considering both uncensored (including the corresponding asymmetric errors) and censored values(E_cut^MAX = 1000 keV, R^MIN = 0.01 and kT_e^MAX = 150 keV) is denoted by method II, wherein Method III represents the same as Method II except the E_cut^MAX is considered to be 500 keV here. 
Parameter 1 | Parameter 2 | Method | Full sample (ρ, p) | Moderately accreting, λ_Edd<0.1 (ρ, p) | Highly accreting, λ_Edd>0.1 (ρ, p)
E_cut | λ_Edd | I   |  0.19, 0.12      |  0.41, 0.009 | -0.02, 0.89
E_cut | λ_Edd | II  |  0.03, 0.75      |  0.12, 0.37  | -0.08, 0.59
E_cut | λ_Edd | III |  0.03, 0.73      |  0.09, 0.48  | -0.08, 0.61
R     | λ_Edd | I   | -0.28, 0.01      | -0.30, 0.04  | -0.07, 0.68
R     | λ_Edd | II  | -0.18, 0.07      | -0.21, 0.11  | -0.002, 0.73
R     | Γ     | I   |  0.28, 0.009     | - | -
R     | Γ     | II  |  0.30, 0.002     | - | -
E_cut | R     | I   | -0.03, 0.84      | - | -
E_cut | R     | II  |  0.15, 0.12      | - | -
E_cut | R     | III |  0.15, 0.12      | - | -
Γ     | λ_Edd | I   |  0.17, 0.08      |  0.18, 0.16  |  0.1, 0.49
Γ     | λ_Edd | II  |  0.17, 0.08      |  0.17, 0.18  |  0.12, 0.41
E_cut | Γ     | I   |  0.69, 1.75E-11  | - | -
E_cut | Γ     | II  |  0.60, 4.11E-12  | - | -
E_cut | Γ     | III |  0.61, 1.75E-12  | - | -
τ     | kT_e  | I   | -0.96, 1.82E-23  | - | -
τ     | kT_e  | II  | -0.66, 1.89E-10  | - | -
E_cut | M_BH/M_Sun | I   | -0.02, 0.86 | - | -
E_cut | M_BH/M_Sun | II  | -0.20, 0.04 | - | -
E_cut | M_BH/M_Sun | III | -0.20, 0.03 | - | -
Γ     | M_BH/M_Sun | I   | -0.05, 0.58 | - | -
Γ     | M_BH/M_Sun | II  | -0.05, 0.58 | - | -

We carried out physical model fits (Model-2) for the 73 sources for which E_cut could be constrained, and we could constrain kT_e for 42 sources with this model. In XSPEC, Model-2 took the following forms: * Model-2a: const × TBabs × zTBabs × (xillverCP) * Model-2b: const × TBabs × zTBabs × (relxillCP) * Model-2c: const × TBabs × zTBabs × (relxillCP+xillverCP)

All the model parameters were handled in the same way as described for Model-1. The best-fit values of the various coronal parameters found from the Comptonization model fits (Model-2) are given in Table <ref>. The distributions of the best-fit values of Γ and kT_e, as obtained from Model-2, are shown in Fig. <ref>. The median value of Γ was determined to be 1.84±0.12, and when considering only the constrained values of kT_e, the median was found to be 24±13 keV.

§ RELATION BETWEEN E_CUT AND KT_E

It is believed that the phenomenological high-energy cutoff is related to the coronal temperature as E_cut = 2-3 kT_e <cit.>. However, recent studies indicate that this simple relation between E_cut and kT_e may not be valid for all sources <cit.>. The relation can be more complicated in the case of a non-static corona, such as one with outflows <cit.>. Also, according to <cit.>, the relation E_cut = 2-3 kT_e is valid only for low values of τ and kT_e. The authors also argued that if the origin of the X-ray emission is different from thermal Comptonization, the typical relation between E_cut and kT_e may not hold. In the left panel of Fig. <ref> we show the distribution of the ratio of E_cut to kT_e. We found the ratio to vary between 2.33 and 5.35, with a median of 3.59. The right panel of Fig. <ref> shows the distribution of our sources in the E_cut versus kT_e plane, together with the E_cut = 2 kT_e (red dashed) and E_cut = 3 kT_e (green dashed) lines. From the linear least-squares fit to our sample of sources (blue dashed line in Fig. <ref>), we found E_cut = (2.44 ± 0.20) kT_e + (28.36 ± 5.96). We also calculated the Pearson correlation coefficient and the null hypothesis probability for no correlation to check the significance of the linear fit, and found r = 0.90 and p = 3.37×10^-15. These results thus indicate that, for the sample of sources studied in this work, the generally accepted relation E_cut = 2-3 kT_e holds, given the large error bars on the E_cut and kT_e measurements. Most of our sources lie around the E_cut = 3 kT_e line.

§ CORRELATION ANALYSIS

This section presents the correlation analysis between the different physical parameters obtained from the Model-1 fits to the spectra of the 112 sources.
We also examined the correlations between the coronal properties and the physical properties of the sources, such as λ_Edd and M_BH. For the latter, we had to exclude three sources from the correlation analysis, namely ESO 416-G002, IRAS F12397+3333 and UGC 10120, as we did not find black hole mass (M_BH) measurements for them in the literature. We adopted the black hole mass estimates from the second data release of optical broad emission line measurements of the BASS survey <cit.>, except for ARK 564, whose black hole mass was taken from <cit.>. To estimate L_bol, we used the 2-10 keV intrinsic luminosity. The absorption- and k-corrected intrinsic luminosities were converted to bolometric luminosities using a bolometric correction of 20, i.e. L_bol = 20 × L_2-10 keV <cit.>. The distribution of the logarithm of the Eddington ratio (L_bol/L_Edd = λ_Edd) for 109 sources is given in Fig. <ref>.

The analysis of the 112 sources using Model-1 revealed that E_cut could be constrained in 73 sources, while in the remaining 39 sources only lower limits could be determined. The constrained E_cut measurements have asymmetric errors, and the other 39 measurements provide only lower limits. Consequently, both the asymmetric errors and the lower limits must be taken into account in the correlation analysis. We therefore employed an approach similar to that described in <cit.> to perform the various correlation analyses and to determine the medians of the parameters. In the first approach, we neglected both the asymmetric errors associated with the constrained E_cut measurements and the lower limits; we considered only the constrained best-fit values of E_cut and calculated the median values. We also performed the correlation analysis between E_cut and the other parameters using only those best-fit values, fitting the parameters in logarithmic space with a linear relation: log(y) = a log(x) + b. To assess the strength of the linear correlation, we computed Spearman's rank correlation coefficient (ρ) and the null hypothesis probability (p) for no correlation. We considered a correlation to be significant if p was less than 0.01.

In the second approach, we incorporated the asymmetric errors related to the constrained E_cut measurements and the lower limits of E_cut by simulating 10^5 random points within the ranges (E_cut^min, E_cut^max) and (E_cut^LL, E_cut^MAX), respectively. For constrained E_cut with asymmetric errors, E_cut^min and E_cut^max represent the respective lower and upper bounds, while E_cut^LL denotes the lower limit obtained from the model fit. The 10^5 random points were generated between the lower limit (E_cut^LL) and a hypothetical upper bound (E_cut^MAX) of 1000 keV. Following this approach, we calculated the median of E_cut for each run and then determined the mean of the median distribution. In the third case, we handled the asymmetric errors corresponding to the constrained E_cut in the same manner as in the second case; however, the upper bound E_cut^MAX was set to 500 keV where only lower limits were available. It was necessary to keep E_cut^MAX = 1000 keV for three sources whose lower limit on E_cut exceeded 450 keV.
The median of each run was calculated, and the mean of this distribution was determined. In both cases, the linear relation (equation <ref>) was fitted between E_cut and the other parameters for each run, resulting in distributions of the slope (a), the intercept (b), Spearman's rank correlation coefficient (ρ), and the probability of no correlation (p). The median values of these distributions were used to represent the best-fit values of the correlation. All values and errors for the unweighted and simulated correlations are presented in Table <ref>. We followed a similar approach for the other parameters (Γ, R, and kT_e), simulating 10^5 points between the minimum and maximum bounds for the correlation analysis. For the reflection fraction (R) and the coronal temperature (kT_e), upper and lower limits, respectively, were obtained in some sources; in those cases the lower bound of R was set to 0.01 and the upper bound of kT_e to 150 keV.

Using the constrained values only, the median E_cut obtained from Model-1 was 104±72 keV. Considering an upper limit of 1000 keV for the censored values and the asymmetric errors associated with the constrained E_cut measurements, we obtained a median of 153±8 keV for the full sample, 158±11 keV for the moderately accreting systems (λ_Edd<0.1) and 150±10 keV for the systems with higher accretion (λ_Edd>0.1). Using the third approach, with an upper limit of 500 keV, a median of 151±7 keV was obtained for the entire sample; for the moderately (λ_Edd<0.1) and highly (λ_Edd>0.1) accreting systems we obtained medians of 152±10 keV and 147±10 keV, respectively. Our result is consistent with median E_cut values reported in the literature. For example, using a sample of unobscured Seyfert galaxies from the Swift-BAT 70-month catalogue, <cit.> reported a median value of 210±36 keV considering both censored and uncensored measurements. Applying the xillver model to 195 Seyfert 1 galaxies, <cit.> found a median E_cut of 156±13 keV.

We determined the median of kT_e using only the constrained best-fit values and obtained 24±13 keV. Including the asymmetric errors and the lower limits in the simulation, with an upper bound of 150 keV, a median of 48±5 keV was obtained. The distributions of E_cut and kT_e (constrained values and lower limits) are given in Fig. <ref> and Fig. <ref>, respectively.

We also determined the median value of kT_e for a sample of 96 sources common to this study and <cit.>. We took the E_cut estimates from <cit.> and computed kT_e using Equation <ref>. The median kT_e for the subset with constrained values was found to be 27±23 keV, consistent with our median for the constrained measurements only. For the 67 sources with only lower limits reported in <cit.>, we considered both the asymmetric errors associated with the best-fit E_cut and an upper limit of 500 keV where only a lower limit was reported, yielding a median kT_e of 85±6 keV. This value is higher than that obtained for our sample but is in line with the results presented in <cit.>. The difference in the median kT_e values between this study and <cit.> may be attributed to the higher proportion of unconstrained E_cut values in the latter work. Additionally, setting an upper limit of 500 keV for unconstrained cases biases the median towards higher temperatures.
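As a concrete illustration of the second and third approaches described above, the minimal sketch below propagates the censored values and asymmetric errors into the median and the correlation statistics. It is not the actual analysis code: the array names (ecut_lo, ecut_hi, lamedd) and their values are hypothetical, and the number of realisations is reduced for speed relative to the 10^5 quoted above.

```python
# Minimal sketch of the Monte Carlo treatment of constrained and censored E_cut values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_runs = 10_000  # the text uses 1e5 realisations; reduced here only for speed

# Hypothetical inputs, one entry per source:
#   ecut_lo, ecut_hi : lower/upper bounds (90% errors for constrained sources;
#                      lower limit and 500 or 1000 keV for censored sources)
#   lamedd           : Eddington ratios
ecut_lo = np.array([45.0, 80.0, 120.0, 200.0])
ecut_hi = np.array([70.0, 140.0, 500.0, 500.0])
lamedd = np.array([0.02, 0.15, 0.05, 0.30])

medians, rhos, pvals, slopes = [], [], [], []
for _ in range(n_runs):
    ecut = rng.uniform(ecut_lo, ecut_hi)          # one random draw per source
    medians.append(np.median(ecut))
    rho, p = spearmanr(np.log10(lamedd), np.log10(ecut))
    rhos.append(rho)
    pvals.append(p)
    a, _b = np.polyfit(np.log10(lamedd), np.log10(ecut), 1)   # log(y) = a log(x) + b
    slopes.append(a)

# Mean of the median distribution, and medians of the correlation statistics.
print(np.mean(medians), np.median(rhos), np.median(pvals), np.median(slopes))
```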
The distribution of kT_e, as calculated using Equation <ref> from the E_cut values of <cit.>, is illustrated in Figure <ref>.

§.§ Correlation between E_cut and λ_Edd

We looked for a relation between E_cut and λ_Edd. Using all three approaches of the correlation analysis, we could not find any significant correlation between them. This is in agreement with recent results in the literature <cit.>. For AGN with moderate accretion (λ_Edd<0.1) <cit.>, the observed spectral energy distribution can be explained by the standard optically thick, geometrically thin accretion disk with H/R << 1, where H is the height of the disk at radius R <cit.>. For AGN with higher λ_Edd, however, the accretion disk becomes geometrically thick with H ≤ R, and the nature of the accretion flow is therefore expected to differ from that of the moderately accreting sources <cit.>. The emergent X-ray spectrum from AGN with thick and thin accretion disks is likely to be different, and hence the connection between the accretion disk and the corona could differ between low- and high-accretion AGN. To look for any differences in the corona between low- and high-accretion AGN, we divided our sample into moderately accreting AGN (λ_Edd < 0.1) and highly accreting AGN (λ_Edd > 0.1) and carried out a linear fit to the data using equation <ref> in the E_cut versus λ_Edd plane. For the highly accreting sub-sample, we found no correlation between E_cut and λ_Edd, which is expected for sources with higher accretion rates <cit.>. From Spearman's rank correlation test, we found ρ = 0.41 and p = 0.009 considering only the constrained best-fit values of E_cut in the moderately accreting systems (λ_Edd < 0.1), but when the lower limits were included we did not find any significant relation between these two parameters (see Table <ref>).

§.§ Correlation between R and λ_Edd

We performed a simple linear fit to the data using equation <ref> to look for a correlation between R and λ_Edd. In a few cases, we could not constrain R but obtained only an upper limit. We considered both the constrained values and the upper limits in the correlation analysis. Using only the constrained best-fit values, the Spearman correlation analysis yielded ρ = -0.28 and p = 0.01. In the second approach, we considered the asymmetric errors associated with the uncensored best-fit values of R and a lower bound of 0.01 in the cases where only an upper limit was found, and performed the linear fit 10^5 times. From the distributions of ρ and p, we obtained median values of ρ = -0.18 and p = 0.07. We thus conclude that, using both uncensored and censored values of R, we could not obtain a meaningful correlation between R and λ_Edd. For the sources with moderate (λ_Edd<0.1) and high (λ_Edd>0.1) accretion rates, we did not notice any significant correlation with either approach. Thus, for our sample of objects, whether taken as a whole or divided into the two accretion regimes, the relation between R and λ_Edd is insignificant.

§.§ Correlation between R and Γ

We obtained a significant correlation between Γ and the reflection fraction R (Fig. <ref>). We used equation <ref> to perform a linear fit between these two parameters. The fit gave ρ = 0.30 and p = 0.002 considering both the constrained values and the upper limits of R, and ρ = 0.28 and p = 0.009 using only the constrained values.
Such a study of the dependence of Γ on R has been carried out several times in the past <cit.>. In most of these studies, the authors found a strong correlation between R and Γ, although <cit.> argued that this strong positive correlation could be due to model degeneracies rather than any physical effect. <cit.> suggested that the observed correlation could be explained by an internal feedback mechanism, in which the medium emitting the seed photons for the primary X-ray emission also serves as the medium for reflection. Recently, from the analysis of 14 nearby bright Seyfert galaxies, <cit.> also confirmed a strong correlation between R and Γ; the authors argued that the observed correlation could be due either to the Compton cooling process or to a changing geometry of the disk-corona system. More recently, <cit.> also reported a strong correlation between R and Γ, suggesting that the outflowing corona model could explain it.

§.§ Correlation between E_cut and R

We also looked for a correlation between E_cut and R. From the correlation analysis using the three approaches (constrained best-fit values of E_cut and R; E_cut^MAX = 1000 keV with R^MIN = 0.01; and E_cut^MAX = 500 keV with R^MIN = 0.01), we obtained p = 0.84, 0.12 and 0.12, respectively, suggesting no significant correlation between these two parameters. Previously, <cit.> reported a mild anti-correlation between E_cut and R.

§.§ Correlation between Γ and λ_Edd

Next, we examined the correlation between Γ and λ_Edd. Considering the whole sample, we noticed a positive, though insignificant, trend between these two parameters: the correlation analysis produced ρ = 0.17 and p = 0.08. We also conducted an analysis accounting for the errors in the measurements of Γ, but it did not yield any significant correlation, in agreement with what was reported by <cit.>. No significant correlation was found in the moderate (λ_Edd<0.1) and high (λ_Edd>0.1) Eddington ratio subsets. It is worth noting that previous studies <cit.> have reported a positive correlation between these two parameters.

§.§ Correlation between E_cut and Γ

The correlation between E_cut and Γ is shown in Fig. <ref>. From the correlation analysis of the linear fit, we obtained ρ = 0.69 and p = 1.75× 10^-11 considering only the constrained measurements of E_cut and Γ, suggesting a significant correlation between these two parameters. Using the second approach, we found ρ = 0.60 and p = 4.11× 10^-12, and with the third approach we also obtained a significant correlation, with ρ = 0.61 and p = 1.75× 10^-12. In both the second and third cases, the errors in Γ were also accounted for by drawing 10^5 random points between Γ^min and Γ^max at each run, where Γ^min and Γ^max are the lower and upper bounds of Γ, respectively. Similar studies of the correlation between the coronal temperature and Γ are available in the literature. From a study of 19 Seyfert galaxies using data from NuSTAR, <cit.> found no significant correlation between kT_e and Γ. In contrast, <cit.> found a positive correlation between E_cut and Γ based on the spectral analysis of 18 Seyfert galaxies using data from Swift-XRT and NuSTAR. The authors suggested that the correlation observed in their sample could result from systematic uncertainties affecting one of the two parameters or from the lack of high-quality data in the soft X-ray regime. A few recent studies reported a positive correlation between E_cut and Γ <cit.>.
Of these, <cit.> analysed a total of 46 Seyfert 1 galaxies, while <cit.> and <cit.> carried out spectral analyses of 33 and 60 sources, respectively. From an analysis of multiple epochs of observations of the source SWIFT J2127.4+5654, <cit.> found a Λ-shaped pattern: for Γ < 2.05 the source showed a steeper-when-hotter behaviour, while for Γ > 2.05 it showed a softer-when-cooler behaviour. Although the finding of <cit.> is based on multiple observations of a single source, we checked for the prevalence of such a trend in our sample. There are only a few sources in our sample with Γ > 2.05, and for these the statistical test yielded a negative trend between E_cut and Γ. However, we cannot draw conclusions about the significance of this anti-correlation, since very few sources fall in the Γ > 2.05 region. A systematic and homogeneous analysis of a larger number of sources is needed to confirm this finding.

§.§ Correlation between kT_e and τ

We calculated τ using the following equation <cit.>: τ = √(9/4 + 3/{θ [(Γ + 1/2)^2 - 9/4]}) - 3/2, where θ = kT_e/m_e c^2. Considering only the constrained values of kT_e, we found a strong negative correlation between kT_e and τ (see Fig. <ref>); Spearman's rank correlation analysis yielded ρ = -0.96 with p = 1.82× 10^-23. Earlier, <cit.> also found a strong anti-correlation between these parameters for slab and spherical geometries. The authors fitted a similar linear relation in the kT_e versus τ plane and reported a = -0.7 ± 0.2 and b = 1.8 ± 0.1 for the spherical geometry. We also found similar values of a and b from our linear fit to the data points: a = -1.24 ± 0.07 and b = 2.02 ± 0.82. Using an upper limit of 150 keV for the unconstrained kT_e values, we again confirmed a strong negative correlation between kT_e and τ, with ρ = -0.66 and p = 1.89× 10^-10 (see Table <ref>).

§ COMPARISON WITH PREVIOUS WORK

This section compares the best-fit values of E_cut from this work with those available in the literature. Of the 112 sources analysed in this work, we could constrain E_cut for 73 sources. For all 112 sources, the E_cut measurements were carried out using the most recent physical models available (xillver/relxill/(relxill+xillver)). In the past, most of these nearby unobscured AGN were analysed mainly with phenomenological models such as pexrav/pexmon. Here, we present a comparison of the E_cut measurements obtained from our analysis with those available in the literature in Table <ref>. For the majority of the sources, the literature E_cut was reported from broad-band spectral analysis of the NuSTAR data in conjunction with soft X-ray data from several other instruments, such as XMM-Newton and Swift-XRT <cit.>. In a few references, E_cut was obtained from the analysis of Swift-BAT, BeppoSAX and INTEGRAL broad-band X-ray data <cit.>. As seen from Table <ref>, our results from the analysis of the NuSTAR data alone agree with the previous analyses (see Fig. <ref>). Our derived E_cut values also match those already reported in the literature using only the NuSTAR data <cit.>. Since these sources are known to be variable, we noticed a mismatch in the E_cut values in a few cases where the epoch of the observations differs from those used in this work.
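For reference, the optical-depth relation given above is straightforward to evaluate numerically. The helper below is a sketch only (it is not part of the published analysis), and the sample medians from Model-2, Γ ≈ 1.84 and kT_e ≈ 24 keV, are used purely as example inputs.

```python
# Sketch of the coronal optical depth from the relation quoted above (spherical corona).
import numpy as np

M_E_C2_KEV = 511.0  # electron rest energy in keV

def optical_depth(gamma, kte_kev):
    """Return tau for a given photon index and coronal temperature (keV)."""
    theta = kte_kev / M_E_C2_KEV
    return np.sqrt(9.0 / 4.0 + 3.0 / (theta * ((gamma + 0.5) ** 2 - 9.0 / 4.0))) - 1.5

# Example: the Model-2 sample medians give an optical depth of roughly 3.
print(optical_depth(1.84, 24.0))   # ~3.2
```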
In Fig. <ref>, we plotted with green dots the sources whose E_cut is constrained both in this work and in the literature; the red dots represent sources with only lower limits on E_cut in both this work and the literature; the black dots represent sources with a constrained E_cut from this work and a lower limit from the literature; and the blue dots represent sources with a constrained E_cut from the literature and a lower limit from this work. The grey lines indicate the errors and the lower limits. Most of the sources lie around the 1:1 line (black dotted line), except for a few red dots for which the lower limit obtained in this work is lower than that found in the literature.

§ DISCUSSION

We examined the correlations between the various coronal properties, as well as between the coronal parameters and the physical properties of the sources studied in this work. We also examined whether moderately accreting sources (λ_Edd<0.1) have X-ray emission characteristics different from those of the highly accreting sources (λ_Edd>0.1).

From Table <ref>, we noticed a significant correlation between E_cut and Γ (see right panel of Fig. <ref>) for the entire sample of sources. Such a positive correlation between E_cut and Γ has also been reported in the past <cit.>. However, there are also instances where the correlation between E_cut and Γ was not definitively established <cit.>. According to <cit.>, the observed correlation might result from potential systematic uncertainties associated with one of the two parameters. The observed correlation could also be accounted for by the presence of an optically thin corona; in the case of an optically thin corona, such a positive correlation between E_cut and Γ is expected <cit.> (see equation <ref>). Furthermore, the relationship between these two quantities is not fully established even for individual AGN. Recently, from the X-ray spectral analysis of SWIFT J2127.4+5654 based on eight epochs of observations, <cit.> observed a Λ-shaped pattern in the E_cut versus Γ plane: below Γ = 2.05 the source showed a hotter-when-softer trend, and above Γ = 2.05 it showed a cooler-when-softer trend. We found a hotter-when-softer trend for our complete sample of sources, and the correlation is significant (see Table <ref>), but there are only a few sources with Γ > 2.05, so no statistically significant statement can be made in that regime. Observations of more sources with Γ > 2.05 are needed to confirm the trend found by <cit.>.

Our analysis also revealed a strong anti-correlation between kT_e and τ of the corona (see Fig. <ref>). Such a negative correlation between kT_e and τ is already known in the literature and is attributed either to more efficient cooling in coronae with higher opacity or to variations in the intrinsic disk emission of the sources <cit.>. The significant relations observed between the different sets of physical parameters indicate that an optically thin corona is needed to sustain a hot corona; a steeper spectrum is thus expected in this scenario.

We also found a positive correlation between R and Γ. A similar correlation has already been reported in the literature <cit.>. One plausible interpretation of the observed pattern is the Compton cooling model. According to this model, an increase in the number of input seed photons entering the corona leads to enhanced cooling of the plasma.
Consequently, this results in a steeper X-ray power law, which in turn leads to a greater proportion of X-ray photons illuminating the accretion disc, ultimately yielding a higher reflection fraction <cit.>. If the cooling efficiency were higher in an optically thicker corona than in an optically thinner one, we would expect a positive correlation between τ and R. To investigate this scenario, we conducted a correlation analysis between the two parameters and found a weak negative relationship, with ρ = -0.21 and p = 0.08. Recently, <cit.> reported a similar positive correlation between R and Γ and argued that the observed correlation could favour the outflowing corona model; for an outflowing corona, a flatter spectrum with weaker reflection is expected for higher outflow velocities.

We also looked for correlations between the coronal parameters (E_cut, Γ) and the physical parameters of the sources (λ_Edd and M_BH). In the past, several authors have reported a positive correlation between Γ and the Eddington ratio (λ_Edd) <cit.>. Most authors have used a linear relationship similar to Equation <ref> to investigate the connection between these two parameters. In our study, we also explored this relationship and found a slope of 0.26 for the correlation, with ρ = 0.17 and p = 0.08 from Spearman's correlation analysis. Our findings align with the slopes reported by <cit.> and <cit.>, both of whom identified a similar slope of approximately 0.3 in their correlation analyses between Γ and λ_Edd. However, <cit.>, in their examination of SDSS quasars with archival XMM-Newton observations, reported a steeper slope (∼0.6) for this correlation. More recently, <cit.> employed BASS data and found a considerably weaker and flatter correlation (slope ∼0.15). <cit.> also argued that as M_BH decreases, the number of optical-UV seed photons increases; with a larger supply of seed photons the corona cools more rapidly, and a softer spectrum is expected. Based on this argument, one should expect a positive correlation between Γ and M_BH. From the analysis of our sample of sources, we could not confirm such a trend.

In our analysis, we also investigated the relationship between E_cut and λ_Edd. When considering the entire sample, we did not observe any significant correlation between these two parameters. This finding is in line with similar results reported by <cit.>. However, we uncovered an opposite pattern when we divided the sources into two distinct accretion regimes: for sources with moderate accretion, we identified a weak positive correlation between E_cut and λ_Edd, whereas for sources with higher Eddington ratios we found a weak negative relationship. This different behaviour of the correlation in the two accretion regimes supports the conclusion put forth by <cit.>, who argued that the primary driver of the cut-off energy is the Eddington ratio.

§ SUMMARY

In this study, we analysed the 3-79 keV NuSTAR spectra of a sample of 112 Seyfert 1 galaxies, with data publicly available between August 2013 and May 2022 in the NuSTAR Master Catalog. The motivation was to carry out a systematic study of the coronal properties of Seyfert 1 type AGN.
From the physical model fits (Model-1) to the spectra of the 112 sources, we could constrain E_cut in 73 sources. For these, we fitted a physically motivated Comptonization model (Model-2) to the spectra to derive various coronal properties, and we could constrain kT_e in 42 sources. The results of this study are summarized below:

* From the Model-1 fits to the source spectra, we calculated the median value of Γ to be 1.79±0.18. Using Model-2, we derived a median Γ of 1.84±0.12.
* When considering an upper limit of 1000 keV for the censored E_cut measurements, and accounting for the asymmetric errors in the constrained E_cut values, the median value of E_cut for the entire sample is 153±8 keV. In the sub-sample with moderate accretion rates (λ_Edd<0.1), the median E_cut is 158±11 keV, while in the sub-sample with high accretion rates (λ_Edd>0.1), it is 150±10 keV.
* Using both the constrained and the censored kT_e values (with an upper bound of 150 keV), the median value of kT_e is 48±5 keV.
* For our sample of sources we found that E_cut is strongly correlated with kT_e, as E_cut = (2.44 ± 0.20) kT_e + (28.36 ± 5.35). This is in agreement with the notion that the X-ray spectra of AGN are related to the temperature of the corona as E_cut = 2-3 kT_e. For our sample, the observations tend to follow more closely the relation E_cut = 3 kT_e.
* For our entire sample, we found a strong correlation between E_cut and Γ.
* We found a significant anti-correlation between kT_e and τ. The best-fit relation yielded a slope and intercept of -1.24 ± 0.07 and 2.02 ± 0.82, respectively.
* We observed a significant correlation between R and Γ.
* All these correlations indicate that an optically thin corona is necessary to sustain a hotter corona with a steeper spectrum. With increasing accretion rate, the hotter corona could move vertically away from the central engine and become optically thinner. Above a certain luminosity, the corona could interact more efficiently with the seed photons, which cools it down and increases the reflection fraction.

A systematic and homogeneous analysis of a larger sample of sources is needed to establish the correlations observed between the various physical quantities, thereby enhancing our understanding of the AGN corona.

We thank the NuSTAR Operations, Software and Calibration teams for support with the execution and analysis of these observations. This research has made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA). C.R. acknowledges support from Fondecyt Regular (grant 1230345) and ANID BASAL (project FB210003). This research has also used data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), a service of the Astrophysics Science Division at NASA/GSFC, and the NuSTAR FTOOLS [https://heasarc.gsfc.nasa.gov/ftools/].

§ ADDITIONAL TABLE

Details of the sources analysed in this work. The columns are (1) the name of the source, (2) right ascension (h:m:s), (3) declination (d:m:s), (4) redshift, (5) type of the source, (6) observation ID (OBSID), (7) count rate (counts s^-1), (8) exposure time (s), (9) black hole mass, and (10) the Eddington ratio. Some of the information, including the right ascension, declination, and z, is from SIMBAD [http://simbad.cds.unistra.fr/simbad/].
Source α_2000 δ_2000 z Type OBSID Count rate Exposure M_BH/M_⊙ L_bol/L_edd1H 0419-577 04 26 00.71 -57 12 01.76 0.104 Sy1.0 60101039002 0.4 169462 8.06 0.05 1H1934-063 19 37 33.02 -06 13 04.80 0.01 Sy1.0 60702018006 0.52 65521 6.33 0.37 2E1739.1-1210 17 41 55.25 -12 11 56.58 0.037 Sy1.2 60160670002 0.3 21366 8.23 0.03 2MASS J1830231+731310 18 30 23.16 +73 13 10.71 0.123 Sy1.0 60464150002 0.16 26019 - - 2MASSJ17485512-3254521 17 48 55.13 -32 54 52.10 0.02 Sy1.0 60160677002 0.27 21801 8.02 0.01 2MASXJ04372814-4711298 04 37 28.16 -47 11 29.48 0.053 Sy1.0 30001061002 0.12 73821 7.89 0.04 2MASXJ11324928+1017473 11 32 49.27 +10 17 47.27 0.044 Sy1.0 60061212002 0.05 20469 7.44 0.04 2MASXJ12313717-4758019 12 31 37.14 -47 58 02.00 0.028 Sy1.0 60160498002 0.14 19356 7.41 0.06 2MASXJ15295830-1300397 15 29 58.33 -13 00 39.78 0.104 Sy1.0 60160617002 0.15 24227 7.52 0.63 2MASXJ1802473-145454 18 02 47.30 -14 54 55.00 0.035 Sy1.0 60160680002 0.59 19958 7.56 0.24 2MASXJ18470283-7831494 18 47 02.69 -78 31 49.60 0.074 Sy1.0 60160699002 0.22 21505 8.37 0.07 2MASXJ18560128+1538059 18 56 01.28 +15 38 05.90 0.084 Sy1.0 60160701002 0.22 21352 8.47 0.06 2MASXJ19380437-5109497 19 38 04.39 -51 09 49.38 0.04 Sy1.0 60160716002 0.24 21830 7.43 0.17 2MASXJ21192912+3332566 21 19 29.12 +33 32 56.67 0.051 Sy1.5 60061358002 0.23 21483 7.71 0.15 2MASXJ21355399+4728217 21 35 54.02 +47 28 21.89 0.025 Sy1.0 60160761002 0.24 18704 7.24 0.11 2MASXJ23013626-5913210 23 01 36.23 -59 13 21.08 0.15 Sy1.8 60160814002 0.16 19500 - - 3C 109 04 13 40.34 +11 12 14.78 0.306 Sy1.8 60301011004 0.17 89150 9.07 0.18 3C 111 04 18 21.27 +38 01 35.80 0.05 Sy1.0 60202061004 0.74 49361 8.57 0.09 3C 120 04 33 11.09 +05 21 15.61 0.034 Sy1.0 60001042003 1.31 127716 7.99 0.19 3C 206 08 39 50.58 -12 14 34.32 0.198 Sy1.2 60160332002 0.29 17390 9.22 0.10 3C 227 09 47 45.14 +07 25 20.59 0.086 Sy1.5 60061329002 0.3 17195 8.94 0.03 3C 380 18 29 31.78 +48 44 46.16 0.692 Sy1.0 60160690002 0.13 19610 - - 3C 382 18 35 03.38 +32 41 46.85 0.058 Sy1.0 60001084002 0.82 82583 8.6 0.08 3C 390.3 18 42 08.99 +79 46 17.12 0.06 Sy1.0 60001082003 1.03 47557 9.1 0.04 6dFJ1254564-265702 12 54 56.37 -26 57 02.10 0.059 Sy1.0 60363001002 0.14 20296 8.28 0.03 ARK 120 05 16 11.40 -00 08 59.15 0.03 Sy1.0 60001044004 0.99 65453 8.31 0.06 Ark 241 10 21 40.25 -03 27 13.75 0.041 Sy1.0 60160392002 0.18 20329 8.5 0.01 ARK 564 22 42 39.35 +29 43 31.31 0.025 Sy1.8 60401031004 0.28 408958 6.41 1.44 CGCG229-015 19 05 25.94 +42 27 39.76 0.028 Sy1.0 60160705002 0.13 21992 7.18 0.08 ESO 025-G002 18 54 40.26 -78 53 54.10 0.029 Sy1.0 60160700002 0.24 27978 7.35 0.11 ESO 031-G008 03 07 35.34 -72 50 02.50 0.028 Sy1.0 60160141002 0.19 31655 7.1 0.15 ESO 209-G012 08 01 57.97 -49 46 42.39 0.04 Sy1.5 60160315002 0.29 23715 8.11 0.05 ESO 323-G077 13 06 26.12 -40 24 52.59 0.015 Sy1.5 60202021006 0.13 43403 7.05 0.02 ESO 416-G002 02 35 13.45 -29 36 17.25 0.059 Sy1.9 60061340002 0.1 20606 - - ESO 511-G030 14 19 22.40 -26 38 41.13 0.022 Sy1.0 60502035008 0.12 41807 7.29 0.03 ESO381-G007 12 40 46.96 -33 34 11.84 0.055 Sy1.5 60160508002 0.12 21250 8.03 0.04 FAIRALL 1146 08 38 30.77 -35 59 33.33 0.032 Sy1.5 60061082002 0.34 21278 7.52 0.14 Fairall 51 18 44 53.98 -62 21 52.87 0.014 Sy1.0 60402014002 0.24 63532 7.33 0.02 GRS 1734-292 17 37 28.38 -29 08 02.11 0.021 Sy1.0 60301010002 0.15 26020 7.84 0.10 H1821+643 18 21 57.21 +64 20 36.22 0.297 Sy1.0 60160683002 0.37 22173 9.48 0.19 HE 1143-1810 11 45 40.46 -18 27 14.96 0.033 Sy1.0 60302002006 0.69 23096 7.38 0.40 HE1136-2304 11 38 51.00 -23 21 35.34 0.027 Sy 
80002031003 0.26 63565 6.97 1.09 IC 1198 16 08 36.38 +12 19 51.60 0.033 Sy1.5 60361014002 0.11 26973 7.51 0.04 IC 4329A 13 49 19.26 -30 18 34.21 0.016 Sy1.2 60001045002 2.61 162390 7.88 0.11 IGR J14471-6414 14 46 28.20 -64 16 24.00 0.053 Sy1.2 60061257002 0.1 15042 7.61 0.09 IGRJ14552-5133 14 55 17.51 -51 34 15.18 0.016 Sy1.0 60401022002 0.23 100942 6.96 0.08 IGRJ19378-0617 19 37 33.02 -06 13 04.80 0.01 Sy1.0 60101003002 0.52 65521 6.33 0.37 IRAS 05589+2828 06 02 10.47 +28 28 19.40 0.033 Sy1.0 60061062002 0.78 29276 8.32 0.05 IRAS 09149-6206 09 16 09.36 -62 19 29.56 0.057 Sy1.0 90401630002 0.4 112121 8.76 0.03 IRAS F12397+3333 12 42 10.60 +33 17 02.66 0.044 Sy1.0 60501007002 0.16 48709 - - IRAS04124-0803 04 14 52.66 -07 55 39.68 0.039 Sy1.0 60761001002 0.32 18345 8 0.06 IRAS04392-2713 04 41 22.53 -27 08 19.33 0.084 Sy1.0 60160201002 0.19 19553 9.63 0.00 KUG 1141+371 11 44 29.87 +36 53 08.61 0.038 Sy1.0 90601618002 0.28 38562 8.06 0.05 MCG-06-30-15 13 35 53.76 -34 17 44.16 0.008 Sy1.2 60001047005 0.8 23267 6.09 0.52 MCG+05-40-026 17 01 07.77 +29 24 24.58 0.036 Sy1.0 60061276002 0.12 21000 6.86 0.25 MCG+08-11-011 05 54 53.61 +46 26 21.61 0.02 Sy1.5 60201027002 1.23 97921 7.9 0.09 MR 2251-178 22 54 05.88 -17 34 55.40 0.064 Sy1.0 60102025002 1.22 23112 8.34 0.30 MRK 1040 02 28 14.46 +31 18 41.46 0.017 Sy1.5 60101002004 0.69 64242 7.41 0.10 MRK 1044 02 30 05.52 -08 59 53.20 0.016 Sy1.0 60401005002 0.22 267078 6.23 0.90 MRK 110 09 25 12.84 +52 17 10.38 0.036 Sy1.0 60201025002 0.98 184563 7.13 1.24 MRK 1148 00 51 54.76 +17 25 58.50 0.064 Sy1.0 60160028002 0.5 22087 8.05 0.25 MRK 1310 12 01 14.35 -03 40 41.01 0.02 Sy1.0 60160465002 0.23 21131 6.83 0.17 MRK 1383 14 29 06.57 +01 17 06.15 0.086 Sy1.0 60501049002 0.18 95955 8.67 0.04 MRK 1392 15 05 56.55 +03 42 26.33 0.036 Sy1.0 60160605002 0.14 21084 7.9 0.03 MRK 1393 15 08 53.95 -00 11 49.00 0.054 Sy1.5 60376005002 0.21 30816 7.42 0.32 MRK 205 12 21 44.07 +75 18 38.24 0.071 Sy1.0 60160490002 0.21 20372 8.11 0.12 Mrk 279 13 53 03.43 +69 18 29.41 0.031 Sy1.5 60601011004 0.16 200632 7.89 0.02 MRK 290 15 35 52.40 +57 54 09.51 0.03 Sy1.0 60061266004 0.2 26348 7.64 0.05 MRK 335 00 06 19.53 +20 12 10.61 0.025 Sy1.2 60001041005 0.17 93022 7.08 0.11 MRK 359 01 27 32.52 +19 10 43.83 0.017 Sy1.5 60402021002 0.15 52526 6.11 0.43 Mrk 509 20 44 09.75 -10 43 24.72 0.034 Sy1.5 60101043002 1.19 165885 8.13 0.13 MRK 590 02 14 33.56 -00 46 00.18 0.026 Sy1.2 80502630002 0.33 68123 8.12 0.02 MRK 595 02 41 34.87 +07 11 13.85 0.027 Sy1.5 60160119002 0.06 21298 6.58 0.15 MRK 684 14 31 04.78 +28 17 14.12 0.045 Sy1.0 60160586002 0.08 20497 6.83 0.34 MRK 704 09 18 25.99 +16 18 19.63 0.029 Sy1.5 60061090002 0.27 21524 8.35 0.01 MRK 732 11 13 49.75 +09 35 10.58 0.029 Sy1.5 60061208002 0.21 26359 7.06 0.19 MRK 79 07 42 32.82 +49 48 34.78 0.022 Sy1.2 60601010002 0.58 65805 7.48 0.13 MRK 813 14 27 25.05 +19 49 52.26 0.11 Sy1.0 60160583002 0.21 24562 8.73 0.07 MRK 817 14 36 22.08 +58 47 39.39 0.031 Sy1.5 60601007002 0.21 135300 7.74 0.99 MRK 841 15 04 01.19 +10 26 15.78 0.036 Sy1.5 60101023002 0.44 23419 8.16 0.05 MRK 876 16 13 57.18 +65 43 09.95 0.121 Sy1.0 60160633002 0.1 29969 9.11 0.02 MRK 885 16 29 48.38 +67 22 41.98 0.025 Sy1.5 60160641002 0.08 28304 7.27 0.03 MRK 915 22 36 46.50 -12 32 42.89 0.024 Sy1.0 60002060004 1.53 54249 7.13 0.07 MRK 926 23 04 43.48 -08 41 08.62 0.047 Sy1.5 60201029002 1.53 106201 8.37 0.18 Mrk739E 11 36 29.30 +21 35 45.00 0.03 Sy1.0 60260008002 0.12 18547 7.48 0.05 NGC 0985 02 34 37.88 -08 47 17.02 0.043 Sy1.0 60761008002 0.39 21326 8.25 0.05 NGC 3227 
10 23 30.57 +19 51 54.28 0.004 Sy1.5 60202002002 0.96 49800 6.58 0.05 NGC 3516 11 06 47.46 +72 34 07.29 0.009 Sy1.5 60002042004 0.17 72088 7.11 0.01 NGC 3783 11 39 01.71 -37 44 19.00 0.009 Sy1.0 60101110002 1.11 41265 7.13 0.08 NGC 4051 12 03 09.61 +44 31 52.68 0.002 Sy1.5 60401009002 0.43 311139 5.95 0.02 NGC 4579 12 37 43.52 +11 49 05.49 0.005 Sy1.9 60201051002 0.17 117843 7.8 0.00 NGC 4593 12 39 39.44 -05 20 39.03 0.008 Sy1.0 60001149002 0.63 23317 6.77 0.08 NGC 5273 13 42 08.38 +35 39 15.46 0.004 Sy1.9 60061350002 0.46 21117 6.42 0.03 NGC 5548 14 17 59.54 +25 08 12.60 0.016 Sy1.5 60002044006 0.99 51460 7.97 0.04 NGC 7469 23 03 15.67 +08 52 25.28 0.017 Sy1.2 60101001002 0.75 21579 7.48 0.09 NGC 931 02 28 14.46 +31 18 41.46 0.017 Sy1.0 60101002004 0.74 64242 7.41 0.09 PG0026+129 00 29 13.70 +13 16 03.94 0.142 Sy1.0 60663003002 0.19 147374 7.82 0.85 PG0052+251 00 54 52.11 +25 25 38.98 0.155 Sy1.2 60661001002 0.13 24392 8.7 0.09 PG0804+761 08 10 58.66 +76 02 42.45 0.101 Sy1.0 60160322002 0.18 17315 7.9 0.33 RBS 1037 11 49 18.68 -04 16 50.79 0.085 Sy1.0 60061215002 0.1 40679 8.36 0.04 RBS0295 02 14 37.40 -64 30 05.06 0.074 Sy1.0 60061021002 0.13 23366 8.15 0.07 RBS0770 09 23 43.00 +22 54 32.57 0.033 Sy1.2 60602018002 0.57 42960 7.34 0.27 S52116+81 21 14 01.17 +82 04 48.35 0.084 Sy1.0 60061303002 0.36 18542 8.16 0.23 SDSS J114921.52+532013.4 11 49 21.53 +53 20 13.29 0.095 Sy1.0 60260009002 0.06 24886 8.16 0.01 SDSSJ104326.47+110524.2 10 43 26.47 +11 05 24.26 0.047 Sy1.0 60376004002 0.13 31062 8.01 0.04 SWIFTJ2127.4+5654 21 27 45.39 +56 56 34.91 0.0147 Sy1.0 60001110005 0.712 74578 7.15 0.15 UGC 10120 15 59 09.62 +35 01 47.56 0.031 Sy1.0 60560027002 0.05 62881 - - UGC 3478 06 32 47.17 +63 40 25.28 0.013 Sy1.2 60061068002 0.13 21680 - - UGC03601 06 55 49.53 +40 00 01.12 0.017 Sy1.5 60160278002 0.1 19674 7.33 0.02 UGC06728 11 45 15.94 +79 40 53.37 0.067 Sy1.2 60160450002 0.14 22615 5.28 51.96 VII ZW 653 16 25 25.95 +85 29 41.69 0.063 Sy1.0 60160639002 0.14 27580 7.46 0.26 VII ZW 742 17 46 59.94 +68 36 39.59 0.063 Sy1.0 60160676004 0.05 31393 7.37 0.10 lcccccc Best-fit parameters obtained from the model const×TBabs×zTBabs× (xillver/relxill/(relxill+xillver)) to the source spectra. E_cut is in units of KeV. 
Source Γ E_cut R χ^2/dof E_cut from the literature References1H 0419-577 1.67^+0.03_-0.04 59^+8_-7 0.25^+0.06_-0.06 1138/1096 83^+78_-31 <cit.> 63^+8_-9 <cit.> 49^+7_-5 <cit.> 1H1934-063 2.34^+0.05_-0.06 200^+102_-52 0.62^+0.17_-0.12 1223/1215 ≥126 <cit.> 2E1739.1-1210 1.89^+0.04_-0.03 >286 0.57^+0.29_-0.17 443/405 ≥230 <cit.> 2MASS J18302317+731310 1.44^+0.05_-0.04 59^+14_-11 0.36^+0.45_-0.20 298/280 60^+49_-20 <cit.> 2MASSJ17485512-3254521 1.61^+0.04_-0.04 75^+18_-14 <0.59 414/427 159^+66_-55 <cit.> 2MASXJ04372814-4711298 1.98^+0.05_-0.05 116^+96_-38 0.87^+0.49_-0.38 272/297 ≥91 <cit.> >142 <cit.> 2MASXJ11324928+1017473 2.00^+0.10_-0.10 >108 3.50^+3.00_-1.93 63/58 ≥50 <cit.> 2MASXJ12313717-4758019 1.88^+0.06_-0.06 >112 <0.89 136/178 ≥231 <cit.> 2MASXJ15295830-1300397 1.79^+0.05_-0.05 >201 <0.60 224/214 ≥34 <cit.> 2MASXJ1802473-145454 1.72^+0.06_-0.06 133^+165_-52 0.32^+0.09_-0.11 603/569 ≥74 <cit.> 66^+36_-18 <cit.> 2MASXJ18470283-7831494 1.80^+0.04_-0.04 122^+60_-36 0.59^+0.49_-0.24 250/276 ≥93 <cit.> 2MASXJ18560128+1538059 1.47^+0.04_-0.04 41^+5_-5 0.65^+0.40_-0.31 287/307 43^+20_-11 <cit.> 2MASXJ19380437-5109497 1.85^+0.05_-0.05 102^+64_-29 0.56^+0.42_-0.34 243/255 78^+191_-42 <cit.> >195 <cit.> 2MASXJ21192912+3332566 1.80^+0.04_-0.04 82^+33_-18 0.46^+0.36_-0.27 351/344 62^+150_-32 <cit.> 89^+199_-38 <cit.> 2MASXJ21355399+4728217 1.66^+0.05_-0.04 56^+15_-10 <0.89 287/292 67^+96_-23 <cit.> 55^+50_-19 <cit.> 2MASXJ23013626-5913210 1.68^+0.06_-0.06 41^+8_-6 0.68^+0.62_-0.62 174/179 31^+47_-13 <cit.> 59^+150_-26 <cit.> 3C 109 1.64^+0.03_-0.03 72^+10_-9 0.33^+0.21_-0.15 587/627 ≥115 <cit.> 112^+62_-58 <cit.> 3C 111 1.75^+0.01_-0.01 124^+13_-16 0.11^+0.07_-0.06 910/890 ≥144 <cit.> 136^+47_-29 <cit.> 3C 120 1.82^+0.01_-0.01 147^+12_-9 0.32^+0.04_-0.04 1591/1594 ≥193 <cit.> 158^+8_-7 <cit.> 3C 206 1.76^+0.05_-0.05 112^+47_-33 <0.59 272/264 ≥272 <cit.> >68 <cit.> >53 <cit.> >79 <cit.> 3C 227 1.79^+0.05_-0.04 >152 <0.24 264/290 ≥90 <cit.> >87 <cit.> 3C 380 1.66^+0.06_-0.06 >217 <0.28 198/178 - - 3C 382 1.65^+0.01_-0.01 111^+22_-19 0.13^+0.03_-0.03 1188/1244 158^+39_-76 <cit.> 133^+98_-40 <cit.> 215^+150_-60 <cit.> 3C 390.3 1.77^+0.01_-0.01 144^+34_-19 0.21^+0.07_-0.08 997/1017 166^+64_-37 <cit.> 130^+42_-32 <cit.> 120±20 <cit.> 6dFJ1254564-265702 1.58^+0.04_-0.05 32^+7_-5 <0.78 164/186 ≥42 <cit.> 91^+100_-50 <cit.> ARK 120 1.95^+0.03_-0.03 346^+422_-133 0.51^+0.10_-0.09 1147/1146 ≥292 <cit.> >763 <cit.> 233^+147_-67 <cit.> Ark 241 1.88^+0.05_-0.05 >115 0.90^+0.60_-0.47 197/214 - - ARK 564 2.41^+0.04_-0.03 73^+30_-16 0.34^+0.12_-0.07 582/574 43^+3_-3 <cit.> CGCG229-015 1.71^+0.08_-0.06 46^+14_-8 0.93^+0.74_-0.56 164/173 ≥38 <cit.> 54^+76_-22 <cit.> ESO 025-G002 1.67^+0.04_-0.04 133^+93_-26 >0.23 385/417 ≥23 <cit.> ESO 031-G008 2.04^+0.04_-0.04 >286 0.74^+0.37_-0.29 305/354 ≥76 <cit.> ESO 209-12 1.90^+0.04_-0.03 >260 0.38^+0.24_-0.22 399/427 ≥91 <cit.> ESO 323-G077 1.45^+0.03_-0.03 89^+14_-13 2.40^+0.85_-0.57 409/386 115^+114_-42 <cit.> ESO 416-G002 1.81^+0.07_-0.07 >172 0.31^+0.47_-0.23 119/125 ≥366 <cit.> ESO 511-G030 1.70^+0.04_-0.04 69^+29_-15 0.80^+0.50_-0.41 308/289 ≥591 <cit.> ESO381-G007 1.66^+0.07_-0.07 64^+35_-18 <1.00 163/161 ≥76 <cit.> FAIRALL 1146 2.03^+0.03_-0.03 138^+73_-39 1.12^+0.42_-0.35 433/423 ≥72 <cit.> >166 <cit.> Fairall 51 1.53^+0.33_-0.09 62^+115_-14 4.11^+1.49_-0.62 780/762 ≥105 <cit.> GRS 1734-292 1.67^+0.02_-0.01 75^+6_-5 0.27^+0.11_-0.06 894/923 84^+38_-26 <cit.> 53^+13_-9 <cit.> 53±10 <cit.> H1821+643 1.91^+0.03_-0.03 229^+221_-77 0.23^+0.21_-0.19 450/454 ≥130 <cit.> 
114^+159_-44 <cit.> HE 1143-1810 1.79^+0.07_-0.06 104^+24_-17 0.33^+0.15_-0.14 524/617 183^+219_-59 <cit.> HE 1136-2304 1.61^+0.02_-0.02 80^+15_-11 0.18^+0.14_-0.12 648/650 ≥63 <cit.> 97^+136_-77 <cit.> IC 1198 1.75^+0.06_-0.05 124^+107_-43 0.96^+0.68_-0.47 191/195 - - IC 4329A 1.77^+0.01_-0.01 191^+14_-10 0.32^+0.04_-0.02 2211/2088 236^+42_-26 <cit.> IGR J14471-6414 2.01^+0.08_-0.08 >153 <1.98 115/106 ≥78 <cit.> IGRJ14552-5133 1.93^+0.02_-0.02 254^+194_-72 0.55^+0.15_-0.15 741/775 ≥59 <cit.> IGRJ19378-0617 2.11^+0.07_-0.06 228^+419_-83 0.75^+0.35_-0.20 757/786 ≥126 <cit.> 241^+1377_-114 <cit.> IRAS 05589+2828 1.83^+0.11_-0.09 136^+109_-56 1.25^+0.57_-0.36 843/771 71^+20_-14 <cit.> IRAS 09149-6206 1.69^+0.14_-0.19 81^+60_-26 0.80^+0.13_-0.11 1000/1005 ≥99 <cit.> IRAS F12397+3333 2.34^+0.04_-0.04 >97 0.76^+0.38_-0.36 453/399 - - IRAS04124-0803 1.53^+0.04_-0.04 80^+21_-14 0.52^+0.27_-0.25 298/330 ≥40 <cit.> IRAS04392-2713 1.92^+0.06_-0.06 >188 0.46^+0.20_-0.28 172/185 ≥43 <cit.> KUG 1141+371 1.92^+0.11_-0.14 90^+27_-17 0.39^+0.22_-0.19 470/514 - - MCG-06-30-15 1.82^+0.11_-0.09 126^+23_-19 0.98^+0.66_-0.34 1574/1516 123^+101_-39 <cit.> 170^+240_-53 <cit.> 63^+24_-14 <cit.> >110 <cit.> MCG+05-40-026 1.77^+0.07_-0.06 104^+151_-41 <0.96 161/155 - - MCG+08-11-011 1.83^+0.01_-0.01 153^+15_-13 0.40^+0.06_-0.05 1506/1419 252^+131_-60 <cit.> 163^+53_-32 <cit.> 171^+44_-30 <cit.> 175^+110_-50 <cit.> MR 2251-178 1.63^+0.01_-0.02 96^+17_-9 <0.06 818/859 ≥59 <cit.> 132^+130_-68 <cit.> 138^+57_-38 <cit.> MRK 1040 1.88^+0.01_-0.01 300^+108_-70 0.61^+0.11_-0.10 1025/1007 ≥152 <cit.> >356 <cit.> 198^+212_-70 <cit.> MRK 1044 1.80^+0.05_-0.06 381^+553_-179 0.50^+0.14_-0.11 1004/1004 ≥99 <cit.> >120 <cit.> ≥214 <cit.> MRK 110 1.70^+0.01_-0.01 92^+12_-10 0.12^+0.02_-0.02 1624/1576 191^+207_-57 <cit.> 117^+12_-17 <cit.> MRK 1148 1.76^+0.03_-0.03 99^+30_-20 <0.22 545/532 ≥71 <cit.> 101^+11_-9 <cit.> MRK 1310 1.82^+0.04_-0.04 >173 0.28^+0.31_-0.22 304/293 ≥62 <cit.> MRK 1383 1.92^+0.02_-0.02 >276 0.98^+0.21_-0.21 818/768 - - MRK 1392 1.93^+0.06_-0.05 >187 0.84^+0.50_-0.45 185/193 ≥91 <cit.> MRK 1393 1.95^+0.04_-0.04 >295 0.37^+0.26_-0.21 352/376 ≥140 <cit.> MRK 205 1.92^+0.05_-0.05 131^+122_-45 0.60^+0.44_-0.35 259/255 ≥56 <cit.> >108 <cit.> >365 <cit.> MRK 279 1.49^+0.04_-0.05 68^+18_-13 0.18^+0.11_-0.12 989/994 ≥125 <cit.> MRK 290 1.59^+0.04_-0.04 102^+46_-25 0.33^+0.28_-0.22 316/364 184^+256_-100 <cit.> >53 <cit.> MRK 335 1.98^+0.26_-0.18 >74 4.52^+3.73_-1.10 764/771 ≥185 <cit.> MRK 359 1.91^+0.03_-0.03 >163 0.86^+0.35_-0.31 485/433 ≥40 <cit.> Mrk 509 1.77^+0.02_-0.03 123^+17_-18 0.32^+0.07_-0.07 1673/1603 102^+43_-19 <cit.> 60^+71_-23 <cit.> MRK 590 1.68^+0.02_-0.02 127^+33_-23 0.23^+0.13_-0.11 818/775 ≥112 <cit.> 66^+86_-26 <cit.> MRK 595 1.31^+0.18_-0.23 >35 frozen 104/100 75^+408_-42 <cit.> MRK 684 2.14^+0.09_-0.08 >150 <2.52 99/103 ≥127 <cit.> MRK 704 1.80^+0.04_-0.04 207^+146_-64 1.11^+0.48_-0.32 374/342 ≥261 <cit.> MRK 732 1.78^+0.04_-0.04 >279 0.20^+0.27_-0.18 324/352 81^+200_-40 <cit.> MRK 79 1.86^+0.05_-0.05 349^+516_-152 0.55^+0.18_-0.15 993/968 224^+366_-97 <cit.> 402^+165_-90 <cit.> MRK 813 1.98^+0.04_-0.04 >252 0.49^+0.35_-0.24 345/321 ≥60 <cit.> MRK 817 1.65^+0.29_-0.18 >68 1.64^+0.39_-0.32 1013/1007 ≥242 <cit.> MRK 841 1.80^+0.03_-0.03 125^+49_-30 0.42^+0.23_-0.19 467/508 ≥152 <cit.> 139^+142_-49 <cit.> MRK 876 1.81^+0.06_-0.06 140^+154_-52 0.74^+0.55_-0.40 193/187 ≥43 <cit.> MRK 885 1.90^+0.08_-0.06 >161 0.56^+0.83_-0.30 172/151 ≥212 <cit.> MRK 915 1.72^+0.03_-0.03 136^+68_-36 0.42^+0.26_-0.22 
519/547 ≥79 <cit.> 58^+11_-7 <cit.> MRK 926 1.70^+0.02_-0.01 142^+33_-19 0.11^+0.03_-0.02 1491/1493 320^+166_-79 <cit.> 211^+235_-95 <cit.> Mrk739E 2.07^+0.07_-0.07 >241 0.84^+0.43_-0.48 133/132 ≥50 <cit.> NGC 0985 1.84^+0.03_-0.03 >188 0.42^+0.21_-0.25 393/469 ≥102 <cit.> NGC 3227 1.64^+0.01_-0.01 94^+7_-6 0.62^+0.10_-0.10 1224/1163 94^+16_-12 <cit.> 60^+5_-4 <cit.> 87^+16_-12 <cit.> NGC 3516 1.90^+0.03_-0.03 >448 1.27^+0.29_-0.14 696/655 132^+87_-43 <cit.> NGC 3783 1.55^+0.07_-0.03 112^+24_-19 0.90^+0.11_-0.12 1137/1145 77^+16_-11 <cit.> 98^+79_-34 <cit.> NGC 4051 1.78^+0.07_-0.07 >452 1.33^+1.02_-0.67 1674/1647 59^+25_-13 <cit.> NGC 4579 1.73^+0.06_-0.06 82^+49_-23 0.29^+0.09_-0.07 819/739 - - NGC 4593 1.87^+0.02_-0.02 >648 0.57^+0.26_-0.13 523/571 ≥655 <cit.> NGC 5273 1.46^+0.06_-0.06 68^+25_-16 1.03^+0.33_-0.28 593/574 ≥294 <cit.> 115^+91_-37 <cit.> >220 <cit.> NGC 5548 1.71^+0.03_-0.01 118^+12_-8 0.48^+0.10_-0.06 1219/1143 ≥281 <cit.> 70^+40_-10 <cit.> NGC 7469 1.95^+0.02_-0.02 122^+27_-21 0.77^+0.19_-0.18 647/692 ≥316 <cit.> 113^+33_-22 <cit.> NGC 931 1.88^+0.01_-0.01 229^+78_-42 0.63^+0.12_-0.12 974/954 ≥152 <cit.> PG0026+129 1.82^+0.02_-0.02 110^+20_-15 0.34^+0.12_-0.11 903/856 ≥45 <cit.> PG0052+251 1.66^+0.02_-0.01 >76 <0.18 167/190 ≥137 <cit.> PG0804+761 1.94^+0.05_-0.05 >269 0.71^+0.49_-0.35 217/221 ≥67 <cit.> RBS 1037 2.00^+0.06_-0.06 >133 1.04^+0.28_-0.30 208/200 ≥34 <cit.> RBS0295 1.73^+0.06_-0.05 >91 <0.42 212/218 - - RBS0770 1.65^+0.07_-0.04 59^+18_-12 0.57^+0.22_-0.18 755/713 ≥256 <cit.> ≥267 <cit.> S52116+81 1.75^+0.04_-0.04 103^+40_-24 0.37^+0.27_-0.22 370/409 ≥175 <cit.> >93 <cit.> SDSS J114921.52+532013.4 1.53^+0.10_-0.10 29^+9_-7 <2.02 88/75 47^+86_-14 <cit.> SDSSJ104326.47+110524.2 1.72^+0.05_-0.05 >123 <0.43 220/236 ≥91 <cit.> SWIFTJ2127.4+5654 1.89^+0.01_-0.01 84^+6_-6 0.75^+0.10_-0.09 1094/1089 62^+25_-15 <cit.> 92^+26_-17 <cit.> UGC 10120 1.91^+0.07_-0.06 >225 <0.71 185/192 - - UGC 3478 1.99^+0.06_-0.06 >98 <1.07 212/196 - - UGC03601 1.49^+0.09_-0.08 58^+45_-19 <0.86 127/109 74^+240_-32 <cit.> UGC06728 1.62^+0.05_-0.05 66^+20_-14 0.55^+0.43_-0.32 223/241 73^+31_-19 <cit.> 63^+133_-25 <cit.> VII ZW 653 2.05^+0.05_-0.05 >114 1.10^+0.61_-0.51 187/213 - - VII ZW 742 1.88^+0.08_-0.08 >174 1.25^+1.04_-0.71 99/108 ≥52 <cit.> lccccccc Best-fit parameters obtained from the model const×TBabs×zTBabs× (xillverCP/relxillCP/(relxillCP+xillverCP)) to the source spectra. N_H^INT is the host galaxy hydrogen column density in units of 10^22 atoms cm^-2, kT_e is in units of KeV; the model normalization is in units of 10^-4 photons keV^-1 cm^-2 s^-1. 
Source N_H^INT Γ kT_e R N_relxillCP N_xillverCP χ^2/dofSource N_H^INT Γ kT_e R N_relxillCP N_xillverCP χ^2/dof1H 0419-577 1.95^+0.36_-0.36 1.82^+0.01_-0.01 14^+1_-1 0.15^+0.06_-0.06 - 0.79^+0.01_-0.01 1321/1255 1H1934-063 - 2.14^+0.03_-0.03 >53 0.62^+0.16_-0.12 0.97^+0.13_-0.13 - 1221/1215 2MASS J1830231+731310 - 1.64^+0.03_-0.03 17^+22_-4 <0.55 - 0.31^+0.01_-0.01 307/280 2MASSJ17485512-3254521 - 1.74^+0.03_-0.03 >15 <0.43 - 0.68^+0.02_-0.02 420/427 2MASXJ04372814-4711298 - 2.03^+0.04_-0.04 >16 0.90^+0.50_-0.41 - 0.18^+0.01_-0.01 271/297 2MASXJ1802473-145454 - 1.79^+0.03_-0.04 29^+60_-11 0.27^+0.13_-0.10 1.11^+0.18_-0.12 - 604/569 2MASXJ18470283-7831494 - 1.87^+0.04_-0.04 >17 <0.95 - 0.44^+0.02_-0.02 194/263 2MASXJ18560128+1538059 - 1.72^+0.03_-0.03 13^+3_-2 0.49^+0.40_-0.41 - 0.49^+0.01_-0.01 295/307 2MASXJ19380437-5109497 - 1.93^+0.04_-0.04 >17 <0.95 - 0.43^+0.02_-0.02 244/255 2MASXJ21192912+3332566 - 1.90^+0.03_-0.03 >15 0.38^+0.36_-0.27 - 0.45^+0.01_-0.01 354/344 2MASXJ21355399+4728217 - 1.81^+0.03_-0.03 16^+12_-4 0.40^+0.36_-0.35 - 0.43^+0.02_-0.02 291/292 2MASXJ23013626-5913210 - 1.87^+0.04_-0.04 13^+6_-3 0.52^+0.65_-0.42 - 0.30^+0.02_-0.02 176/179 3C 109 - 1.78^+0.02_-0.02 18^+8_-3 0.26^+0.17_-0.16 - 0.32^+0.01_-0.01 591/627 3C 111 2.46^+0.39_-0.38 1.84^+0.01_-0.01 37^+34_-10 0.07^+0.07_-0.06 - 2.46^+0.03_-0.03 1081/1025 3C 120 - 1.85^+0.01_-0.01 45^+19_-8 0.23^+0.04_-0.04 2.72^+0.11_-0.08 - 1566/1592 3C 206 - 1.84^+0.04_-0.04 >15 <0.52 - 0.63^+0.02_-0.02 271/264 3C 382 - 1.77^+0.01_-0.01 34^+18_-8 0.07^+0.05_-0.05 1.73^+0.02_-0.02 - 1276/1247 3C 390.3 2.55^+0.41_-0.41 1.85^+0.01_-0.01 44^+49_-12 0.17^+0.08_-0.08 - 2.44^+0.03_-0.03 1002/1017 6dFJ1254564-265702 - 1.69^+0.05_-0.04 9^+4_-2 <1.09 - 0.23^+0.01_-0.01 121/143 ARK 120 - 1.96^+0.02_-0.03 >67 0.50^+0.09_-0.07 2.27^+0.06_-0.12 - 1147/1145 ARK 564 - 2.40^+0.02_-0.02 24^+10_-6 0.46^+0.29_-0.07 0.96^+0.10_-0.26 <0.08 1168/1165 CGCG229-015 - 1.89^+0.05_-0.04 17^+42_-6 0.82^+0.83_-0.55 - 0.21^+0.01_-0.01 168/173 ESO 025-G002 - 1.74^+0.03_-0.03 >22 <0.25 - 0.69^+0.02_-0.02 389/417 ESO 323-G077 38^+3_-3 1.70^+0.02_-0.02 35^+13_-12 2.35^+0.81_-0.66 - 0.29^+0.02_-0.02 411/386 ESO 511-G030 - 1.81^+0.04_-0.03 >16 0.57^+0.55_-0.32 - 0.22^+0.01_-0.01 313/289 ESO381-G007 - 1.80^+0.05_-0.05 >12 <0.77 - 0.25^+0.01_-0.01 165/161 FAIRALL 1146 - 2.05^+0.03_-0.03 >25 1.06^+0.53_-0.31 - 0.69^+0.02_-0.02 434/423 Fairall 51 11.55^+1.60_-1.23 2.00^+0.09_-0.17 >34 4.42^+1.38_-1.04 0.39^+0.05_-0.05 - 786/761 GRS 1734-292 3.75^+0.46_-0.44 1.81^+0.01_-0.01 20^+4_-2 0.24^+0.08_-0.09 - 2.98^+0.05_-0.04 818/856 H1821+643 - 1.94^+0.03_-0.03 >39 0.21^+0.21_-0.17 - 1.04^+0.03_-0.03 451/454 HE 1143-1810 - 1.85^+0.02_-0.02 27^+19_-7 0.26^+0.15_-0.13 - 1.37^+0.02_-0.02 534/597 HE1136-2304 - 1.73^+0.02_-0.02 18^+7_-3 <0.24 - 0.40^+0.01_-0.01 656/650 IC 1198 - 1.83^+0.04_-0.04 >18 0.91^+0.75_-0.47 - 0.21^+0.01_-0.01 193/195 IC 4329A 1.89^+0.11_-0.12 1.83^+0.003_-0.003 64^+15_-12 0.30^+0.03_-0.03 - 6.27^+0.03_-0.04 2245/2088 IGRJ14552-5133 - 1.96^+0.02_-0.02 >45 0.53^+0.15_-0.13 - 0.50^+0.01_-0.01 741/775 IGRJ19378-0617 - 2.13^+0.05_-0.05 >46 0.74^+0.33_-0.19 0.91^+0.17_-0.19 - 757/786 IRAS 05589+2828 - 1.90^+0.11_-0.07 43^+120_-24 1.00^+1.03_-0.40 1.19^+0.14_-0.27 - 788/738 IRAS 09149-6206 - 1.90^+0.10_-0.08 18^+21_-4 2.05^+0.68_-0.47 0.60^+0.02_-0.02 - 992/1005 IRAS04124-0803 - 1.67^+0.03_-0.03 15^+4_-3 0.39^+0.27_-0.23 - 0.58^+0.02_-0.02 292/330 KUG 1141+371 - 1.92^+0.02_-0.02 29^+79_-11 0.33^+0.22_-0.18 - 0.58^+0.01_-0.01 517/547 MCG-06-30-15 - 1.97^+0.06_-0.04 
50^+98_-11 0.58^+0.39_-0.12 1.71^+0.31_-0.47 0.89^+0.19_-0.18 1572/1517 MCG+05-40-026 - 1.84^+0.06_-0.06 >16 <0.78 - 0.27^+0.02_-0.01 162/155 MCG+08-11-011 - 1.88^+0.01_-0.01 56^+26_-16 0.35^+0.06_-0.05 2.92^+0.02_-0.02 - 1506/1419 MR 2251-178 - 1.76^+0.01_-0.01 22^+8_-4 <0.02 - 2.59^+0.02_-0.02 835/859 MRK 1040 - 1.91^+0.01_-0.01 >62 0.60^+0.12_-0.10 - 1.48^+0.02_-0.02 1029/1007 MRK 1044 - 2.15^+0.03_-0.05 >96 0.53^+0.16_-0.15 0.63^+0.16_-0.15 0.19^+0.10_-0.11 1388/1286 MRK 110 - 1.82^+0.01_-0.01 25^+4_-3 <0.09 2.03^+0.01_-0.01 - 1630/1576MRK 1148 - 1.85^+0.02_-0.02 25^+74_-8 <0.17 - 1.04^+0.02_-0.02 555/532 MRK 205 - 1.97^+0.04_-0.04 >20 0.57^+0.43_-0.35 - 0.47^+0.02_-0.02 260/255 MRK 279 - 1.68^+0.01_-0.01 15^+2_-2 0.13^+0.09_-0.08 0.29^+0.02_-0.01 0.09^+0.02_-0.02 1007/995 MRK 290 - 1.70^+0.03_-0.03 >15 <0.51 - 0.43^+0.01_-0.01 319/364 Mrk 509 - 1.82^+0.02_-0.01 23^+3_-2 0.19^+0.03_-0.04 2.15^+0.05_-0.04 - 1691/1603 MRK 590 2.07^+0.59_-0.57 1.82^+0.03_-0.03 >49 0.20^+0.12_-0.11 0.92^+0.19_-0.16 - 815/774 MRK 704 10.24^+1.38_-1.36 1.86^+0.03_-0.03 >30 1.11^+0.50_-0.37 - 0.62^+0.04_-0.03 374/342 MRK 79 - 1.88^+0.04_-0.04 >47 0.53^+0.19_-0.16 1.26^+0.06_-0.13 - 994/968 MRK 841 - 1.86^+0.02_-0.02 33^+185_-14 0.37^+0.23_-0.19 - 0.89^+0.02_-0.02 469/508 MRK 876 - 1.86^+0.05_-0.04 >15 0.70^+0.56_-0.39 - 0.18^+0.01_-0.01 192/187 MRK 915 6.90^+0.98_-1.04 1.81^+0.03_-0.03 >28 0.33^+0.25_-0.16 - 0.45^+0.01_-0.02 522/547 MRK 926 - 1.78^+0.01_-0.01 31^+7_-5 0.10^+0.02_-0.02 3.13^+0.11_-0.09 - 1525/1494 NGC 3227 2.66^+0.34_-0.33 1.76^+0.01_-0.01 26^+6_-4 0.56^+0.09_-0.09 - 1.89^+0.02_-0.02 1198/1163 NGC 3783 - 1.74^+0.02_-0.02 48^+48_-9 0.41^+0.20_-0.16 2.33^+0.23_-0.15 0.76^+0.27_-0.26 1270/1132 NGC 4579 - 1.84^+0.03_-0.03 23^+23_-7 0.25^+0.09_-0.07 0.33^+0.04_-0.03 - 828/739 NGC 5273 - 1.79^+0.06_-0.10 >18 0.98^+0.53_-0.33 1.06^+0.21_-0.20 - 589/572 NGC 5548 3.32^+0.34_-0.35 1.81^+0.01_-0.01 35^+9_-7 0.43^+0.10_-0.09 - 2.23^+0.07_-0.03 1228/1143 NGC 7469 - 2.00^+0.02_-0.02 45^+52_-17 0.74^+0.20_-0.17 - 1.59^+0.03_-0.03 650/692 NGC 931 - 1.91^+0.01_-0.01 >50 0.58^+0.14_-0.09 - 1.40^+0.02_-0.02 979/954 PG0026+129 - 1.89^+0.01_-0.01 22^+9_-4 0.30^+0.12_-0.11 - 0.39^+0.12_-0.11 899/856 RBS0770 - 1.78^+0.02_-0.02 15^+3_-2 0.42^+0.16_-0.13 0.75^+0.04_-0.04 - 757/713 S52116+81 - 1.84^+0.03_-0.03 >18 0.29^+0.26_-0.20 - 0.74^+0.02_-0.02 373/409 SDSS J114921.52+532013.4 - 1.76^+0.07_-0.07 7^+1_-1 1.38^+3.34_-1.26 - 0.08^+0.01_-0.01 88/75 SWIFTJ2127.4+5654 - 1.96^+0.01_-0.01 21^+3_-2 0.72^+0.10_-0.10 - 1.49^+0.01_-0.01 1084/1089 UGC03601 - 1.68^+0.07_-0.06 >9 <0.53 - 0.24^+0.01_-0.01 129/109 UGC06728 - 1.73^+0.07_-0.07 17^+25_-5 0.34^+0.42_-0.21 0.31^+0.58_-0.39 - 223/239 § KT_E: THE TEMPERATURE OF THE CORONA Here, we discuss the results obtained from the spectral analysis using the modelconst × tbabs × ztbabs × (xillverCP/relxillCP/(relxillCP+xillverCP)). We also compare the best-fit values of our analysis with the previously measured values of kT_e from the literature, if available. Among the 42 sources for which we could constrain kT_e, 18 sources were already discussedby us earlier <cit.>.Therefore, here we give details on the rest of the 24 sources. 2MASXJ23013626-5913210: This source at a redshift z = 0.150 was observed by NuSTAR once in 2017. We used Model-2a to estimate the coronal properties of the source. We found the source spectra to be well described with Γ = 1.87^+0.04_-0.04 and kT_e = 13.35^+06.23_-03.48 keV. 
Previously, using a similar Comptonization model, <cit.> found a value of Γ = 1.78^+0.14_-0.11 and kT_e = 11.10^+12.22_-02.87 keV. Our results are thus in agreement with <cit.>.

3C 120: This is a radio-loud Seyfert 1 galaxy at z=0.033. NuSTAR observed the source twice on the same day in February 2013. Of the two observations, we analysed the spectrum with the highest exposure time using Model-2b. We obtained values of Γ = 1.85^+0.01_-0.01 and kT_e = 45.31^+18.79_-07.82 keV. Analysing the same data set using relxillCP, <cit.> found a lower limit of kT_e > 91 keV.

3C 390.3: This radio-loud Seyfert 1 galaxy at z=0.05613 was observed twice by NuSTAR on the same day in May 2013. From spectral analysis of the data using Model-2a we obtained Γ = 1.84^+0.01_-0.01 and kT_e = 44.13^+54.75_-12.50 keV. For the same data set, <cit.> and <cit.> reported lower limits of kT_e > 46 keV and kT_e > 49.86 keV, respectively.

ARK 564: This source was observed by NuSTAR three times between May 2015 and November 2018. Of these, results on the observation carried out by NuSTAR in September 2018 are reported in this work for the first time. Fitting the observed data with Model-2c, we obtained Γ = 2.40^+0.02_-0.02 and kT_e = 24.28^+13.60_-04.29 keV. From analysis of the data acquired by NuSTAR in 2015, <cit.> determined kT_e = 15±2 keV, arguing that the source hosts the coolest corona. Also, based on two epochs of data, <cit.> reported variation in the temperature of the corona.

HE 1136-2304: Active Galactic Nuclei exhibit flux variations across various timescales and across the entire electromagnetic spectrum. In the last decade, an increasing number of sources have displayed notably more pronounced changes in their flux and spectral characteristics, both in the X-ray range and in the optical/UV range. These events are often referred to as changing-look AGN <cit.>. HE 1136-2304 is one such changing-look AGN. It was found to change its optical spectral nature from Type 2 in 1993 to Type 1.5 in 2014 <cit.>. It was observed by NuSTAR twice on the same day in July 2014. Of the two, we analysed the spectrum with the maximum exposure. The best-fit values obtained from fitting Model-2a to the source spectrum were Γ = 1.78^+0.03_-0.02 and kT_e = 27.81^+78.85_-09.30 keV. From an analysis of the same NuSTAR spectrum using relxillCP, <cit.> obtained a lower limit of kT_e > 21 keV.

IC 4329A: This Seyfert 1 galaxy was observed six times by NuSTAR, once in 2012 and the others during August 2021. We analysed here the NuSTAR spectrum taken in 2012. Fitting the spectrum using Model-2a, we obtained best-fit values of Γ and kT_e of 1.83^+0.003_-0.003 and 64.16^+15.41_-11.63 keV, respectively. This source has been studied extensively in the past. For example, <cit.> reported kT_e = 37±7 keV from fitting compTT for a slab geometry. <cit.> estimated kT_e = 71^+37_-15 keV using the relxillCP model. <cit.> also found kT_e = 82^+16_-7 keV from a xillverCP fit to the source spectrum.

2MASXJ21355399+4728217: This Seyfert galaxy was observed by NuSTAR in September 2019. From the analysis of the source spectrum, <cit.> reported E_cut = 55^+50_-19 keV. We analysed the same observation ID using Model-2a and found kT_e = 15.57^+12.24_-03.90 keV.

IRAS 04124-0803: Analysis of the NuSTAR observations (carried out in September 2021) of this source is presented for the first time.
From fitting Model-2a to the source spectrum, we obtained best-fit values of Γ = 1.66^+0.03_-0.03 and kT_e = 14.88^+03.70_-02.57 keV.

IRAS 09149-6206: Results on NuSTAR observations of this source are reported for the first time. This source was observed by NuSTAR twice between July and August 2018. We modelled the Comptonized spectrum (observed in August 2018) and estimated the best-fit value of kT_e using Model-2b. From the model fit to the spectrum we obtained Γ = 1.90^+0.11_-0.09 and kT_e = 18.09^+16.87_-04.07 keV.

IRAS 05589+2828: This Seyfert 1 galaxy, situated at z=0.02940, was observed by NuSTAR in April 2020. The temperature of the corona of the source is reported for the first time. From the physical model fit to the observed spectrum, we found values of Γ = 1.90^+0.11_-0.07 and kT_e = 42.90^+120.46_-23.92 keV.

Mrk 1148: This Seyfert 1 galaxy was observed by NuSTAR in January 2018. We carried out the spectral analysis using Model-2a. The best-fit values obtained from the model fit to the spectrum are Γ = 1.86^+0.02_-0.02 and kT_e = 24.04^+19.81_-06.76 keV. Recently, analysing the same spectrum, both <cit.> and <cit.> found values of kT_e > 18 keV.

Mrk 509: NuSTAR observed this source twice between April and June 2015. In this work, we analysed the spectrum taken in April 2015. From fitting Model-2c to the observed spectrum we obtained Γ = 1.86^+0.01_-0.02 and kT_e = 35.78^+06.78_-05.72 keV. From an analysis of the same spectrum using the relxillCP model, <cit.> reported kT_e = 24±2 keV.

2MASXJ18560128+1538059: This Seyfert 1 galaxy was observed by NuSTAR in 2017, and from the analysis of the source spectrum using our Model-2a, we found kT_e = 12.32^+3.12_-2.36 keV. Using the same observation ID, <cit.> reported E_cut = 43^+20_-11 keV.

PG 0026+129: NuSTAR observed this Seyfert 1 galaxy once in January 2021 and results on the analysis of the observation are reported for the first time. From a Model-2a fit to the observed spectrum we obtained best-fit values of Γ = 1.89^+0.01_-0.01 and kT_e = 22.18^+08.88_-04.03 keV.

SWIFTJ2127.4+5654: This source, classified as a narrow-line Seyfert 1 galaxy, was observed by NuSTAR nine times between September 2012 and December 2018. We analysed the observation carried out by NuSTAR in September 2012, as it has the maximum exposure time. By fitting the observed spectrum using Model-2a, we obtained Γ = 1.96^+0.01_-0.01 and kT_e = 20.70^+03.36_-01.94 keV. From an analysis of the same spectrum, <cit.> reported a kT_e of 21^+2_-2 keV.

IGRJ19378-0617: This source is situated at z=0.0103. It is classified as a Seyfert 1 galaxy and was observed six times by NuSTAR between 2015 and 2022. From fitting the source spectrum using Model-2a, we found kT_e = 49.35^+36.94_-13.04 keV. From spectral analysis of the source spectrum, <cit.> reported kT_e > 122 keV.

Fairall 51: NuSTAR observed this Seyfert 1 galaxy four times between 2018 and 2021. We analysed the NuSTAR spectrum observed in June 2018. From fitting the source spectrum using Model-2a, we found kT_e = 19.48^+6.54_-1.83 keV.

Mrk 279: This Seyfert 1 galaxy was observed four times by NuSTAR between 2019 and 2020. We analysed the August 2020 spectrum using Model-2c and found kT_e = 16.38^+1.72_-1.55 keV. By analysing the source spectrum taken in October 2019, <cit.> reported a lower limit of kT_e > 84 keV.

ESO 323-G077: This source is classified as a Seyfert 1.5 galaxy <cit.>, situated at z = 0.0155. NuSTAR observed this source six times between August 2016 and February 2017. We analysed the January 2017 NuSTAR data.
From the Model-2a fit to the source spectrum, we obtained kT_e = 35.21^+13.02_-11.89 keV. For this source, <cit.> reported a lower limit of kT_e > 34 keV.

3C 109: This Seyfert galaxy was observed by NuSTAR twice in August 2017. We analysed the observation with the maximum exposure time. By fitting Model-2a to the source spectrum, we found kT_e = 18.09^+6.91_-2.72 keV.

RBS0770: This source was observed four times between 2012 and 2021 by NuSTAR. By fitting Model-2a to the source spectrum we found kT_e = 17.71^+4.30_-2.38 keV. From the analysis of the same observation, <cit.> reported a lower limit of kT_e > 24 keV.

CGCG229-015: This nearby Seyfert 1 galaxy was observed once by NuSTAR in February 2018. From an analysis of the same observation ID, <cit.> reported E_cut = 54^+13.02_-11.89 keV. From the Model-2a fit to the source spectrum, we obtained kT_e = 17.00^+41.62_-5.61 keV.

3C 382: This Seyfert galaxy was observed seven times between 2012 and 2016. We analysed the 2013 spectrum and obtained kT_e = 33.07^+16.81_-7.76 keV. From the analysis of the same observation, <cit.> reported E_cut = 132.75^+98.32_-39.98 keV.

SDSS J114921.52+532013.4: This Seyfert 1 galaxy was observed once in 2016. From the Model-2a fit to the source spectrum, we found kT_e = 6.50^+1.25_-0.97 keV.
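Statements such as "our results are thus in agreement with" can be made quantitative by comparing two measurements with asymmetric error bars. The following minimal Python sketch does this for the 2MASXJ23013626-5913210 values quoted above by checking whether the two 1σ intervals overlap; this is only a crude illustrative criterion of our own, not the statistical comparison performed in the analysis itself.

```python
def interval(best, err_plus, err_minus):
    """1-sigma interval (lo, hi) for a value quoted as best^{+err_plus}_{-err_minus}."""
    return best - err_minus, best + err_plus

def agree(m1, m2):
    """Crude agreement check: do the two 1-sigma intervals overlap?"""
    lo1, hi1 = interval(*m1)
    lo2, hi2 = interval(*m2)
    return max(lo1, lo2) <= min(hi1, hi2)

# kT_e for 2MASXJ23013626-5913210 (keV): this work vs. the literature value quoted in the text.
this_work = (13.35, 6.23, 3.48)
literature = (11.10, 12.22, 2.87)
print(agree(this_work, literature))  # True: the 1-sigma intervals overlap
```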
{ "authors": [ "Indrani Pal", "Anju A.", "H. Sreehari", "Gitika Rameshan", "C. S. Stalin", "Claudio Ricci", "Stefano Marchesi" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231027151325", "title": "On the properties of X-ray corona in Seyfert 1 galaxies" }
The purpose of this work is to build a framework that allows for an in-depth study of various generalisations to inhomogeneous space of models of Borodin-Ferrari <cit.>, Dieker-Warren <cit.>, Nordenstam <cit.>, Warren-Windridge <cit.> of interacting particles in interlacing arrays, both in discrete and continuous time, involving both Bernoulli and geometric jumps. The models can in addition be either time-inhomogeneous or particle-inhomogeneous. We show that the correlation functions of these models are determinantal and compute explicitly the correlation kernel for the fully-packed initial condition. Using these formulae we prove a short-time asymptotic for these dynamics, in general inhomogeneous space, to the discrete Bessel determinantal point process with parameter depending on the inhomogeneous environment only through a kind of average. We moreover prove a number of closely related results including the following. We prove that the autonomous, inhomogeneous in space and time, TASEP-like and pushTASEP-like particle systems on the left and right edge of the array respectively have explicit transition kernels and that from any deterministic initial condition their distributions are marginals of a signed measure with determinantal correlation functions. We reinterpret the distribution of these dynamics in arrays in terms of coherent sequences of measures on a natural inhomogeneous generalisation of the celebrated Gelfand-Tsetlin graph <cit.> and prove that all of them are extreme points in the convex set of coherent measures. We connect some of our constructions of non-intersecting paths to independent walks with location-dependent jumps conditioned to never intersect by computing explicitly the intersection probabilities. We prove a novel duality relation between dynamics in inhomogeneous space and dynamics with inhomogeneities on the level of the array (particle inhomogeneities). We extend the work of Nordenstam <cit.> on the shuffling algorithm for domino tilings of the Aztec diamond and its relation to push-block dynamics in interlacing arrays, from the uniform weight to general weights on the tilings. We then connect this, for a special class of weights, back to our previous results. We also consider non-intersecting walks in inhomogeneous space and time with fixed starting and end points and obtain a formula for their correlation functions, involving, among other ingredients, an explicit Riemann-Hilbert problem, generalising some of the results of Duits and Kuijlaars <cit.>. We then prove a limit theorem for the bottom lines in this line-ensemble, under some technical conditions, generalising some of the results of Berggren and Duits <cit.>. The main computational tool throughout this work is a natural generalisation of a Toeplitz matrix, that we call inhomogeneous Toeplitz-like matrix 𝖳_𝐟 with (a possibly matrix-valued) symbol 𝐟.

§ INTRODUCTION

The purpose of this work is an in-depth study of various integrable probabilistic models, of statistical mechanical nature, including systems of interacting particles, measures on non-intersecting paths and random tiling models in which the randomness depends in an inhomogeneous way on the underlying space. As far as we know, there is no precise definition of what an integrable probabilistic model really is.
But an informal working definition could go as follows: a model that enjoys explicit formulae for the expectations of some class of observables that can be used to analyse the model further, especially asymptotically. In some cases this class of observables is very restricted, while in others one has explicit knowledge of workable formulae for all the correlation functions. Using these explicit formulae, it is sometimes possible to completely analyse the asymptotic behaviour of the model, while sometimes one needs to combine this knowledge with the use of probabilistic or sometimes geometric arguments to perform such analysis. This has been particularly fruitful in studying models in the KPZ universality class <cit.>, which has been the main drive in the search for integrable models in the last two decades. We note however that not all models in the KPZ class are integrable, see for example <cit.> for a model where explicit formulae are completely absent, and vice-versa not all integrable models necessarily have KPZ behaviour.

In this paper we introduce and study space-inhomogeneous generalisations of the much-studied (in the integrable probability literature) models of Borodin-Ferrari <cit.>, Dieker-Warren <cit.>, Nordenstam <cit.> and Warren-Windridge <cit.> of interacting (through so-called push-block dynamics) particles in interlacing arrays, both in discrete and continuous time, involving both Bernoulli and geometric jumps. We also study in detail their one-dimensional Markovian marginals of TASEP-like and pushTASEP-like systems and related models of non-intersecting paths and models of tilings. A more detailed review of our contributions will follow shortly. The main message is that, after building the right tools, a lot of the integrable structures that exist in the homogeneous models can be shown to exist in the space-inhomogeneous setting as well. Unsurprisingly, the explicit formulae are more involved but nevertheless still useful. Employing them, certain scaling limits of these models are analysed already in this paper, although a general study of asymptotic behaviours is beyond the scope of the present work. Finally, some other models of interacting particles in inhomogeneous space have been studied in recent years <cit.>, some of them very closely related (in fact special cases) to our models, and a more detailed literature review will follow in Section <ref> where we state our results precisely.

We now turn to our main tool. The use of Toeplitz matrices, with both scalar and matrix symbols (in which case they are normally called block Toeplitz matrices), has been very important in the study of integrable probabilistic models, see <cit.>. A suitable generalisation plays an important role here and is also the common thread throughout our work. A natural-looking generalisation of a Toeplitz matrix with scalar symbol f could be the following. Given two sequences of functions 𝐮(z)=(u_k(z))_k∈ℤ_+ and 𝐯(z)=(v_k(z))_k∈ℤ_+ and a contour ℭ_𝐮,𝐯 such that we have the "biorthogonality" relation,
1/2πi∮_ℭ_𝐮,𝐯u_k(z)/v_j(z)dz= 1, if k=j, 0, otherwise,
we could define the matrix [𝒯_f^𝐮,𝐯(x,y)]_x,y∈ℤ_+ associated to the symbol f by, with x,y∈ℤ_+,
𝒯_f^𝐮,𝐯(x,y)=1/2πi∮_ℭ_𝐮,𝐯u_x(z)/v_y(z)f(z)dz.
Clearly, the Toeplitz matrix with symbol f is the special case u_k(z)=z^k, v_k(z)=z^k+1 and ℭ_𝐮,𝐯={z∈ℂ:|z|=1}. Of course, the above is only interesting and useful if we can establish desirable properties for 𝒯_f^𝐮,𝐯 and find applications for it.
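As a quick numerical illustration of the definition above, the Python sketch below evaluates 𝒯_f^𝐮,𝐯(x,y) in the Toeplitz special case u_k(z)=z^k, v_k(z)=z^{k+1} by quadrature on the unit circle and checks that it reproduces the Fourier coefficients of the symbol, i.e. an ordinary Toeplitz matrix. The symbol f(z)=e^z and the quadrature size are illustrative choices only.

```python
import numpy as np
from math import factorial

def T_uv(f, x, y, n_quad=2048):
    """(1/2πi) ∮_{|z|=1} u_x(z)/v_y(z) f(z) dz with u_k(z)=z^k, v_k(z)=z^{k+1}."""
    theta = 2 * np.pi * np.arange(n_quad) / n_quad
    z = np.exp(1j * theta)
    # dz = i z dθ cancels the 1/(2πi) up to a factor z, leaving the mean of z^{x-y} f(z)
    return np.mean(z ** (x - y) * f(z))

f = np.exp                                      # example symbol f(z) = e^z
for x, y in [(0, 0), (1, 3), (2, 2), (4, 1)]:
    numeric = T_uv(f, x, y).real
    exact = 1 / factorial(y - x) if y >= x else 0.0   # Fourier coefficients of e^z
    print(x, y, round(numeric, 10), round(exact, 10))
```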
With this in mind, the sequences of functions we take in this work will not be arbitrary, but rather they will be families of polynomials which depend on the inhomogeneous environment behind the probabilistic applications we are interested in. This allows us to define a generalisation of a Toeplitz matrix, that we call the inhomogeneous Toeplitz-like matrix 𝖳_f associated to a symbol f, see Sections <ref> and <ref>, and for matrix-valued symbols Section <ref>, and develop a framework for probabilistic applications around it in the rest of the paper. As it turns out, 𝖳_f is actually similar to a standard Toeplitz matrix, with symbol f(1-z), via an explicit, albeit complicated, change of basis matrix. However, we stress that we cannot just transfer results from the standard Toeplitz matrix setup to the inhomogeneous one through this similarity (with only some exceptions). See Section <ref> for more details.

An important source of integrable probabilistic models, whose correlation functions can be computed explicitly, comes from the Schur process <cit.> and Schur dynamics on it <cit.>. These constructions have their origin in symmetric function theory <cit.> and the Schur polynomials in particular <cit.>. A number of, but far from all, the results in our paper could be rephrased and proven in terms of factorial Schur polynomials <cit.>, a certain generalisation of Schur polynomials, but we will not attempt to do it here. In some sense, some of our results constitute a factorial Schur generalisation of the Schur process and Schur dynamics <cit.>. Instead, we have chosen to emphasize the inhomogeneous Toeplitz-like matrix/operator perspective, which seems to us better adapted to some of the probabilistic constructions we consider. For example, we do not know whether the formulae in Section <ref> for the so-called two-level couplings have a natural interpretation in terms of factorial Schur polynomials, or any symmetric functions for that matter, but it would be very interesting if one existed. Finally, in <cit.> a general fully inhomogeneous six vertex model and its associated symmetric functions, which degenerate to the factorial Schur polynomials, were studied in detail. However, as far as we can tell, the results in our work do not follow as degenerations of results from <cit.>. In fact, the paper <cit.> considers a static model which is not evolving in time. It would be interesting though to understand whether and how the models here are related to the vertex model of <cit.>.

The main novel contributions of this work, we believe, are the following.
* We develop a framework, based on the inhomogeneous Toeplitz-like matrices 𝖳_𝐟 with (possibly matrix) symbol 𝐟 that we introduce, which allows us to prove intertwining relations between semigroups corresponding to non-intersecting paths and obtain explicit formulae for the distributions and correlation functions, including the explicit computation of correlation kernels, of the various types of dynamics we study in this paper, see Sections <ref>, <ref>, <ref> for an illustration of some of the results and Sections <ref>, <ref>, <ref>, <ref>, <ref> and <ref> for more details on the techniques.
* We introduce couplings for the intertwined semigroups of non-intersecting paths mentioned above which have their origin in coalescing Bernoulli and geometric walks that are inhomogeneous in space and time, see Section <ref>.
In the case of geometric walks, which is the most subtle, the dynamics coming from this coupling are different from what one gets if one uses a certain general recipe for couplings of intertwined semigroups developed by Borodin and Ferrari <cit.>. In particular, they have the desirable property that the projection on the left edge of the array is Markovian and is a kind of inhomogeneous-space (and time) geometric TASEP, unlike in <cit.> where the left-edge projection is not Markov, see Section <ref> for more details.* We obtain, via the use of intertwining relations, by developing analogues for inhomogeneous space of ideas of Dieker and Warren <cit.>[The paper <cit.> deals with the level/particle inhomogeneous setting.], explicit formulae for the transition probabilities of the autonomous TASEP-like and pushTASEP-like particle systems in inhomogeneous space and time on the left and right edge respectively of our dynamics on arrays. Moreover, from the very structure of these formulae we can show, essentially without any computations at all, that the distributions of these particle systems, starting from any deterministic initial condition, are marginals of explicit (signed) measures with determinantal correlation functions. See Sections <ref> and <ref> for more details.* We discover a novel duality relation for push-block dynamics in interlacing arrays which maps inhomogeneities in space to inhomogeneities of the level of the array (inhomogeneities on the particles) and vice versa, see Sections <ref> and <ref>.We also prove a number of other results, which to some readers may be more interesting than the above, including the following. * We extend the work of Nordenstam <cit.>, which dealt with the case of the uniform weight, explaining how the dynamics of the shuffling algorithm on domino tilings of the Aztec diamond with completely general weights are explicitly connected to Bernoulli push-block dynamics on interlacing arrays (with a certain time-shift). We then, for a special class of weights, connect this back to our earlier probabilistic results. See Sections <ref> and <ref> for more details.* We connect the measures coming from our dynamics on arrays to so-called coherent sequences of measures on a natural inhomogeneous generalisation of the classical Gelfand-Tsetlin graph <cit.> and prove that all these measures are extreme points in the convex set of coherent measures, see Sections <ref> and <ref>. The ultimate goal here would be to have a complete classification of such extreme points, see <cit.> for motivation for such problems, as was done in <cit.> for allied models, but this appears to be a difficult task. * In another direction, we prove that independent walks in inhomogeneous space, each with a different drift, under conditions on the relative strengths of the drifts, and which are conditioned to never intersect have explicit transition probabilities, see Sections <ref> and <ref>. We note that this model does not fall into the general framework of <cit.> and subsequent works for which the increments of the walks are not location-dependent. Moreover, this Markov process matches the evolution of a fixed level/row of certain dynamics in arrays. It would be of particular interest to be able[We cannot do this yet. However, we hope to return to this problem in future work.] 
to take all the walks to be identical since this would imply that the corresponding non-intersecting paths have a novel Gibbs resampling property in analogy to the Brownian Gibbs property <cit.>, but with Brownian motion replaced by general inhomogeneous walks. The Brownian Gibbs property has been a key tool in studying KPZ universality class models <cit.> and it would be interesting to see what could be done with the inhomogeneous-walk Gibbs property just alluded to.
* Using the explicit formulae of the correlation functions for the models we study, we prove two limit theorems. First, a short-time asymptotic result for the continuous-time dynamics we consider, see Sections <ref> and <ref>. In the limit we obtain the discrete Bessel determinantal point process <cit.> whose parameter depends on the inhomogeneous environment only through a kind of average. Second, a limit theorem (under certain technical conditions) for the bottom paths of N non-intersecting random walks in inhomogeneous space and time, with fixed starting and end points, see Sections <ref> and <ref>. We strongly believe that, using our exact formulae along with a more substantial analysis, more significant asymptotic theorems could be established, also in other scaling regimes, and the above results can be viewed as a proof of concept for this.

In the next section we introduce the models we study, state our main results precisely and discuss some of the ideas and techniques and relevant literature in more detail.

Acknowledgements. I am very grateful to Sunil Chhita for very useful discussions on the shuffling algorithm for the Aztec diamond. I am very grateful to Mustazee Rahman for very useful discussions on the transition probabilities and determinantal point processes for the edge particle systems and on how to invert certain matrices in Section <ref>. A few of the ideas presented in that section arose from the discussions with Mustazee and I thank him very much. I am very grateful to Alexei Borodin for first telling me about the factorial Schur polynomials and for comments and pointers to the literature.

§ MODELS AND RESULTS

§.§ General notation and terminology

We introduce some notation and terminology that will be used throughout the paper. Let ℝ_+=[0,∞), ℤ_+={0,1,2,…} and, for x_1,x_2∈ℤ_+, let ⟦x_1,x_2⟧={x_1,x_1+1,…,x_2}. Let 1_𝒜 be the indicator function of a set or event 𝒜. The most basic data in this work is a sequence/field of inhomogeneities 𝐚=(a_x)_x∈ℤ_+ on ℤ_+. We assume throughout the paper that inf_x∈ℤ_+a_x>0 and sup_x∈ℤ_+ a_x <∞. Associated to 𝐚 we define the sequence of "characteristic polynomials" p_x(z)=p_x(z;𝐚) indexed by x∈ℤ_+, having degree x, by p_0(z)=1 and p_x(z)=p_x(z;𝐚)=∏_k=0^x-1(1-z/a_k). We define the Weyl chamber 𝕎_N={𝐱=(x_1,x_2,…,x_N)∈ℤ_+^N:x_1 < x_2 <⋯ < x_N}. We say that 𝐱∈𝕎_N and 𝐲∈𝕎_N+1 interlace and denote this by 𝐱≺𝐲 if y_1≤ x_1 < y_2 ≤ x_2 < ⋯ < y_N≤ x_N < y_N+1. We also say that 𝐱∈𝕎_N interlaces with 𝐲∈𝕎_N and, abusing notation, still write 𝐱≺𝐲 if x_1 ≤ y_1 < x_2 ≤ y_2 < ⋯ <x_N ≤ y_N. We define interlacing arrays of length N by 𝕀𝔸_N={(𝐱^(1),𝐱^(2),…,𝐱^(N))∈𝕎_1×𝕎_2×⋯×𝕎_N:𝐱^(i)≺𝐱^(i+1), for i=1,…,N-1}. We call 𝐱^(N) the N-th level of the array and individual coordinates 𝗑_i^(N) particles. Denote the set of infinite interlacing sequences (𝐱^(N))_N≥ 1, 𝐱^(1)≺𝐱^(2)≺𝐱^(3)≺⋯, by 𝕀𝔸_∞. The following distinguished configuration will make its appearance often in the text. Define the fully-packed configuration in 𝕀𝔸_∞ as the configuration (𝐱^(N))_N≥ 1 with 𝐱^(N)=(0,1,…,N-1) for all N≥ 1.
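For concreteness, the basic objects just defined can be encoded in a few lines of Python; the particular inhomogeneity values used below are purely illustrative, and all helper names are our own.

```python
import numpy as np

a = np.array([1.0, 2.0, 1.5, 0.7, 1.2, 2.5, 1.0, 0.9])   # illustrative inhomogeneity sequence a_x > 0

def p(x, z, a=a):
    """Characteristic polynomial p_x(z) = prod_{k=0}^{x-1} (1 - z/a_k); p_0 = 1."""
    return np.prod([1 - z / a[k] for k in range(x)]) if x > 0 else 1.0

def interlace(x, y):
    """True if x in W_N and y in W_{N+1} interlace: y_1 <= x_1 < y_2 <= ... <= x_N < y_{N+1}."""
    N = len(x)
    assert len(y) == N + 1
    return all(y[i] <= x[i] < y[i + 1] for i in range(N))

fully_packed = [list(range(N)) for N in range(1, 5)]       # levels (0), (0,1), (0,1,2), ...
print(p(3, 0.0), interlace([1, 4], [0, 2, 6]), fully_packed[2])
# p_x(0) = 1 for every x; (1,4) interlaces with (0,2,6); third level of the fully-packed configuration
```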
Given a function f holomorphic in the half plane {z∈ℂ:ℜ(z)>-ϵ} for some ϵ>0, define 𝖳_f(x,y)=𝖳^𝐚_f(x,y) by
𝖳_f(x,y)=-1/2πi 1/a_y∮_𝖢_𝐚p_x(w)/p_y+1(w)f(w)dw, x,y ∈ℤ_+,
where the positively oriented contour 𝖢_𝐚⊂{z∈ℂ:ℜ(z)>-ϵ} is assumed to encircle all the points {a_x}_x∈ℤ_+ (note that it does not contain any poles of f by the analyticity assumption in the half plane). Observe that this is a special case of 𝒯_f^𝐮,𝐯 from (<ref>) with u_k(z)=p_k(z), v_k(z)=-a_kp_k+1(z), ℭ_𝐮,𝐯=𝖢_𝐚. When a_x=1, for all x ∈ℤ_+, then we can pick 𝖢_𝐚 to be the circle {z∈ℂ:|1-z|=1} and [𝖳_f(x,y)]_x,y∈ℤ_+ is easily seen to be the Toeplitz matrix with symbol f(1-z). We will be mainly interested in three particular choices of f having probabilistic significance, f(z)=e^-tz or (1-α z) or (1+β z)^-1, see Section <ref> for more details. (𝖳_e^-tz)_t≥ 0 is the transition semigroup of a pure-birth chain with jump rate a_x when at location x∈ℤ_+. 𝖳_1-α z is the single-step transition probability of a Bernoulli random walk with probability of moving to x+1, when at x, given by α a_x and complementary probability (1-α a_x) for staying at x. 𝖳_(1+β z)^-1 is the transition probability of an inhomogeneous geometric walk with single-step probability to go from x to y≥ x given by (1+β a_y)^-1∏_k=x^y-1β a_k (1+β a_k)^-1. We normally denote a random configuration in 𝕀𝔸_∞, whose law will be explicitly specified or be clear from context, by (𝖷_i^(n))_1≤ i ≤ n;n≥ 1. Similarly, we normally denote a stochastic process in 𝕀𝔸_∞, either in discrete or continuous time, whose dynamics will be explicitly specified or be clear from context, by (𝖷_i^(n)(t);t ≥ 0)_1≤ i ≤ n;n≥ 1. At certain places, when there is no risk of confusion, we will drop the explicit dependence on time t to ease notation. We will always, unless otherwise explicitly stated, denote the underlying law of the various random elements we encounter (how they are dependent on each other will always be specified or be clear from context) by ℙ.

Warning on notation. In two sections of this introductory part, Sections <ref> and <ref> only, and Sections <ref> and <ref> only, where the corresponding proofs can be found, it will be preferable, for reasons explained therein, to label levels of arrays starting from 0 (which has one particle) instead of 1 and coordinates of 𝕎_N starting from subscript 0 instead of 1. In particular, with this convention, the configuration of the N+1 particles at level N, which is in 𝕎_N+1, will have coordinates (x_0^(N),x_1^(N),…,x_N^(N)). We will remind the reader of this at the relevant places.

§.§ Space-time inhomogeneous dynamics and correlation kernels

We introduce three types of Markov dynamics in 𝕀𝔸_∞, one in continuous time and two in discrete time. We call these the continuous-time pure-birth push-block dynamics, sequential-update Bernoulli dynamics and Warren-Windridge geometric dynamics. These generalise the models considered in <cit.> to the space-inhomogeneous setting. Using our methods it would be possible to analyse other types of dynamics in discrete time as well, including inhomogeneous generalisations of the sequential-update Borodin-Ferrari geometric dynamics and parallel-update Bernoulli, see Section <ref> for more details. First, we define the dynamics in continuous time. This model first appeared in <cit.>. Each particle has an independent exponential clock with rate a_x if the particle is at spatial location x. When the clock of the particle rings, say of 𝖷_i^(n)=x, then it will attempt to jump to x+1.
If 𝖷_i^(n-1)=x then the move is suppressed, for otherwise the interlacing would break down. We say the particle is blocked. If 𝖷_i^(n-1)>x then 𝖷_i^(n) moves to x+1. If 𝖷_i+1^(n+1)=x+1, then 𝖷_i+1^(n+1) is instantaneously moved to x+2 (we say it is pushed) so that the interlacing remains true. This pushing is propagated instantaneously to higher levels. See Figure <ref> for an illustration. Finally, we say that associated to these dynamics we have the function f(z)=e^-tz.

We now introduce the discrete-time dynamics. It is easy to see that we can also define these using certain recursive equations, which may be clearer to some readers compared to the descriptive definitions below, and we will make this explicit shortly. These dynamics are in discrete time. Each time-step depends on an additional parameter 0≤α≤ (sup_x a_x)^-1 (which can change for each time-step). For each time-step, locations are updated sequentially from lower levels to higher levels and from left to right within each level. Particle 𝖷_1^(1) moves as an inhomogeneous Bernoulli random walk with probability to go from x to x+1 given by α a_x and complementary probability 1-α a_x to stay at x. Suppose we have updated the first n-1 levels and the first i-1 particles on that level. Particle 𝖷_i^(n), currently at location x, checks if 𝖷_i^(n-1)=x, in which case it is blocked and we move on to update particle 𝖷_i+1^(n). Otherwise, it moves as an inhomogeneous Bernoulli walk with the above-mentioned probabilities. If 𝖷_i+1^(n+1)=x+1 then it is instantaneously moved to x+2 so that the interlacing remains and this pushing is propagated to higher levels. Particles that have been pushed do not attempt to move again (for example in the above scenario 𝖷_i+1^(n+1) does not attempt to move again when we update level n+1). See Figure <ref> for an illustration. Finally, we say that associated to this single time-step of sequential-update Bernoulli dynamics with parameter α we have the function f(z)=1-α z.

There is an alternative way to view the pushing move which may be more natural. There is no instantaneous pushing but instead, when we try to update particle 𝖷_i^(n) at location x, in addition to checking if 𝖷_i^(n-1)=x, in which case it is blocked, we also check if 𝖷_i-1^(n-1)=x, in which case 𝖷_i^(n) is moved to x+1. If neither possibility occurs, then 𝖷_i^(n) simply moves as an inhomogeneous Bernoulli random walk with the probabilities above. It is clear that the resulting configuration at the end of the time-step is the same as the one obtained from Definition <ref>.

Particles move in discrete time and each time-step depends on an additional parameter β≥ 0 (which can change for each time-step). Particle locations are updated sequentially from lower levels to higher levels and from left to right. Particle 𝖷_1^(1) moves as an inhomogeneous geometric random walk with transition probability to go from x to x'≥ x given by (1+β a_x')^-1∏_k=x^x'-1β a_k(1+β a_k)^-1. Suppose 𝖷_1^(1) moves to x'. We then update the next level. 𝖷_1^(2)=y will attempt to jump to location y' with probability (1+β a_y')^-1∏_k=y^y'-1β a_k(1+β a_k)^-1. However, any jumps past location x will be suppressed. More precisely, with probability ∑_y'=x^∞ (1+β a_y')^-1∏_k=y^y'-1β a_k(1+β a_k)^-1 the new location of 𝖷_1^(2) will be x. We note that 𝖷_1^(2) is blocked by the location x of 𝖷_1^(1) at the beginning of the time-step (and not its updated location x'). We then update 𝖷_2^(2) which is at location z.
If z>x', then 𝖷_2^(2) attempts to move to z'≥ z with inhomogeneous geometric probability (1+β a_z')^-1∏_k=z^z'-1β a_k(1+β a_k)^-1. If however x'≥ z then 𝖷_2^(2) is moved/pushed to the intermediate position x'+1. From there it attempts to move to z'≥ x'+1 with probability (1+β a_z')^-1∏_k=x'+1^z'-1β a_k(1+β a_k)^-1. Higher levels are updated in the same fashion. Note that the push-block interactions ensure the process stays in 𝕀𝔸_∞. See Figure <ref> for an illustration of the dynamics. Finally, we say that associated to a single time-step of Warren-Windridge dynamics with parameter β we have the function f(z)=(1+β z)^-1.

We note that in discrete time we can, and will, consider Markov processes which move at each time-step with either sequential-update Bernoulli or Warren-Windridge geometric dynamics and this will be conveniently encoded in the sequence of associated functions (f_s,s+1(z))_s=0^M, with possibly M=∞. We also observe that in all three types of dynamics the evolution of the first N levels in 𝕀𝔸_N is autonomous. In the rest of the paper we will consider generalisations/variations of these dynamics where the jump rates or transition probabilities of individual particles can depend in a more complicated way on time, space location and level of the particle. However, the interactions between particles will always be of exactly the form described in Definitions <ref>, <ref> and <ref> above.

Recursive equations for discrete-time dynamics
We now write down recursive equations describing the dynamics from Definitions <ref> and <ref>. We will not make explicit use of these equations in the proofs but it is instructive to present them. In particular, they make the connection to various first/last passage percolation models corresponding to the one-dimensional Markovian projections on the left and right edge of the array that we will discuss next much clearer. We need some more notation. For 0≤α≤ (sup_x a_x)^-1, define the random field 𝖡_α=(𝖡_α(x))_x∈ℤ_+ by: for x∈ℤ_+, the 𝖡_α(x) are independent and take values in {0,1} with probabilities
ℙ(𝖡_α(x)=1)=1-ℙ(𝖡_α(x)=0)=α a_x.
For β≥ 0, define the random field 𝖦_β=(𝖦_β(x))_x∈ℤ_+ by: for x∈ℤ_+, the 𝖦_β(x) are independent and take values in ℤ_+ with probabilities
ℙ(𝖦_β(x)=n)=(1+β a_x+n)^-1∏_k=x^x+n-1β a_k (1+β a_k)^-1.
For each triple (i,n,t) with 1≤ i ≤ n, n≥ 1, t∈ℤ_+ and parameters α,β satisfying the conditions above we denote by 𝖡^i,(n)_t,α and 𝖦_t,β^i,(n) an independent copy of the fields 𝖡_α and 𝖦_β respectively. Suppose we are given parameters (α_t)_t∈ℤ_+ and (β_t)_t∈ℤ_+ as above. Then, a process (𝖷_i^(n)(t);t≥ 0)_1≤ i ≤ n; n≥ 1 in discrete time following the sequential-update Bernoulli dynamics of Definition <ref> with the t-th step (namely the transition from time t to t+1) taken with parameter α_t satisfies, and is determined by, the recursive equations:
𝖷_i^(n)(t+1)=min{𝖷_i^(n-1)(t+1), max{𝖷_i^(n)(t)+𝖡_t,α_t^i,(n)(𝖷_i^(n)(t)),𝖷_i-1^(n-1)(t+1)+1}}.
Similarly, a process (𝖷_i^(n)(t);t≥ 0)_1≤ i ≤ n; n≥ 1 in discrete time following the Warren-Windridge geometric dynamics of Definition <ref> with the t-th step taken with parameter β_t satisfies, and is determined by, the recursive equations:
𝖷_i^(n)(t+1)=min{𝖷_i^(n-1)(t),max{𝖷_i^(n)(t),𝖷_i-1^(n-1)(t+1)+1}+𝖦_t,β_t^i,(n)(max{𝖷_i^(n)(t),𝖷_i-1^(n-1)(t+1)+1})}.
Of course, we can also take a mixture of Bernoulli and geometric steps and the recursive equations are modified in the obvious way.
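For readers who prefer code to prose, the following Python sketch simulates the sequential-update Bernoulli and Warren-Windridge geometric dynamics directly from the recursive equations displayed above, sampling the driving fields on demand. The truncation to finitely many levels, the particular inhomogeneity function a_x and the parameter values are illustrative choices only, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
a = lambda x: 1.0 + 0.5 * np.sin(x)          # illustrative inhomogeneity a_x > 0

def bernoulli_step(X, alpha):
    """One sequential-update Bernoulli step; X[n][i] = position of particle i+1 on level n+1."""
    Xnew = [row[:] for row in X]
    for n in range(len(X)):                   # levels, bottom to top
        for i in range(n + 1):                # particles, left to right
            jump = X[n][i] + (rng.random() < alpha * a(X[n][i]))
            push = Xnew[n - 1][i - 1] + 1 if (n > 0 and i > 0) else -10**9
            block = Xnew[n - 1][i] if (n > 0 and i < n) else 10**9
            Xnew[n][i] = min(block, max(jump, push))
    return Xnew

def geometric_step(X, beta):
    """One Warren-Windridge geometric step (blocking uses the OLD position of the level below)."""
    def geom_from(x):                         # sample the inhomogeneous geometric jump started at x
        while rng.random() < beta * a(x) / (1 + beta * a(x)):
            x += 1
        return x
    Xnew = [row[:] for row in X]
    for n in range(len(X)):
        for i in range(n + 1):
            start = max(X[n][i], Xnew[n - 1][i - 1] + 1) if (n > 0 and i > 0) else X[n][i]
            block = X[n - 1][i] if (n > 0 and i < n) else 10**9
            Xnew[n][i] = min(block, geom_from(start))
    return Xnew

n_max = 5
X = [list(range(N)) for N in range(1, n_max + 1)]     # fully-packed initial configuration
for _ in range(3):
    X = bernoulli_step(X, alpha=0.3)
    X = geometric_step(X, beta=0.4)
print(X)   # interlacing is preserved by construction
```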
We note that the recursive equations (<ref>), (<ref>) above and in particular their marginals on the left and right edge (<ref>), (<ref>), (<ref>), (<ref>) below are reminiscent but different[In the space-homogeneous case the recursive equations (<ref>), (<ref>), (<ref>), (<ref>) for the left and right edge projections do boil down to the RSK recursions, see <cit.>.] to the recursive equations that come up in the Robinson-Schensted-Knuth (RSK) <cit.> correspondence. This combinatorial algorithm can also be used to study certain first/last passage percolation models in inhomogeneous space in a dynamical way in terms of interacting particles, see <cit.>. However, with dynamics driven by RSK the inhomogeneities of the environment always become time and particle/level inhomogeneities for the particle system. In particular, particles are not moving in inhomogeneous space as in the models we can study with our methods. One-dimensional Markov projections Each of the three dynamics we have considered has two one-dimensional Markovian projections when restricted to the left and right edge of the array, namely to coordinates (x_1^(n))_n≥ 1 and (x_n^(n))_n≥ 1 respectively.More precisely, the projection of the continuous-time dynamics on the right edge is given by inhomogeneous space push-TASEP <cit.> in continuous time. The projection to the left edge is an inhomogeneous space zero-range process <cit.>. Under the coordinate shift (x_1^(n))_n ≥ 1↦(x_1^(n)-n+1)_n ≥ 1,it becomes equivalent to an inhomogeneous variant of TASEP (but not the standard and much more well-known inhomogeneous TASEP of <cit.>).For the sequential-update Bernoulli dynamics the right edge is given by the inhomogeneous space discrete-time Bernoulli push-TASEP. Its homogeneous version was studied in <cit.>. The projection on the left-edge is a sequential-update zero range process in discrete time with inhomogeneous Bernoulli jumps. Finally, the recursive equations for the left and right edge can be readily read out from (<ref>) (observe that the equations are closed as in they do not depend on any of the other coordinates of the array),𝖷_1^(n)(t+1) =min{𝖷_1^(n-1)(t+1),𝖷_1^(n)(t)+𝖡_t,α_t^1,(n)(𝖷_1^(n)(t))}, 𝖷_n^(n)(t+1) =max{𝖷_n-1^(n-1)(t+1)+1,𝖷_n^(n)(t)+𝖡_t,α_t^n,(n)(𝖷_n^(n)(t))}.For the Warren-Windridge geometric dynamics the right edge is a geometric push-TASEP in inhomogeneous space with sequential update. The projection on the left edge is a parallel-update (not sequential!) zero-range process with inhomogeneous geometric jumps which under the above coordinate shift it becomes a parallel-update TASEP with inhomogeneous geometric jumps. The recursive equations for the left and right edge are easily seen from (<ref>) to be given by (again observe that they are closed):𝖷_1^(n)(t+1) =min{𝖷_1^(n-1)(t),𝖷_1^(n)(t)+𝖦_t,β_t^1,(n)(𝖷_1^(n)(t))}, 𝖷_n^(n)(t+1) =max{𝖷_n^(n)(t),𝖷_n-1^(n-1)(t+1)+1}+𝖦_t,β_t^n,(n)(max{𝖷_n^(n)(t),𝖷_n-1^(n-1)(t+1)+1}).The left-edge particle systems, in discrete time, are reminiscent, but as far as we can tell not quite the same, with the doubly geometric inhomogeneous corner growth model studied in <cit.>. The following theorem is our first main result.Suppose that, starting from the fully-packed configuration in 𝕀𝔸_∞, we perform M_1 steps of sequential-update Bernoulli dynamics with parameters α_1,…,α_M_1, M_2 steps of Warren-Windridge geometric dynamics with parameters β_1,…,β_M_2 and finally continuous-time pure-birth dynamics for time t. 
The parameters α_i,β_i satisfy 0≤α_i ≤ (sup_x∈ℤ_+a_x)^-1 and 0≤β_i <(sup_x∈ℤ_+a_x-inf_x∈ℤ_+a_x)^-1. We denote the resulting random configuration in 𝕀𝔸_∞ by (𝖷_i^(n))_1 ≤ i ≤ n;n≥ 1. Then, for any m≥ 1, and pairwise distinct points (n_1,x_1),…,(n_m,x_m) in ℕ×ℤ_+ we have:
ℙ(∃ j_1,…,j_m such that 𝖷_j_i^(n_i)=x_i for i=1,…,m)=det(𝔎_f[(n_i,x_i);(n_j,x_j)])_i,j=1^m,
where the correlation kernel 𝔎_f is given by
𝔎_f[(n_1,x_1);(n_2,x_2)] =-1_n_2>n_1 1/a_x_1 1/2πi∮_𝖢_𝐚,0p_x_2(w)/p_x_1+1(w)w^n_2-n_1dw-1/a_x_1 1/(2πi)^2∮_𝖢_𝐚,0dw∮_𝖢_0du p_x_2(u)f(w)/p_x_1+1(w)f(u) w^n_1/u^n_2 1/w-u,
with the function f given by
f(w)=∏_i=1^M_1(1-α_iw)∏_i=1^M_2(1+β_iw)^-1exp(-tw)
and where the contours 𝖢_𝐚,0 and 𝖢_0 are as explained in the caption of Figure <ref>.

The order in which we perform the different types of dynamics is actually not important. For example, we can perform in discrete time either Bernoulli or geometric steps in any order and similarly the continuous-time dynamics. The law of the resulting configuration will be the same. This is not at all obvious a priori and is a consequence of our results. The statement of the theorem is of course equivalent to the point process {𝖷_i^(n)} being determinantal with correlation kernel 𝔎_f, see <cit.>.

If we start the dynamics from certain more general initial conditions then the resulting point process will still have determinantal correlation functions. This follows from the results in the sequel. An explicit computation of the correlation kernel is more complicated however and we will not do it in this paper.

The upper bound restriction on the β_i parameters is technical and we believe can be removed. Of course, in the homogeneous case it is vacuous. On the other hand, the restriction on the α_i parameters is necessary in order for the corresponding inhomogeneous Bernoulli jump to be well-defined.

Our next result is about the evolution of the projection on any single row of the array (𝖷_i^(N)(t);t≥ 0), either in discrete or continuous time, and its correlations in time. Consider a process (𝖷_k^(n)(t);t ≥ 0)_1≤ k ≤ n; n≥ 1 in 𝕀𝔸_∞, starting from the fully-packed configuration, either in discrete or continuous time t evolving as follows:
* In discrete time it moves with sequential-update Bernoulli or Warren-Windridge geometric steps with the k-th step determined by a function f_k,k+1(z) which is either of the form (1-α_i_kz) or (1+β_i_kz)^-1 satisfying 0≤α_i ≤(sup_x∈ℤ_+ a_x)^-1 and 0≤β_i <(sup_x∈ℤ_+ a_x-inf_x∈ℤ_+a_x)^-1.
* In continuous time it moves with pure-birth push-block dynamics.
In discrete time we let f_s,t(z)=∏_i=s^t-1 f_i,i+1(z) and in continuous time f_s,t(z)=e^-(t-s)z.
Then, for any N≥ 1, the stochastic process (𝖷_k^(N)(t);t ≥ 0)_1≤ k ≤ N is a Markov process in 𝕎_N with transition probabilities 𝔓_s,t^(N)=𝔓_f_s,t^(N), from time s to time t, from 𝐱 to 𝐲, given by
𝔓_s,t^(N)(𝐱,𝐲)=det(∂_w^i-1p_y_j(w)|_w=0)_i,j=1^N/det(∂_w^i-1p_x_j(w)|_w=0)_i,j=1^N det(𝖳_f_s,t(x_i,y_j))_i,j=1^N.
Moreover, for any n≥ 1, and any pairwise distinct points (t_1,x_1),…, (t_n,x_n) in either ℤ_+ ×ℤ_+ or ℝ_+ ×ℤ_+, depending on whether time is discrete or continuous, we have
ℙ(∃ j_1,…,j_n such that 𝖷_j_i^(N)(t_i)=x_i for 1 ≤ i ≤ n)= det(𝒦_N[(t_i,x_i);(t_j,x_j)])_i,j=1^n
with the correlation kernel 𝒦_N given by
𝒦_N[(s,x_1);(t,x_2)]=-1_t>s𝖳_f_s,t(x_1,x_2)-1/a_x_2 1/(2 πi)^2∮_𝖢_𝐚,0 dw∮_𝖢_0 du p_x_1(u)f_0,t(w)/p_x_2+1(w)f_0,s(u) w^N/u^N 1/w-u,
where the contours 𝖢_𝐚,0 and 𝖢_0 must again satisfy the conditions in the caption of Figure <ref>.

The fact that 𝔓_s,t^(N) is a transition probability is not immediately obvious from (<ref>) and will be proven in the sequel.

For the homogeneous model, for continuous-time dynamics (Poisson in this case) or discrete-time Bernoulli-only steps, the following probabilistic interpretation of 𝔓_s,t^(N), with f_s,t(z)=e^-(t-s)z or f_s,t(z)=(1-z)^t-s, is known, see <cit.>. Namely, 𝔓_s,t^(N) are the transition probabilities of N independent Poisson processes or homogeneous Bernoulli walks respectively conditioned to never intersect. The case of geometric-only steps can be a little more subtle depending on what we mean by first collision/intersection time, see <cit.>, also Sections <ref> and <ref> for more details. By now there is a well-developed theory for studying collision times of random walks, equivalently exit times of random walks in cones/Weyl chambers, with general increments, see for example <cit.>. However, in these models the distribution of the increments does not depend on the position of the walk as in our setup. As far as we can tell, very little is known for space-inhomogeneous walks. We believe, but do not prove here, that for a generic inhomogeneity sequence 𝐚, 𝔓_s,t^(N) has an analogous interpretation as independent inhomogeneous walks conditioned to never intersect. We will prove instead, see Sections <ref> and <ref>, that if one adds different drifts to each walk, with some further assumptions on the relative strengths of the drifts (which make the problem easier), then a corresponding statement is true, see Theorem <ref>.

Proving that 𝔓_s,t^(N) are the transition probabilities of independent inhomogeneous walks conditioned to never intersect would imply that the non-intersecting paths associated to 𝔓_s,t^(N) have a novel Gibbs resampling property analogous to the Brownian Gibbs property, see <cit.>, which has been instrumental in the study of KPZ universality class models <cit.>, but with Brownian motion replaced by the one-dimensional walk with transition probability 𝖳_f_s,t.

In fact, the non-intersecting paths associated to the Markov process with transition probabilities 𝔓_s,t^(N) give rise to a determinantal point process for general deterministic initial conditions beyond (0,1,…,N-1). The explicit computation of the correlation kernel is an interesting problem. In the homogeneous case with only Bernoulli steps, this was done in <cit.> to prove a universality result for local statistics for non-colliding Bernoulli walks. This was subsequently employed in <cit.> to prove universality for local statistics of random uniformly distributed lozenge tilings.
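The determinantal formula for 𝔓_s,t^(N) lends itself to a numerical sanity check for small N. The Python sketch below evaluates 𝖳_f(x,y) by quadrature over a circle enclosing the (finitely many relevant) points a_k, assembles the ratio of determinants in the formula above, and verifies that, for a single Bernoulli step started from the fully-packed row (0,1,…,N-1), the resulting weights sum to one. The inhomogeneity sequence, contour radius and quadrature size are illustrative choices of ours, and the check is a sketch rather than a proof.

```python
import numpy as np
from math import factorial
from itertools import combinations

a = np.array([1.0, 1.7, 0.8, 1.3, 2.0, 0.9, 1.5, 1.1])   # illustrative inhomogeneities

def p_coeffs(x):
    """Coefficients (low to high degree) of p_x(w) = prod_{k<x}(1 - w/a_k)."""
    c = np.array([1.0])
    for k in range(x):
        c = np.append(c, 0.0) - np.append(0.0, c) / a[k]
    return c

def T(f, x, y, n_quad=4000):
    """T_f(x,y) = -(1/a_y)(1/2πi) ∮ p_x(w) f(w) / p_{y+1}(w) dw over a circle enclosing all a_k."""
    R = 2 * a.max()
    w = R * np.exp(2j * np.pi * np.arange(n_quad) / n_quad)
    px = np.polyval(p_coeffs(x)[::-1], w)
    py1 = np.polyval(p_coeffs(y + 1)[::-1], w)
    # dw = i·w·dθ cancels the 1/(2πi) up to a factor w, so the integral is the mean of integrand·w
    return float(np.real(-np.mean(px * f(w) / py1 * w) / a[y]))

def P(f, xs, ys):
    """Transition probability from the determinantal formula stated in the theorem above."""
    N = len(xs)
    D = lambda zs: np.linalg.det([[factorial(i) * (p_coeffs(z)[i] if i <= z else 0.0) for z in zs]
                                  for i in range(N)])
    Tm = np.array([[T(f, xi, yj) for yj in ys] for xi in xs])
    return D(ys) / D(xs) * np.linalg.det(Tm)

alpha, N = 0.25, 3
f = lambda w: 1 - alpha * w                      # one Bernoulli step
xs = tuple(range(N))                             # fully-packed N-th row
total = sum(P(f, xs, ys) for ys in combinations(range(N + 1), N))
print(round(total, 6))                           # expected ≈ 1.0 if the kernel is a transition probability
```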
Observe that, if we identify the functions f_0,T(w) and f(w), from Theorem <ref> and (<ref>) respectively, we have
𝒦_N[(T,x_1);(T,x_2)]=𝔎_f[(N,x_2);(N,x_1)].
In particular, these correlation kernels give rise to the same point process on ℤ_+, as they should (which is clear from the probabilistic construction of the corresponding point processes).

We strongly believe that the theorems above can be used to investigate asymptotic questions regarding these dynamics. As an illustration of this, we prove next certain short-time asymptotics for the process (𝖷_i^(N)(t);t≥ 0)_1≤ i ≤ N in continuous time, equivalently, by virtue of Theorem <ref>, for the process of non-intersecting paths with transition semigroup (𝔓^(N)_exp(-t))_t≥ 0. These asymptotics work for any inhomogeneity sequence 𝐚 as long as the average lim_N→∞ N^-1∑_k=0^N-1 a_k exists. The limit process is given by the discrete Bessel determinantal point process <cit.> whose parameter only depends on 𝐚 through the average above. We do not have an intuitive reason explaining why this averaging phenomenon happens for times of order N^-1 but it would be interesting if one exists. For example, in other scaling regimes the limit should depend on 𝐚 in a more complicated way and the asymptotic analysis will also be more involved.

Consider (𝖷_i^(n)(t);t≥ 0)_1≤ i ≤ n; n≥ 1 following the continuous-time pure-birth push-block dynamics in 𝕀𝔸_∞ starting from the fully-packed configuration. Assume that the following average a̅=lim_N→∞ N^-1∑_x=0^N-1a_x exists and let ζ>0. Then, we have the following convergence in distribution for all m≥ 1,
(𝖷_N^(N)(ζ/N)-N,𝖷_N-1^(N)(ζ/N)-N,…,𝖷_N-m+1^(N)(ζ/N)-N)d⟶(𝔅_1^ζa̅,𝔅_2^ζa̅,…,𝔅_m^ζa̅), as N →∞,
where, for σ > 0, 𝔅_1^σ>𝔅_2^σ>𝔅_3^σ>⋯ are the ordered points of the discrete Bessel determinantal point process 𝔅^σ on ℤ. This point process satisfies, and is determined by, for all n≥ 1, and x_1,…,x_n ∈ℤ,
ℙ(𝔅^σ contains x_1,…,x_n)=det(𝐉_σ(x_i,x_j))_i,j=1^n
where the correlation kernel 𝐉_σ(x,y) is given by, with x,y∈ℤ,
𝐉_σ(x,y)=1/(2πi)^2∮_|z|=1^-∮_|v|=1^+z^x/v^y+1e^z^-1-v^-1+σ v-σ z1/v-zdz dv.
Whether this model has a free fermion version, which presumably would give a TASEP-like model with three sequences of parameters, and whether this would be related to any of the inhomogeneous models of the present paper is an interesting question.]. Introducing particle-dependent parameters allows us, at least for a certain type of question, to go further in our analysis of the model, see Theorem <ref> and the discussion in Remark <ref>.We use the abbreviations pb, B, g for pure-birth, Bernoulli, geometric and the notation ∙∈{pb, B, g}. Suppose we are given a sequence γ=(γ_n)_n=1^∞ which satisfies[These parameter ranges ensure that the jump rates/probabilities encountered are well-defined. They are not optimal and can be extended. However, we stick to them for simplicity.], depending on the type of dynamics to be defined next,γ_n≥ 0,if ∙=pb, 0<γ_n≤ 1, if ∙=B, γ_n ≥ 1, if ∙=g,∀ n ≥ 1. We denote by (𝖷^∙(t);t≥ 0)=(𝖷_k^(n),∙(t);t ≥ 0)_1≤ k ≤ n; n ≥ 1 the stochastic process in 𝕀𝔸_∞ which evolves according to the following dynamics, in either continuous or discrete time, depending on the choice of ∙∈{pb, B, g}: * For ∙=pb, particles at level n at position x jump (in continuous time) to x+1 with rate a_x+γ_n, with interactions between particles being exactly as in Definition <ref>.* For ∙=B, particles at level n, at position x, jump (in discrete time) to x+1 with probability γ_n a_x+1-γ_n and stay at x with complementary probability γ_n-γ_n a_x, with interactions between particles being exactly as in Definition <ref>.* For ∙=g, particles at level n, at position x, jump (in discrete time) to y≥ x,with probability (γ_n+γ_na_y)^-1∏_k=x^y-1(γ_n(1+a_k)-1)(γ_n+γ_n a_k)^-1, with interactions between particles being exactly as in Definition <ref>.Here, we assume that the parameter sequence γ satisfies the corresponding conditions (<ref>) depending on the choice of ∙∈{pb, B, g}.Obviously the process (𝖷^∙(t);t≥ 0) depends on γ but we suppress it from the notation as we do with 𝐚. Write 𝖳_t^∙ for 𝖳_f_t^∙ with f_t^pb(z)=exp(-tz) or f_t^B=(1-z)^t or f_t^g(z)=(1+z)^-t. The following is the main result of this section.Consider the stochastic process (𝖷^∙(t);t≥ 0) in 𝕀𝔸_∞from Definition <ref> initialised from the fully-packed configuration. We assume that the sequence γ satisfies (<ref>) and moreover if ∙=B then sup_x∈ℤ_+a_x≤ 1 or if ∙=g then sup_x∈ℤ_+a_x-inf_x∈ℤ_+a_x<1. Then, for any N≥ 1, the stochastic process (𝖷_k^(N),∙(t);t ≥ 0)_1≤ k ≤ N is a Markov process in 𝕎_N with time-homogeneous transition semigroup (𝒫_t^γ,∙,N)_t≥ 0 determined by its explicit kernel,𝒫_t^γ,∙,N(𝐱,𝐲)=∏_j=1^N1/c_t,γ_j^∙×(h_γ_i^∙(y_j))_i,j=1^N/(h_γ_i^∙(x_j))_i,j=1^N(𝖳_t^∙ (x_i,y_j))_i,j=1^N,where the functions h_γ^∙ are given byh_γ^pb(x)=p_x(-γ),h_γ^B(x)=p_x(1-γ^-1), h_γ^g(x)=p_x(γ^-1-1),and the constants c_t,γ^∙ by,c_t,γ^pb=e^tγ, c_t,γ^B=γ^-t, c_t,γ^g=γ^t.In particular, from (<ref>) we obtain that the distribution of the projection on the N-th level of the array (𝖷^(N),∙(t);t≥ 0), as a process, is symmetric in the parameters γ_1,γ_2,…,γ_N.The fact that the distribution of a single row (𝖷^(N),∙(t);t≥ 0) is symmetric in the level parameters γ_1,γ_2,…,γ_N is non-trivial and we do not have an intuitive probabilistic reason explaining why this is true[Of course, most of these models are directly or indirectly related to symmetric functions and thus one may expect some sort of symmetry in the parameters to make its appearance but its exact form is hard to predict.]. 
An analogous result holds for interacting Brownian motions with different drifts in interlacing arrays, see for example <cit.> and the references therein.From our proof of Theorem <ref> it will also follow, although we will not make this explicit, that the underlying point process is determinantal <cit.>. Computing the correlation kernel explicitly is an interesting question that we will not pursue further in this paper. We now give a more probabilistic interpretation of the semigroup (𝒫_t^γ,∙, N)_t≥ 0. We require a definition which will also be useful later on.We define 𝖳_t,γ^∙ for ∙∈{pb, B, g}, with γ satisfying (<ref>), as follows: 𝖳_t,γ^pb is the transition probability in continuous time t of a pure-birth chain with jump rate a_k+γ at location k, while in the Bernoulli and geometric cases it is the transition probability in discrete time t of the walk with single-step transition probabilities given by,𝖳_1,γ^B(x,y) = (γ a_x+1-γ) 1_y=x+1+(γ-γ a_x)1_y=x, 𝖳_1,γ^g(x,y) =1/γ(1+a_y)∏_k=x^y-1γ(1+a_k)-1/γ(1+a_k)1_y≥ x.We then have:Let N≥ 1 be fixed and write γ=(γ_1,…,γ_N). Consider the stochastic process (𝖷^∙_γ(t);t≥ 0)=(𝗑_γ_1^∙(t),…,𝗑_γ_N^∙(t); t≥ 0), with 𝖷_γ(0)=𝐱∈𝕎_N, with coordinates (𝗑_γ_i^∙(t);t≥ 0) being independent and having transition probabilities (𝖳_t,γ_i^∙)_t≥ 0. Assume that the parameters γ=(γ_1,…,γ_N), in addition to (<ref>), satisfy the conditions, for i=1,…,N-1,γ_i+1-γ_i >sup_k∈ℤ_+ a_k-inf_k∈ℤ_+a_k,if ∙=pb,γ_i+1/γ_i <1-sup_k∈ℤ_+ a_k/1-inf_k∈ℤ_+a_k,if ∙=B, γ_i+1/γ_i >1+sup_k∈ℤ_+a_k/1+inf_k∈ℤ_+ a_k,if ∙=g.Consider the first collision or intersection time τ_col^∙ defined byτ_col^∙=inf{t>0:𝖷^∙_γ(t-)⊀𝖷^∙_γ(t)},where if ∙∈{B, g} then 𝖷^∙_γ(t-)=𝖷^∙_γ(t-1) while if ∙=pb then 𝖷^∙_γ(t-)=lim_s↑ t𝖷^∙_γ(s). Let (𝖷_γ^∙, n.c.(t);t≥ 0) be the process (𝖷^∙_γ(t);t≥ 0) conditioned on τ_col^∙=∞. Then, thetransition probabilities of (𝖷_γ^∙, n.c.(t);t≥ 0) are given by (𝒫_t^γ,∙, N)_t≥ 0 defined in (<ref>).The use of the terminology collision or intersection time for τ_col^∙ may not be immediately clear. It is easy to see that in case ∙=pb or ∙=B then τ_col^∙ is actually equal to the more standard definition:τ̃_col^∙=inf{t>0:𝖷_γ^∙(t)∉𝕎_N}.For geometric jumps τ_col^g andτ̃_col^g are however different, see <cit.>, and it is easy to see that τ_col^g≤τ̃_col^g. In fact, τ_col^g is in some sense more natural. If one views the geometric random walk paths as paths in the corresponding Lindstrom-Gessel-Viennot (LGV) graph <cit.>, see for example Figures <ref> and <ref>, then τ_col^g is exactly the first time these paths intersect.The prototypical result of the kind presented in Theorem <ref> is the explicit computation of the transition kernel of N independent Brownian motions with ordered drifts in the Weyl chamber conditioned to never intersect, see <cit.>. An analogous result holds for another special class of one-dimensional diffusions, the (generalised) squared Bessel process, see <cit.>. In the discrete-space setting (and discrete time), for the homogeneous model a_x≡ 1, the result above was first proven in <cit.>. As far as we can tell, this was the only known case previously. It is worth noting that the problem in discrete compared to continuous space is more amenable to analysis as we have proven the result for random walks with essentially arbitrary inhomogeneity 𝐚.The conditions (<ref>), (<ref>), (<ref>) are an artefact of our proof which goes via a coupling to the homogeneous case. 
We believe that for generic sequence 𝐚 these conditions are not necessary and moreover that we could then take the limit of the γ_i parameters being equal. In particular, this would answer the question raised in Remark <ref>.§.§ Transition kernels and determinantal processes for the edge particle systems for general initial condition In this section we consider the autonomous interacting particle systems, either in discrete or continuous time, (𝖷_1^(1)(t),𝖷_2^(2)(t),…,𝖷_N^(N)(t);t≥ 0) and (𝖷_1^(1)(t),𝖷_1^(2)(t),…,𝖷_1^(N)(t);t≥ 0) at the right and left edge of the interlacing array respectively, in the setting of Theorem <ref>. We follow the notations of that theorem. Since these systems are one-dimensional we actually only need a single index for each particle instead of both a subscript and superscript. However, we stick with the notation above to stress the connection with dynamics in arrays.It will be more convenient notation-wise to consider (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0) instead of (𝖷_1^(1)(t),𝖷_1^(2)(t),…,𝖷_1^(N)(t);t≥ 0). Towards this end, define the variation of the Weyl chamber 𝕎_N where coordinates can be equal by𝕎_N={𝐱=(x_1,x_2,…,x_N)∈ℤ_+^N:x_1 ≤ x_2≤⋯≤ x_N},and observe that this is the state space of (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0).Our interest here is to study these one-dimensional systems from arbitrary (deterministic) initial condition. Note that, the fully-packed configuration corresponds to the most special initial conditions (0,1,…,N-1) and (0,0,…,0) for (𝖷_1^(1)(t),𝖷_2^(2)(t),…,𝖷_N^(N)(t);t≥ 0) and (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0) respectively. There has been intense activity in the last few years around the study of such one-dimensional systems starting from general initial condition <cit.> beginning with the breakthrough work <cit.> on TASEP where the KPZ fixed point was first constructed, the central object in the KPZ universality class <cit.>. Despite this significant body of work on the topic, as far as we know, systems where the spatial motion of particles is inhomogeneous have only been studied, for general initial condition, in <cit.> where certain special interacting diffusions related to the classical ensembles of random matrices and classical orthogonal polynomials were considered. Theorem <ref> below is a first step in developing the analogous framework of <cit.> for ageneral class of one-dimensional space-inhomogeneous interacting particle systems.The following theorem, informally stated, is the main result of this section. The precise statement can be found in Theorems <ref> and <ref> in Section <ref>. In order to prove these results we make essential use of the framework built to establish Theorems <ref> and <ref> and some additional non-trivial arguments that we briefly comment on in Section <ref>. In the setting of Theorem <ref>, with the notations and assumptions therein, the transition kernels 𝔈_f_s,t,r^(N) and 𝔈_f_s,t,l^(N) of the autonomous systems (𝖷_1^(1)(t),𝖷_2^(2)(t),…,𝖷_N^(N)(t);t≥ 0) and (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0) on the right and left edge of the array respectively,𝔈_f_s,t,r^(N)(𝐱,𝐲) =ℙ((𝖷_1^(1)(t),…,𝖷_N^(N)(t))=𝐲|(𝖷_1^(1)(s),…,𝖷_N^(N)(s))=𝐱), 𝐱,𝐲∈𝕎_N, 𝔈_f_s,t,l^(N)(𝐱,𝐲) =ℙ((𝖷_1^(N)(t),…,𝖷_1^(1)(t))=𝐲|(𝖷_1^(N)(s),…,𝖷_1^(1)(s))=𝐱),𝐱,𝐲∈𝕎_N,are explicit. 
Moreover, for any 𝐱∈𝕎_N and 𝐲∈𝕎̅_N, and fixed s≤ t, the probability measures 𝔈_f_s,t,r^(N)(𝐱,·) and 𝔈_f_s,t,l^(N)(𝐲,·) can be written as marginals of certain signed measures on 𝕀𝔸_N with determinantal correlation functions.A few days after this paper was first posted on 𝖺𝗋𝖷𝗂𝗏 the interesting and completely independent work <cit.> appeared. Among other things, the authors in <cit.> also study the space-inhomogeneous edge particle systems and obtain an explicit formula for their transition probabilities, namely an analogue of our Theorem <ref>. Their formula is different from ours and is given in terms of Grothendieck polynomials, which are symmetric functions used to study the K-theory of the Grassmannian <cit.>. Their approach to obtain such a formula is also different and goes via the algebraic combinatorics of the Grothendieck polynomials. It is interesting that the only symmetric functions which appear explicitly in our paper, and for another task altogether, are the factorial Schur polynomials and not the Grothendieck polynomials. Nevertheless, we believe there should be deep connections between the two works and it would be interesting to understand them. We do not attempt here to compute explicitly the correlation kernel behind the determinantal correlation functions. This would involve solving a certain biorthogonalisation problem. In the level/particle inhomogeneous setting this is done systematically in the series of two papers <cit.>, see also <cit.>. Developing the analogous systematic theory for the space-inhomogeneous setting we have been considering would be a substantial task and we leave it for future work.Moreover, the explicit transition kernels could possibly be used to analyse asymptotic multi-time distributions of these systems, see <cit.> for example where this is done in the space-homogeneous[The paper <cit.> studies polynuclear growth in an inhomogeneous geometric environment. We note that, as mentioned earlier, in terms of the corresponding particle system the inhomogeneity transforms into inhomogeneity of the particles and time and not of space as it is here.] setting. Again, developing the analogous arguments in inhomogeneous space would require substantial work and we leave this for future investigations.§.§ Extremal measures for the inhomogeneous Gelfand-Tsetlin graph In this introductory section and its continuation in Section <ref> we will connect the previous constructions of interacting particle systems to a seemingly disparate object, an inhomogeneous generalisation of the famous Gelfand-Tsetlin graph <cit.>, for which we prove a non-trivial structural result in Theorem <ref>. We need some notation and definitions.We define the weighted graded graph that we call the (non-negative) inhomogeneous Gelfand-Tsetlin graph, with parameters 𝐚, and denote by 𝐆𝐓_+(𝐚), as follows. It has vertex set given by ⊔_N=1^∞𝕎_N. Two vertices 𝐱∈𝕎_N and 𝐲∈𝕎_N+1 are connected by an edge if and only if they interlace, 𝐱≺𝐲. Associated to each pair (𝐲,𝐱)∈𝕎_N+1×𝕎_N we have a weight we_N+1,N(𝐲,𝐱), which has non-zero value only if 𝐲 and 𝐱 are connected by an edge and is given bywe_N+1,N(𝐲,𝐱)=∏_i=1^N 1/a_x_i1_𝐱≺𝐲. In the homogeneous case a_x≡ 1, 𝐆𝐓_+(𝐚) is equivalent (after a shift of the co-ordinates on each level to go from 𝕎_N to so-called non-negative signatures <cit.>) to the non-negative Gelfand-Tsetlin graph, see <cit.>. We denote the homogeneous graph by 𝐆𝐓_+. This graph is of central importance in algebraic combinatorics and representation theory.
It describes (more precisely, the full graph, where the co-ordinates of the vertices can also take negative values, does) the branching of irreducible representations of the inductive chain of unitary groups 𝕌(1) ⊂𝕌(2) ⊂⋯⊂𝕌(N) ⊂⋯, see <cit.>. We introduced the inhomogeneous graph 𝐆𝐓_+(𝐚) in <cit.>. Define the dimension dim_N(𝐲) of a vertex 𝐲∈𝕎_N by dim_N(𝐲)=∑_𝐱^(1)≺𝐱^(2)≺⋯≺𝐱^(N-1)≺𝐱^(N)=𝐲∏_i=1^N-1we_i+1,i(𝐱^(i+1),𝐱^(i)).Define the Markov kernels Λ_N+1,N^𝐆𝐓_+(𝐚) from the vertex set 𝕎_N+1 to the vertex set 𝕎_N by, with 𝐱∈𝕎_N, 𝐲∈𝕎_N+1,Λ_N+1,N^𝐆𝐓_+(𝐚)(𝐲,𝐱)=dim_N(𝐱)we_N+1,N(𝐲,𝐱)/dim_N+1(𝐲).We say that a sequence of probability measures (μ_N)_N=1^∞, with μ_N a probability measure on 𝕎_N, is coherent (or consistent) if for all N≥ 1,μ_N+1Λ_N+1,N^𝐆𝐓_+(𝐚)(𝐱)=μ_N(𝐱), ∀𝐱∈𝕎_N.Sequences of coherent probability measures form a convex set. We say that a sequence is extremal if it is an extreme point in this set. For an infinite sequence of parameters:ω=((α_i)_i=1^∞,(β_i)_i=1^∞,t) ∈ℝ_+^2∞+1,such that α_1≥α_2 ≥⋯ , β_1 ≥β_2≥⋯, ∑_i=1^∞(α_i+β_i)<∞ and α_1≤(sup_x∈ℤ_+a_x)^-1 and β_1<(sup_k∈ℤ_+a_k-inf_k∈ℤ_+a_k)^-1,consider the function f_ω(z) given by f_ω(z)=e^-tz∏_i=1^∞(1-α_i z)/(1+β_i z).This is holomorphic in {z∈ℂ:ℜ(z)>-β_1^-1}. Moreover, for each N≥ 1, we define the following measure on 𝕎_N corresponding to f_ω:ℳ_N^ω(𝐱;𝐚)=dim_N(𝐱)/(a_0^N-1a_1^N-2⋯ a_N-2) det(-1/a_x_j1/2πi∮_𝖢_𝐚p_i-1(z)f_ω(z)/p_x_j+1(z)dz)_i,j=1^N, 𝐱∈𝕎_N.We have the following result. Consider the inhomogeneous Gelfand-Tsetlin graph 𝐆𝐓_+(𝐚) with inf_x∈ℤ_+ a_x≥ 1. Then, the sequence of measures (ℳ^ω_N(·;𝐚))_N=1^∞ forms an extremal coherent sequence of probability measures on 𝐆𝐓_+(𝐚). The result above answers a question from our previous paper <cit.>, where we showed that the special case of measures corresponding to ω=(0,0,t), equivalently f_ω(z)=e^-tz, is coherent and asked whether it was in fact extremal.As we shall observe in Section <ref>, the connection with the dynamics considered in this paper comes from the following equalityℳ_N^ω(𝐱;𝐚)=𝔓_f_ω^(N)((0,1,…,N-1),𝐱),𝐱∈𝕎_N,where we define 𝔓_f_ω^(N) by the right hand side of (<ref>) with f_s,t replaced by f_ω. In probabilistic terms, see Remark <ref>, this means we run a mixture of sequential-update Bernoulli, geometric Warren-Windridge and continuous-time pure-birth dynamics (possibly an infinite number of discrete-time steps as long as the parameters α_i,β_i are summable), starting from the fully-packed configuration, and then look at the distribution of the N-th level of the array, which is given by ℳ_N^ω(·;𝐚). It will follow from the results in the sequel that these measures are coherent and, more remarkably, actually extremal, which establishes Theorem <ref>.The problem of classifying in an explicit way all coherent sequences of measures is known as the problem of determining the boundary of the graph, see <cit.> for general background and motivation. In the case of the Gelfand-Tsetlin graph it is equivalent to the classification of extreme characters of the infinite-dimensional unitary group 𝕌(∞) and also the classification of totally non-negative Toeplitz matrices, see <cit.>. The fact that an explicit classification sometimes exists is remarkable and has been achieved only for a handful of models, see for example <cit.>. In the case of 𝐆𝐓_+, recall a_x≡ 1, all such measures are given by (<ref>), see <cit.>. It appears that the inhomogeneous graph 𝐆𝐓_+(𝐚) may be another example where an explicit classification is possible.
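As a concrete illustration of the weighted graph just introduced, the following brute-force Python sketch (ours; the finite sequence 𝐚 and the vertex 𝐲 are arbitrary choices, and the interlacing 𝐱≺𝐲 is taken to mean y_1≤ x_1≤ y_2≤⋯≤ x_N≤ y_{N+1}) computes the dimensions dim_N by the defining recursion and prints the corresponding row of the Markov kernel Λ_N+1,N^𝐆𝐓_+(𝐚), whose entries sum to 1 by construction.

```python
from itertools import combinations
from functools import lru_cache

a = [1.0, 0.7, 1.3, 0.9, 1.1, 0.8]                    # illustrative positive weights on sites 0..5

def interlace(x, y):
    # x in W_N, y in W_{N+1}: y_1 <= x_1 <= y_2 <= ... <= x_N <= y_{N+1}
    return all(y[i] <= x[i] <= y[i + 1] for i in range(len(x)))

def edge_weight(y, x):
    # we_{N+1,N}(y, x) = prod_i 1/a_{x_i} on an edge x -< y of GT_+(a)
    w = 1.0
    for xi in x:
        w /= a[xi]
    return w

@lru_cache(maxsize=None)
def dim(y):
    # dim_N(y): sum over interlacing chains x^(1) -< ... -< x^(N) = y of the product of edge weights
    if len(y) == 1:
        return 1.0
    return sum(edge_weight(y, x) * dim(x)
               for x in combinations(range(max(y) + 1), len(y) - 1) if interlace(x, y))

y = (0, 2, 5)                                          # a vertex of W_3
row = {x: dim(x) * edge_weight(y, x) / dim(y)
       for x in combinations(range(max(y) + 1), 2) if interlace(x, y)}
print(row, "sum =", sum(row.values()))                 # the row Lambda^{GT}_{3,2}(y, .) sums to 1
```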
Returning to the classification question, we believe that, subject to some generic conditions on 𝐚, and most likely after removing the condition on β_1, all extremal coherent sequences of measures for 𝐆𝐓_+(𝐚) should be of the form (<ref>). We state this as an open problem. Problem Classify extremal coherent sequences of measures on 𝐆𝐓_+(𝐚). §.§ Duality via a height functionIn this section we prove a new type of duality for the dynamics we have been considering[In fact, having even more general rates/transition probabilities.] via the application of a certain deterministic map. This map is closely related to the notion of a height function for an interlaced particle system <cit.> (in the case of the autonomous left-edge particle system viewed as a growth model this is exactly the standard height function, see Figure <ref>). However, as far as we can tell, the specific choice below has not been used before. From a probabilistic standpoint this mapping essentially swaps space and level/particle inhomogeneities.In order to present our results in their most symmetric and natural form, for this section and Section <ref> where the proofs are given, we will be labelling levels of arrays starting from level 0 instead of 1 and coordinates of 𝕎_N using indices 0, 1,…,N-1. In particular, the configuration of the N+1 particles at level N, which is in 𝕎_N+1, will have coordinates (x_0^(N),x_1^(N),…,x_N^(N)). Moreover, with this notation convention, for a configuration (x_k^(n))_0≤ k ≤ n;n≥ 0 in 𝕀𝔸_∞ we will call (x_k^(n))_n≥ k the k-th column of the configuration.Define the set of configurations 𝕀𝔸_∞^* by:𝕀𝔸_∞^*={(x_k^(n))_0≤ k ≤ n; n ≥ 0∈𝕀𝔸_∞: ∀ i ≥ 1, ∃ j_i< ∞ such that x_i^(n)=i, for n≥ j_i}.In other words, particle configurations in 𝕀𝔸_∞^* are the ones that can be obtained from the fully-packed configuration by moving, for each column, a finite number of particles. Define the following map 𝖧𝗀𝗍 (for height function) by,𝖧𝗀𝗍:𝕀𝔸_∞^*→𝕀𝔸_∞^*, (x_i^(j))_0≤ i ≤ j; j ≥ 0 ↦(𝗁_i(j)+i)_0≤ i ≤ j; j ≥ 0,where 𝗁_i(j), for a given configuration (x_i^(j))_0≤ i ≤ j; j ≥ 0∈𝕀𝔸_∞^*, is defined by 𝗁_i(j)=#{n≥ i:x_i^(n)>j}. It is easy to see that 𝖧𝗀𝗍 is well-defined on 𝕀𝔸_∞^* and that 𝖧𝗀𝗍 maps the fully-packed configuration to itself. In fact, as we will prove in Section <ref>, 𝖧𝗀𝗍 is an involution. The map 𝖧𝗀𝗍:𝕀𝔸_∞^*→𝕀𝔸_∞^* is an involution. In particular, since 𝖧𝗀𝗍 is a bijection, given any Markov process (𝖷(t);t≥ 0), in discrete or continuous time, in 𝕀𝔸_∞^* the stochastic process (𝖧𝗀𝗍(𝖷(t));t≥ 0), again taking values in 𝕀𝔸_∞^*, is also Markovian.Now, suppose that we are given a function θ:ℤ_+ ×ℤ_+→ (0,∞), that we call the environment, satisfying in the case of the continuous-time model in Definition <ref> belowinf_x,y∈ℤ_+θ(x,y)>0and sup_x,y∈ℤ_+θ(x,y)<∞and for the discrete-time models in Definition <ref>,inf_x,y∈ℤ_+θ(x,y)>0and sup_x,y∈ℤ_+θ(x,y)<1.Given such a function θ we define the following dynamics.We say that a process in 𝕀𝔸_∞, in continuous-time, satisfies the pure-birth push-block dynamics in environment θ if the jump rate of a particle at space location x, at level y, is given by θ(x,y), with interactions between particles being exactly as in Definition <ref>.
We say that a process in 𝕀𝔸_∞, in discrete-time, satisfies the sequential-update Bernoulli push-block dynamics in environment θ if the jump probability of a particle at space location x, at level y, to position x+1 is given by θ(x,y) and the probability to stay put at x is 1-θ(x,y), with interactions between particles being exactly as in Definition <ref>.We say that a process in 𝕀𝔸_∞, in discrete-time, satisfies the Warren-Windridge geometric push-block dynamics in environment θ if the jump probability of a particle at space location x, at level y, towards position x+n, for n ∈ℤ_+, is given by θ(x,y)θ(x+1,y)⋯θ(x+n-1,y)(1-θ(x+n,y)), with interactions between particles being exactly as in Definition <ref>.Observe that, in continuous-time with θ(x,y)=a_x we get back the dynamics from Definition <ref>. In discrete-time, for Bernoulli jumps with θ(x,y)=α a_x we get back the dynamics from Definition <ref>, while for geometric jumps with θ(x,y)=β a_x(1+β a_x)^-1 we get back the dynamics from Definition <ref>. The following says that the dynamics above, if initialised in 𝕀𝔸_∞^*, stay in 𝕀𝔸_∞^*. Let 𝐱∈𝕀𝔸_∞^*. Assume the stochastic process (𝖷(t);t≥ 0), with initial condition 𝖷(0)=𝐱, evolves according to either the continuous time pure-birth, or discrete-time sequential-update Bernoulli or Warren-Windridge geometric push-block dynamics in environment θ satisfying the corresponding conditions (<ref>), (<ref>) above. Then, almost surely, for all t≥ 0, 𝖷(t)∈𝕀𝔸_∞^*.The following is the main result of this section.Let 𝐱∈𝕀𝔸_∞^* and environment θ satisfying the corresponding conditions (<ref>), (<ref>) above. Suppose that the process (𝖷(t);t ≥ 0) evolves according to one of the following push-block dynamics: * continuous-time pure-birth,* sequential-update Bernoulli,* Warren-Windridge geometric, in environment θ with initial condition 𝖷(0)=𝐱. Then, the process (𝖧𝗀𝗍(𝖷(t));t≥ 0)follows respectively the push-block dynamics: * continuous-time pure-birth,* Warren-Windridge geometric,* sequential-update Bernoulli,in environment θ̂, where θ̂(x,y)=θ(y,x), and initial condition 𝖧𝗀𝗍(𝐱). In words, under 𝖧𝗀𝗍 the environment θ always gets transformed to the environment θ̂, continuous-time pure-birth dynamics stay of the same form and sequential-update Bernoulli dynamics get mapped to Warren-Windridge geometric dynamics and vice-versa. Although the statement of Theorem <ref> is very simple, as far as we can tell, it is new even in the homogeneous case of constant θ (at least we have not been able to locate any explicit statement in the literature). In the special case θ(x,y)=a_y, Theorem <ref> shows how to map level/particle inhomogeneities to space inhomogeneities. For this special choice of θ, in the case of continuous time pure-birth dynamics with fully-packed initial configuration, and when projecting to the right-edge particles of the array, the correspondence above can be shown to be equivalent to a certain mapping used by Petrov in <cit.> to obtain an expicit formula for the distribution of the height function of inhomogeneous space push-TASEP for the fully-packed initial condition. We note that level/particle-inhomogeneous dynamics and space-inhomogeneous dynamics are not equivalent for finite systems. For example, even the motion of a single particle in the space-inhomogeneous setting can involve infinitely many parameters 𝐚=(a_x)_x∈ℤ_+. Thus, we would need infinitely many particles with particle-dependent and space-independent jump probabilities/rates to hope for any kind of correspondence. 
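To make the map 𝖧𝗀𝗍 itself concrete, here is a minimal Python sketch (ours) that applies it to a configuration in 𝕀𝔸_∞^* represented by its first few levels, under the assumption that every column has already stabilised (x_i^(n)=i) from the last supplied level onwards, and checks the involution property of the proposition above on this small example.

```python
def pad(rows, depth):
    # extend a configuration by fully-packed levels [0, 1, ..., n] up to the given depth
    return rows + [list(range(n + 1)) for n in range(len(rows), depth)]

def hgt(rows):
    # Hgt(x)_i^(j) = h_i(j) + i with h_i(j) = #{n >= i : x_i^(n) > j}; the count is finite because
    # every column of the input is assumed equal to i from level len(rows) onwards
    depth = max(len(rows), max(max(r) for r in rows) + 1)   # enough levels for the image to stabilise
    ext = pad(rows, depth)
    return [[sum(1 for n in range(i, depth) if ext[n][i] > j) + i for i in range(j + 1)]
            for j in range(depth)]

x = [[1], [1, 3], [1, 2, 3]]           # levels 0, 1, 2; fully packed from level 3 onwards
y = hgt(x)                              # gives [[3], [0, 3], [0, 2, 3], [0, 1, 2, 3]]
z = hgt(y)
assert z == pad(x, len(z))              # Hgt is an involution on this configuration
print(y)
```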
We could have taken the environment θ to depend on time t and the statement and proof of Theorem <ref> would remain the same. We could also consider Bernoulli dynamics with parallel update, see <cit.>, also Section <ref>. Under the map 𝖧𝗀𝗍 the Bernoulli parallel-update dynamics in environment θ get mapped to Bernoulli parallel-update dynamics in environment θ̂. The situation in the parallel-update setting is a little more subtle and we will not pursue it further in this paper. §.§ The domino tiling shuffling algorithm dynamics connection In this part, and its expansion in Section <ref>, we explain how a famous statistical mechanics model, domino tilings of the Aztec diamond with general (non-interacting) domino weights, is intimately connected to certain Bernoulli push-block dynamics with general parameters. We first give an example of the kind of result we can prove.The object of study is the Aztec diamond, introduced in <cit.>, which is a certain region in the square lattice with sawtooth boundary as in Figures <ref> and <ref>, see Section <ref> for precise definitions. We can colour the squares of the Aztec diamond in black/white checkerboard fashion as in Figures <ref> and <ref>. The Aztec diamond can be covered by 1× 2 and 2× 1 dominos, see Figure <ref> for an illustration. There are four types[The domino tiling of the Aztec diamond is equivalent to a certain dimer model on the corresponding Aztec diamond graph. Then, the terminology north, south, east, west becomes much more intuitive, see Section <ref>.] of dominos called north, south, east and west as shown in Figure <ref>. We put a certain coordinate system on the Aztec diamond as shown in Figures <ref>, <ref> (the specific choice is so that it is consistent with the other results in this paper), see Section <ref> for more details. We can associate to each domino in a tiling a certain weight (the most general weights will be discussed in Section <ref>). Towards this end, suppose we are given two sequences 𝐳^(1)=(z_x^(1))_x∈ℤ_+,𝐳^(2)=(z_x^(2))_x∈ℤ_+∈ (0,∞)^ℤ_+ and assume that z_x^(1)/(z_x^(1)+z_x^(2))=a_x,for all x ∈ℤ_+.Given the two sequences 𝐳^(1) and 𝐳^(2) as above, we define the weight of a domino tiling of the Aztec diamond, of any fixed size k≥ 1, as follows. East and north dominos get weight 1. West dominos at horizontal location x get weight z_x^(1), while south dominos at horizontal location x get weight z_x^(2). The weight of the whole tiling is simply the product of the weights of all individual dominos contained in the tiling. For any k≥ 1, this weighting gives rise, in the obvious way of normalising by the partition function, to a probability measure, that we denote by ℙ^(k),𝐳^(1),𝐳^(2), on domino tilings of the Aztec diamond of size k. Now, given a domino tiling of the Aztec diamond (of any size k≥ 1) we can associate to it a particle configuration as follows. We put a particle whenever we see a south or east domino as in Figure <ref>. The particles inherit the coordinates of the dominos. It is a combinatorial fact that there are exactly n particles on level n. We note that the particle configuration is not one-to-one with the domino tiling since some information is lost. However, an extension of this map can be made into a bijection by introducing an extra set of particles, see for example <cit.>, but we will not do it here.We have the following theorem.Let 𝐚 be such that inf_k∈ℤ_+a_k>0 and sup_k∈ℤ_+a_k<1.
Consider the probability measures ℙ^(k),𝐳^(1),𝐳^(2) on domino tilings of the Aztec diamond of size k defined above satisfying (<ref>). Then, there exists a coupling ℙ of the ℙ^(k),𝐳^(1),𝐳^(2), for all k≥ 1, such that the following happens. If we denote by 𝗑_i^(j)(m), for m≥ j, the location of the i-th south or east domino (equivalently particle) on level j of the random tiling of the size m Aztec diamond distributed according to ℙ^(m),𝐳^(1),𝐳^(2) in this coupling, then for all N≥ 1, each discrete-time stochastic process (𝗑_1^(N)(t+N),𝗑_2^(N)(t+N),…,𝗑_N^(N)(t+N);t ≥ 0)evolves as a Markov process in 𝕎_N, starting from (0,1,…,N-1), with transition probabilities from time t_1 to time t_2 given by 𝔓^(N)_(1-z)^t_2-t_1. In particular, for any N≥ 1 and pairwise distinct time-space points (t_1,x_1),…,(t_n,x_n) in ℤ_+×ℤ_+,ℙ(∃ j_1,…,j_n such that 𝗑_j_i^(N)(t_i+N)=x_i for 1 ≤ i ≤ n)= det(𝒦_N[(t_i,x_i);(t_j,x_j)])_i,j=1^nwhere f_s,t(z)=(1-z)^t-s in the definition of 𝒦_N from (<ref>). The probabilistic statement in the theorem above, namely that (<ref>) is a Markov process with explicit transition probabilities, is new. The only known case before was the homogeneous one, z^(1)_x≡ z_x^(2)≡ 1, for all x∈ℤ_+, which follows from the work of Nordenstam <cit.>. On the other hand, the fact that the model is determinantal (for general domino weights even) is well-known and follows from the classical work of Kasteleyn, see <cit.>. However, an explicit computation of the correlation kernel in a form that is amenable to further analysis is highly non-trivial, see for example <cit.>, and for more general recent results (using different methods from ours for the computations, in fact related to the techniques of the next subsection) see <cit.>.The desired coupling in the theorem is obtained via the so-called domino shuffling algorithm, which was introduced in <cit.> for the uniform weight and generalised in <cit.>. This algorithm allows one to sample a random tiling of the fixed size Aztec diamond, corresponding to a weighting 𝒲 of the dominos, via a sequence of local moves starting from an Aztec diamond of size 1. How this algorithm works precisely is explained in detail in Section <ref>. Then, by extending the work of Nordenstam <cit.>, we show that for any weighting 𝒲 of the dominos the induced dynamics of the shuffling algorithm on the corresponding particle configuration is given by certain Bernoulli push-block dynamics on interlacing arrays with a time-shift (this time shift can already be anticipated from the form of Theorem <ref>). Remarkably, the dependence on the weighting 𝒲 only enters through the parameters of the 0-1 Bernoulli random variables governing the jumps, while the actual interactions between particles are always the same (independent of 𝒲). See Section <ref> for more details. In order to obtain the coupling for all Aztec diamonds of different sizes the key, and as far as we can tell new, notion is that of a sequence of consistent weightings of Aztec diamonds of different sizes, see Section <ref>. This notion can be considered analogous to coherent sequences of probability measures on 𝐆𝐓_+(𝐚) from Section <ref>. This analogy is not perfect though.
In the 𝐆𝐓_+(𝐚) graph case consistency is with respect to an application of the Markov kernels Λ_N+1,N^𝐆𝐓_+(𝐚) while in the Aztec diamond case consistency is in terms of a certain deterministic dynamical system induced by maps (𝒰ℛ_k^n)_k≤ n called urban renewal, see Section <ref> for more details.As mentioned above, the connection to Bernoulli push-block dynamics on arrays works for any weighting 𝒲, including the 2-periodic and k-periodic weightings that have been much-studied in the past decade, see <cit.>. The only thing that changes are the parameters of the 0-1 Bernoulli random variables. Their dependence on time and space is different from α_t a_x (so they do not fall into the class of models that we studied in Section <ref>) but a straightforward computation shows that these parameters are still relatively simple. It would be interesting to obtain probabilistic results (recall the determinantal property is always there) like the one in Theorem <ref> for 2-periodic weightings. It may be possible that this can be done by extending the results of Sections <ref>, <ref> and <ref> to inhomogeneous Toeplitz-like matrices with matrix symbol 𝐟. A very small number of such results can be extended to the matrix symbol setting and this is used in Section <ref> but it is not obvious how to do this for all of them (the naive extensions do not work). Another possibility is to extend the Schur dynamics formalism of Borodin <cit.> which is based on symmetric function theory. We think that the right variant of Schur functions for this extension may be the loop Schur functions <cit.>, in part because of their connection to totally-positive block Toeplitz matrices <cit.> which come up in the study of the 2-periodic Aztec diamond, see <cit.>, also Section <ref>. We will investigate this in the future. Finally, it is worth mentioning that dynamics coming from the shuffling algorithm on dimer coverings of the full-plane ℤ^2 with general weights have been studied in detail in <cit.> but there is essentially no overlap in terms of results between those papers and ours. §.§ Line ensembles with fixed starting and final positions in discrete inhomogeneous space In previous sections, see in particular Theorem <ref>, we studied N non-intersecting,for all times, random walks with inhomogeneous Bernoulli or geometric steps starting at time t=0 from fixed consecutive locations (0,1,…,N-1). Now, we would like to study the model where the walks are conditioned to end at some fixed consecutive locations (M,M+1,…,M+N-1), for some M∈ℤ_+, at some fixed time t=L (hence this model is only defined for times 0≤ t ≤ L). We can also think of this model as non-intersecting random walk bridges. Compared to the setting of walks non-intersecting for all times this is in general harder to study. Beyond its intrinsic probabilistic interest there is some significant motivation coming from statistical mechanics to study the above model. Namely, such Bernoulli walk paths and a mixture of Bernoulli and geometric walk paths are in bijection with lozenge tilings of the hexagon and domino tilings of the Aztec diamond respectively, see for example <cit.>. In particular, an arbitrary probability measure on such tilings gets mapped to a corresponding probability measure on such non-intersecting paths which is better suited for further analysis.Recently, Duits and Kuijlaars <cit.> and in subsequent work Berggren and Duits <cit.> have developed a theory that allows one to study such measures in great generality. 
Their main object of study is a probability measure given as a product of determinants involving (block) Toeplitz matrices with a matrix symbol 𝐟. This is basically the probability measure on non-intersecting paths with fixed starting and final positions alluded to above. Via a connection to matrix-valued orthogonal polynomials and matrix-valued Riemann-Hilbert problems they are able to analyse this measure, both for finite N and asymptotically.The purpose of this part of the paper is to extend some of their results to the setting where we replace the Toeplitz matrices with matrix symbol[To be precise 𝐟(1-z).] 𝐟 by an inhomogeneous Toeplitz-like matrix with matrix symbol 𝐟, see Definition <ref>. In particular, the type of measure we study is defined in (<ref>). We note that this is indeed a more general setting compared to measures given by Toeplitz matrices with matrix symbols. It is an interesting question to understand for which matrix-valued 𝐟 the measure (<ref>) has probabilistic meaning. Of course, for 𝐚=1^ℤ_+ we are back to the block Toeplitz matrix setting and such functions have been classified, see <cit.> and the references therein. More generally, if 𝐚 is very close to 1^ℤ_+ (in a suitable sense), by continuity in the parameters, all the functions 𝐟 that give rise to positive measures in <cit.> also do so in our inhomogeneous setting. For general 𝐚 the answer is not clear (of course, for general 𝐚 and scalar 𝐟, products of e^-tz, (1-α z), (1+β z)^-1 do work). Our main results are Theorems <ref> and <ref> in Section <ref>. As these are far too technical to present in this introductory section, let us instead state a typical asymptotic result that can be proven within this framework (which in fact only requires scalar symbols 𝐟). As with Section <ref>, it will be notationally more convenient, and in order to be consistent with the works <cit.>, to again label particles in 𝕎_N starting with index 0 instead of 1. Namely, the coordinates of an element in 𝕎_N will be denoted by (x_0^(N),x_1^(N),…,x_N-1^(N)). Similarly for random variables.Let L_1,L_2 ∈ℤ_+ with L_1+L_2=L. Let f_r,r+1(z), for r=0,1,…,L-1, be such that L_1 of them are of the form 1-α_iz for some parameters α_1,…,α_L_1 and L_2 of them are of the form (1+β_i z)^-1 with parameters β_1,…,β_L_2. Consider N independent identically distributed discrete-time random walks, with either inhomogeneous Bernoulli or geometric steps, with fixed inhomogeneity sequence 𝐚, with the step at time s following the transition probability 𝖳_f_s,s+1, starting from locations (0,1,…,N-1) at time 0 and ending at locations (M,M+1,…,M+N-1) at time L and conditioned to not intersect in the intervening times t=1,2,…,L-1. Denote this stochastic process, which by construction stays in 𝕎_N, by(𝖷_0^N,L,M(t), 𝖷_1^N,L,M(t), …,𝖷_N-1^N,L,M(t);1≤ t ≤ L-1).See Figure <ref> for an illustration of the above setup. Then, we have the following limit theorem for the bottom paths in this path ensemble.In the setting of Definition <ref>, assume that the parameter sequence 𝐚 satisfiesinf_x∈ℤ_+a_x≥1-𝔠 and sup_x∈ℤ_+a_x≤1+𝔠,for some 0≤𝔠 <1/3. Suppose there exist exactly M indices i_1,i_2,…,i_M such that the corresponding Bernoulli parameters satisfy(2-2𝔠)^-1<α_i_j<(1+𝔠)^-1, j=1,…,M,(in particular L_1 ≥ M) and for l≠ i_j we have α_l<(1-2𝔠)/(2-2𝔠). Finally, assume the geometric parameters satisfy β_l<1/(2𝔠)-1.
Then, for any m≥ 1, we have the following convergence in distribution for the bottom m paths of the path ensemble (<ref>), as N →∞,(𝖷_0^N,L,M(t), …,𝖷_m-1^N,L,M(t);1≤ t ≤ L-1)d⟶(𝖷_0^∞,L,M(t),…,𝖷_m-1^∞,L,M(t); 1≤ t ≤ L-1),where the limiting process ((𝖷_i^∞,L,M(t))_i=0^∞; t=1,…,L-1) is determined through its determinantal correlation functions: for any n≥ 1 and pairwise distinct time-space points (t_1,x_1), …, (t_n,x_n) in {1,…,L-1}×ℤ_+, we have ℙ(∃ j_1,…,j_n such that 𝖷_j_i^∞,L,M(t_i)=x_i for i=1,…,n)=det(𝖪^L,M_∞[(t_i,x_i);(t_j,x_j)])_i,j=1^n,where the kernel 𝖪^L,M_∞ is given by:𝖪^L,M_∞[(r,m);(r',k)]=-1_r>r'1/2πi1/a_m∮_|z|=1p_k(1-z)/p_m+1(1-z)∏_l=r^r'-1f_l,l+1(1-z)dz-1/(2πi)^21/a_k∮_|z|=1^-∮_|w|=1^+∏_j=1^M 1/(1-α_i_j+α_i_jw)∏_l=1;l≠ i_j^L_11/(1-α_l+α_l z)∏_l=1^L_2(1+β_l-β_l z) ×∏_l=r'^L-1f_l,l+1(1-w)∏_l=0^r-1f_l,l+1(1-z)p_k(1-w)/p_m+1(1-z)dz dw/(z-w). §.§ Relation to previous works, ideas and techniques We now go section by section and discuss briefly some of the ideas and techniques used therein and what seems to be the most directly relevant literature. Given the range of topics studied in this paper this appears to be the most organised way of doing this. For the same reason a complete literature review is unfortunately unfeasible.We begin with Section <ref>. The transition semigroup of a general pure-birth chain, as mentioned earlier, can be written as (𝖳_e^-tz)_t≥ 0 which has a nice contour integral expression. This form of the transition semigroup was the impetus behind our previous paper on the topic which only dealt with the continuous-time push-block dynamics <cit.>. We then realised that the fundamental object is not actually the transition probability of a pure-birth chain but rather the 𝖳_f matrix/operator for a general function f. In Section <ref> we establish some basic properties of 𝖳_f. Most of them are intuitive except maybe the most non-trivial property, which is the duality relation from Lemma <ref>. This will be especially important in the multidimensional developments that appear in later sections. The results in Section <ref> are basically all that is needed[We also include a couple more results and comments about 𝖳_f which are of interest in themselves but not used subsequently.] to make subsequent computations work. Although additional ideas are required in each section, in terms of computations, the majority of them boil down to properties established here. As already mentioned, 𝖳_f, for general 𝐚, is in fact similar to a standard Toeplitz matrix. However, even in the one-dimensional setting, with some exceptions, we cannot simply transfer over results for standard Toeplitz matrices (and in the multidimensional setting it is unclear whether this similarity is of any use at all). Finally, although the matrices/operators 𝖳_f are natural, we have not been able to locate them in the Toeplitz matrix/operator literature (however the literature is truly vast so we may have missed something) and so, surprisingly, they seem to be new. More importantly though, and this is the main message of this work, their probabilistic significance beyond the one-dimensional setting is, as far as we can tell, novel.In Section <ref> we introduce the transition kernels of the multidimensional versions of the one-dimensional dynamics we studied previously. These come from the Karlin-McGregor <cit.> and Lindstrom-Gessel-Viennot (LGV) <cit.> formulae and give rise to non-intersecting paths. We then prove that the transition kernels of N and N+1 particles are intertwined.
This result generalises the setup of <cit.> which deals with transition probabilities coming from Toeplitz matrices. The key ingredient in the computation is the one-dimensional duality relation from Lemma <ref>. We also introduce in Section <ref> a more general setup for intertwinings of kernels that are given in terms of determinants and explain how our previous result fits into this framework. For some other intertwining relations that involve determinants, appearing in a different context, see for example <cit.>.In Section <ref> we introduce couplings between the intertwined transition kernels from Section <ref> which are in some sense well-adapted to the intertwining. These couplings have their origin in coalescing random walks. The fact that there is a close[Although this connection is not really highlighted there.] connection between coalescing one-dimensional stochastic processes and dynamics in interlacing arrays originates with the work of Warren <cit.> on Brownian motion. This was later developed in <cit.> for more general one-dimensional diffusions and in <cit.> for birth and death and pure-birth chains. This section can be viewed as the correct discretisation in time (and space) of the results of <cit.>. We note that discrete time hides some subtleties, for example when it comes to how particles are updated, and the explicit computations are trickier[On the other hand, technical issues such as well-posedness of the dynamics, existence and uniqueness of solutions to the corresponding Kolomogorov equations, are not present.]. We briefly explain what we do. Given a function f, so that 𝖳_f has probabilistic meaning, we build an explicit kernel 𝖰_f^N,N+1 on two-level interlacing configurations which comes from coalescing random walks (with motion governed by 𝖳_f). Using the coalescing walk connection we can prove certain properties of 𝖰_f^N,N+1 including some intertwining relations from which the intertwining of Section <ref> also follows (thus giving a different proof). However, as far as we can tell, the exact dynamics described by 𝖰_f^N,N+1 cannot be seen from the coalescing random walk representation. In the case of Bernoulli-only dynamics we prove directly by means of some recursive equations (the discrete-time Kolmogorov equation) that 𝖰_f^N,N+1, with f=1-α z, describes a sequential-update Bernoulli dynamics step. In the case of geometric walks, which is the most subtle, we need to take a different approach altogether by developing the original idea of Warren and Windridge <cit.> to inhomogeneous space jumps. We believe[We have verified this explicitly by tediously checking all possibilities for N=1.], but do not prove here, that 𝖰_f^N,N+1, with f=(1+β z)^-1, does describe a Warren-Windridge geometric dynamics step. Finally, we discuss connections with other related couplings of intertwined semigroups from the literature, see Section <ref>.In Section <ref>, we put these two-level couplings together in a consistent inductive fashion to consider multilevel dynamics in interlacing arrays in Propositions <ref> and <ref>. This proves Theorem <ref>, the probabilistic statement of Theorem <ref> and allows for the computation of the explicit correlation functions in Theorems <ref> and <ref> in the next section. The induction (given the two-level couplings), making use of the Markov functions theory of Rogers-Pitman <cit.>, by virtue of the intertwinings obtained previously, is standard and variants thereof can be found in multiple places in the literature <cit.> . 
The space-level inhomogeneous setting of Theorem <ref> is a little more involved to handle in a systematic way but still the main work was done in the preceding sections.The computation of the correlation functions from Theorem <ref> and <ref> is done in Section <ref>. The fact that the point processes in question have determinantal correlation functions is a consequence of the results of Section <ref> and the celebrated Eynard-Mehta theorem <cit.>. The explicit computation of the correlation kernel then boils down to solving a certain biorthogonalisation problem. To do this we use in an essential way the contour integral formulae, for all the quantities involved, in terms of the polynomials p_x(z). Making use of the results from Section <ref> all the computations become rather neat. An analogous but simpler computation, in fact a special case, was performed in <cit.>. That paper deal with continuous-time dynamics only and its main result is the case f(w)=e^-tw of Theorem <ref>. Finally, the literature on different biorthogonalisation problems arising from the Eynard-Mehta theorem is vast, we list a very small sample <cit.> which seems most relevant. In Section <ref>, we prove the probabilistic representation of (𝒫_t^γ,∙, N)_t≥ 0 as independent walks conditioned to never intersect using a soft argument. In the special case of the space being homogeneous this boils down to essentially equivalent arguments (although presented a bit differently), which first appeared in <cit.>. The proof relies in an essential way on the fact that walks with different drifts (realised through a Doob transform) become asymptotically ordered with probability one depending on the relative strength of their drifts, see (<ref>). When the walks are identical this is of course no longer true and the argument breaks down. One would hope though that the model of identical walks can be recovered as the limit of removing the drifts, under possibly additional conditions on 𝐚. There are some subtle points that need to be taken care of to make this rigorous and we leave it for future work. When the increments of the walks are not location-dependent there is vast literature on non-intersection probabilities, see <cit.> for a small sample, but the idea used therein of coupling with Brownian motion does not appear useful here.In Section <ref>, we prove a precise version of Theorem <ref>. First, to get the explicit form of the transition kernels 𝔈_f_s,t,r^(N) and 𝔈_f_s,t,l^(N) presented in Theorem <ref>, we develop a space-inhomogeneous extension of the original idea of Dieker and Warren <cit.>, which itself deals with the level-inhomogeneous setting, see also <cit.>. This approach makes use of intertwinings and inverting certain Markov kernels (when viewed as stochastic matrices) that are given as determinants. A non-intersecting path model and the LGV formalism <cit.> to obtain equivalent combinatorial expressions for these kernels is essential for our argument. The resulting explicit formula is of what is usually referred to in the literature as Schutz-type, in reference to Schutz's original work on the transition probabilities of TASEP <cit.>. Schutz's original derivation <cit.> of the formula for TASEP used the Bethe ansatz instead. Once one has an explicit formula of this type, then to obtain determinantal correlation functions one goes through a procedure usually referred to as Sasamoto's trick <cit.> of rewriting this formula as a sum over interlacing arrays. 
By virtue of the intertwining origins of our formula from Theorem <ref> such determinantal correlations are essentially an immediate consequence as shown in Theorem <ref>. This is both conceptually more satisfying and computationally cleaner compared to the rewriting involved in the Sasamoto's trick approach <cit.> but on the other hand requires a lot of machinery to have been developed already.In Section <ref>, we establish Theorem <ref>. We first prove consistency for (ℳ^ω_N(·;𝐚))_N=1^∞ and that they are indeed probability measures by observing the connection (<ref>) with the dynamics we have been studying. Then, to prove extremality we make use of a well-adapted generating function for the measures (ℳ^ω_N(·;𝐚))_N=1^∞ and De-Finetti's theorem <cit.>. As far as we can tell, in this context, this type of argument originates with <cit.> and variants of it have been employed in the literature a few times. However, a number of rather special ingredients need to be present in order for it to work, as can be seen from the rather long proof, and the fact that it does in our case as well is not obvious a-priori. This section is also the only place where the factorial Schur functions <cit.> make their appearance explicitly. In fact, a number of, although far from all, the results in this paper can be phrased and proven in terms of factorial Schur functions which would be more in the spirit of developing the factorial Schur analogue of the Schur process <cit.> and Schur dynamics <cit.> framework. Instead we wanted to emphasize the inhomogeneous Toeplitz-matrix perspective which is somewhat more probabilistic in nature. For example, as far as we can tell, the couplings coming from coalescing walks in Section <ref> (for example 𝖰_f^N,N+1) do not have a natural interpretation in terms of symmetric functions. In Section <ref>, we prove Theorem <ref>. As far as we can tell, no results of this kind have appeared in the literature before; even the fully homogeneous case appears to be new. Our initial motivation stemmed from the following. In an interesting paper <cit.>, Petrov showed how one could obtain determinantal formulae for the inhomogeneous-space push-TASEP in continuous-time with fully-packed initial condition via a connection to the original level-inhomogeneous (but space-homogeneous) model of Borodin-Ferrari <cit.>. We then wanted to understand whether a duality of sorts between space and level inhomogeneities extended to dynamics on full arrays (and not just the right edge), for more general initial conditions and also for other types of dynamics in discrete time. This led to Theorem <ref>. The proof itself is elementary and, as long as one takes the right perspective, not difficult. Our results in Section <ref> generalise the work of Nordenstam <cit.>, where for the first time the dynamics of the shuffling algorithm on Aztec diamonds, see <cit.>, in the case of uniform weights, were found to be connected to interlacing particle systems with push-block dynamics. In order to do this we reinterpret and combine the aforementioned work of Nordenstam <cit.> and Propp <cit.> where the shuffling algorithm was introduced for general weights 𝒲 on Aztec diamonds. Then, to prove Theorem <ref> we first show that the weights from Definition <ref> are consistent[The reader will have surely noticed by now that consistency of different types is an overarching theme of this paper.] in the sense of Definition <ref> and then make use of our previous results. 
An extension of the work of Nordenstam in the language of line ensembles (equivalent to tilings of the Aztec diamond, see <cit.>, also Sections <ref> and <ref>) to some other weights beyond the uniform one can be found in <cit.>. However, general weights 𝒲 on Aztec diamonds, see Section <ref>, or even the more special weight from Theorem <ref>, as far as we can tell, are not considered in <cit.>. In particular, our results cannot be obtained from <cit.>. It would be interesting to consider dynamics on line ensembles as in <cit.> induced by a general weight 𝒲 but we do not do it here. Finally, as already mentioned, tilings of Aztec diamonds and also dynamics coming from shuffling-type algorithms have been much studied in the literature <cit.> but the techniques and results of all these papers are different from what we do here. Our work in Section <ref> on line ensembles with fixed starting and end points follows closely the methods of <cit.>. We show how some of their results can be generalised to the inhomogeneous Toeplitz-like matrix setting with a matrix symbol in Theorems <ref> and <ref>, from which Theorem <ref> easily follows. The exact fixed N results from <cit.> allow for a rather clean adaptation, while the asymptotics relevant for the limit of the bottom paths are more involved due to the inhomogeneity 𝐚. The main tools in this study are matrix-valued orthogonal polynomials and the analysis of the associated Riemann-Hilbert problems. This part of the paper is only a small step in developing the relevant theory (a proof of concept that something can be done) and many questions remain wide open: for example the probabilistic significance of 𝖳_𝐟 for general matrix-valued 𝐟, whether various multidimensional constructions for scalar f studied earlier in the paper have analogues for matrix 𝐟, and also asymptotics of the line ensembles in different scaling regimes which are considered in <cit.> (corresponding to the gas/smooth phase of the two-periodic Aztec diamond for example). There are also very interesting subsequent works <cit.> which use some of the machinery of <cit.>. There might be suitable adaptations of those results to our setup but this is purely speculative at present.In Section <ref>, the short-time convergence of the continuous-time dynamics, having general inhomogeneity 𝐚, to the discrete Bessel determinantal point process is proven using some standard asymptotic analysis of the correlation kernel. The only noteworthy point is that, in this particular scaling, the rescaled polynomials p_x(z) converge to an exponential function depending on 𝐚 only through the averagea̅, see (<ref>), which is really what makes the proof work.Organisation of the paperIn Section <ref> we develop some theory for inhomogeneous Toeplitz-like matrices and the corresponding one-dimensional Markov dynamics. In Section <ref> we discuss dynamics for non-intersecting paths (single levels of the arrays) and prove a key intertwining relation. In Section <ref> we consider various couplings for two sets of non-intersecting paths (two levels of the array). In Section <ref> we put everything together to consider consistent dynamics on arrays and also prove Theorem <ref> on parameter symmetry. In Section <ref> we compute the correlation kernels from Section <ref>. In Section <ref> we prove Theorem <ref> on conditioned walks. In Section <ref> we prove a precise version of Theorem <ref>, for the edge particle systems, see Theorems <ref> and <ref>. 
In Section <ref> we prove Theorem <ref> on extremal coherent measures. In Section <ref> we prove the duality results from Section <ref>. In Section <ref> we explain the connection between the shuffling algorithm for sampling arbitrary weightings of the Aztec diamond and certain Bernoulli push-block dynamics and prove Theorem <ref>. In Section <ref>, we develop a framework generalising some of the results of <cit.> and <cit.> in order to prove Theorem <ref>. Finally, in Section <ref> we prove Theorem <ref> on convergence to the discrete Bessel point process.§ ONE-DIMENSIONAL DYNAMICS AND INHOMOGENEOUS TOEPLITZ-LIKE MATRICES§.§ Inhomogeneous Toeplitz-like matrices Given R>0 we use the following notation for the half plane ℍ_-R={z∈ℂ: ℜ(z)>-R}. We write 𝖧𝗈𝗅(ℍ_-R) for the set of functions which are holomorphic in ℍ_-R. Recall that a function f is entire if it is holomorphic in the whole of ℂ.For f∈𝖧𝗈𝗅(ℍ_-R) we define the following quantities, where recall that the counterclockwise contour 𝖢_𝐚⊂ℍ_-R encircles all the points of the sequence 𝐚,𝖳_f(x,y) =-1/2πi1/a_y∮_𝖢_𝐚p_x(w)/p_y+1(w)f(w)dw, x,y ∈ℤ_+, 𝖳_f(x) =𝖳_f(0,x)=-1/2πi1/a_x∮_𝖢_𝐚f(w)/p_x+1(w)dw, x∈ℤ_+.Clearly, 𝖳_f depends on 𝐚 but we suppress it from the notation since 𝐚 is fixed. Observe that, in the homogeneous case a_x≡ 1, [𝖳_f(x,y)]_x,y∈ℤ_+ is nothing but the Toeplitz matrix with symbol f(1-z).For this reason we shall call 𝖳_f the inhomogeneous Toeplitz-like matrix/operator with symbol[We prefer this terminology to the somewhat more precise “... with symbol f(1-z)”.] f. We note that the only poles for the integrand in the definition of 𝖳_f come from the inhomogeneity 𝐚. In particular, we note that 𝖳_f(x,y)≡ 0, for x>y, namely the matrix [𝖳_f(x,y)]_x,y∈ℤ_+ is upper-triangular. From a probabilistic standpoint, for certain special choices of f, 𝖳_f will be the transition probability, from x to y, of a Markov chain on ℤ_+ which moves only to the right.More general functions f, which are not in 𝖧𝗈𝗅(ℍ_-R), can be used to define 𝖳_f and most of the results below have corresponding analogues. We restrict to f∈𝖧𝗈𝗅(ℍ_-R) for simplicity since such functions suffice for the probabilistic applications we have in mind. For functions g:ℤ_+ →ℂ (equivalently sequences in ℂ^ℤ_+), for which the sum converges, we write 𝖳_f g(x)=∑_y=0^∞𝖳_f(x,y)g(y). It is easy to see that the operator 𝖳_f is well-defined on bounded sequences but it is actually defined on a somewhat larger space, see Proposition <ref>. In fact, as we show in Proposition <ref>, the matrix [𝖳_f(x,y)]_x,y∈ℤ_+, with general sequence 𝐚, is similar to the standard Toeplitz matrix (namely with a_x≡ 1) with symbol f(1-z) using an explicit change of basis matrix. However, even in the one-dimensional setting of this section it is not possible, using this similarity, to simply translate results from the standard Toeplitz matrix setting to the inhomogeneous one, see Remark <ref>. In the multidimensional setting of Sections <ref>, <ref> and <ref> it is unclear whether this matrix similarity is of any use at all but it would be interesting if something can be done using it. We finally define the adjoint kernel by 𝖳_f^*(x,y)=𝖳_f(y,x) and the corresponding operator 𝖳_f^*g(x)=∑_y=0^∞𝖳_f(y,x)g(y). Clearly, by its lower-triangular structure, 𝖳_f^* is well-defined on the whole of ℂ^ℤ_+. From a probabilistic standpoint 𝖳_f^* evolves measures on ℤ_+ according to the dynamics of the Markov chain governed by 𝖳_f. The following constant will appear often throughout the paper.
Given an inhomogeneity sequence 𝐚, define R(𝐚) byR(𝐚)=sup_k∈ℤ_+a_k-inf_k∈ℤ_+a_k.We have the following expansion of functions f in terms of the polynomials p_x(z). When a_x ≡ 1, x∈ℤ_+, this is simply the Taylor expansion of the function f around 1.Let f ∈𝖧𝗈𝗅(ℍ_-R) and 𝒰 be a compact set in ℍ_-R for some R>0. Suppose there exists a (counterclockwise) contour 𝖢_𝐚⊂ℍ_-R containing {a_x}_x∈ℤ_+ such thatsup_k∈ℤ_+sup_u∈𝒰|u-a_k|/inf_w∈𝖢_𝐚|w-a_k|=r<1.Then, the following expansion converges absolutely and uniformly for u ∈𝒰,f(u)=∑_x=0^∞ p_x(u) 𝖳_f(x). If f is entire then for any compact 𝒰⊂ℂ a contour 𝖢_𝐚 satisfying (<ref>) exists and so (<ref>) converges uniformly on compact sets in ℂ. Finally, suppose R>R(𝐚). Then, for ϵ>0 small enough and 𝒰=𝒰_ϵ given by the rectangle𝒰={u∈ℂ:-ϵ≤ℜ(u)≤sup_k∈ℤ_+a_k, -ϵ≤ℑ(u) ≤ϵ}a contour 𝖢_𝐚 satisfying (<ref>) exists.Observe that,-1/a_y1/2πi∮_𝖢_𝐚p_x(w)/p_y+1(w)dw=1_x=y.Thus, we get that with f(u)=p_y(u), and more generally by linearity for f(u) any polynomial, the expansion (<ref>), which is a finite sum, converges uniformly and absolutely for all u∈ℂ.We now assume that 𝖢_𝐚 is picked as in the statement. Recall that f is analytic in ℍ_-R. Take f_N(u) to be the truncated degree N Taylor polynomial in the Taylor expansion of f(u) about a point u_*∈ℝ_+. This expansion converges in the open disk of radius R+u_* centred at u_*. By taking u_* large enough we can get this disk to contain both 𝒰 and 𝖢_𝐚 and thus for the expansion to converge on 𝒰 and 𝖢_𝐚. Moreover note that, from the observation in the paragraph above we have,∑_x=0^∞ p_x(u) 𝖳_f_N(x)=∑_x=0^∞ -1/a_x1/2πip_x(u)∮_𝖢_𝐚f_N(w)/p_x+1(w)dw=f_N(u).Let u∈𝒰 be fixed. We then have the following bound, for any N ∈ℕ and (x,w)∈ℤ_+×𝖢_𝐚, by using (<ref>) and the fact that f_N converges uniformly to f in 𝖢_𝐚 and thus it is uniformly bounded,|1/a_xp_x(u)f_N(w)/p_x+1(w)|≲∏_k=0^x |u-a_k|/|w-a_k|≲ r^x.Thus, by the dominated convergence theorem we have, for fixed u ∈𝒰,∑_x=0^∞ p_x(u)𝖳_f_N(x) N →∞⟶∑_x=0^∞ p_x(u)𝖳_f(x).On the other hand, we know that f_N(u) → f(u) uniformly for u∈𝒰 and this proves the expansion for fixed u∈𝒰. We now show that the series converges uniformly and absolutely in 𝒰. We can bound for any x∈ℤ_+ and for all u∈𝒰, using (<ref>) and the fact that f is uniformly bounded in 𝒰,|p_x(u)𝖳_f(x)| ≲∮_𝖢_𝐚∏_k=0^x |u-a_k|/|w-a_k| dw ≲ r^x.By the Weierstrass M-test the desired conclusion follows.When f is entire then for any compact 𝒰⊂ℂ we can take 𝖢_𝐚 a circle of very large radius centred at the origin and (<ref>) is seen to hold.Finally, suppose R>R(𝐚) and 𝒰=𝒰_ϵ is as in (<ref>). We claim that taking 𝖢_𝐚⊂ℍ_-R a rectangular contour with sides parallel to the real and imaginary axes with the left side being part of the line ℜ(w)=-R+ϵ, the right side part of the line ℜ(w)=M and the upper and lower sides parts of the lines ℑ(w)=± M respectively for some large M works. See Figure <ref> for an illustration of this contour. For such 𝖢_𝐚 we have, by taking ϵ small enough,sup_k∈ℤ_+sup_u∈𝒰|u-a_k|/inf_w∈𝖢_𝐚|w-a_k|≤(sup_k∈ℤ_+a_k+𝒪(ϵ))/(inf_k∈ℤ_+a_k+R+𝒪(ϵ))<sup_k∈ℤ_+a_k/(inf_k∈ℤ_+a_k+R(𝐚))=1.This concludes the proof. We have the following composition property for the 𝖳_f kernels. In the probabilistic setting this is simply the Chapman-Kolmogorov equation. Suppose f,g ∈𝖧𝗈𝗅(ℍ_-R) with R>R(𝐚). Then, we have 𝖳_f 𝖳_g=𝖳_fg.
We compute, by deforming the u contour from 𝖢_𝐚 to a contour 𝖢̃_𝐚⊂𝒰 where 𝒰 is the region defined in (<ref>), which can be done without crossing any poles, and making use of Lemma <ref> and then deforming back to 𝖢_𝐚,𝖳_f𝖳_g(x,y) =-1/a_y1/2πi∮_𝖢̃_𝐚g(u)/p_y+1(u)∑_m=0^∞ p_m(u)(-1/a_m)1/2πi∮_𝖢_𝐚p_x(w)f(w)/p_m+1(w)dw du=-1/a_y1/2πi∮_𝖢_𝐚p_x(u)/p_y+1(u) f(u)g(u) du=𝖳_fg(x,y),as desired. We have the following normalisation result for 𝖳_f. Let f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0. For any x∈ℤ_+ we have∑_y=0^∞𝖳_f(x,y)=f(0). Observe that, since f has no poles inside 𝖢_𝐚, 𝖳_f(x,y)=0 for y< x. Then, we deform 𝖢_𝐚 to a contour 𝖢̃_𝐚⊂ℍ_-ϵ such that sup_w∈𝖢̃_𝐚sup_k∈ℤ_+|a_k/(a_k-w)|=r<1.Such a contour is always possible to find. We can take a rectangular contour with sides parallel to the real and imaginary axes, with the left side being part of the line ℜ(w)=-ϵ' for some 0<ϵ'<ϵ, the right side part of the line ℜ(w)=M and the upper and lower sides parts of the lines ℑ(w)=± M respectively, for some very large M. See Figure <ref> for an illustration. Then, we have, uniformly for w∈𝖢̃_𝐚,∑_y>x1/(a_y p_y+1(w))=-1/(wp_x+1(w)).This follows, after some relabelling, by taking the x →∞ limit, by virtue of the bound (<ref>) above for w∈𝖢̃_𝐚, in the elementary identity∑_k=0^x1/a_k∏_i=0^ka_i/(a_i-w)=-1/w(1-∏_i=0^xa_i/(a_i-w)).Thus we obtain, where we deform to the 𝖢_𝐚,0 contour without crossing any poles,∑_y=0^∞𝖳_f(x,y)=∑_y=x^∞𝖳_f(x,y) =1/2πi∮_𝖢_𝐚,0[1/(wp_x+1(w))-1/(a_xp_x+1(w))]p_x(w)f(w)dw=1/2πi∮_𝖢_𝐚,0f(w)/w dw.Evaluating the integral by picking the residue at 0 gives the result. For g:ℤ_+ →ℂ define the forward and backward discrete derivatives ∇^+ and ∇^- by∇^+g(x)=g(x+1)-g(x), ∇^- g(x)=g(x-1)-g(x).We have the following relation for 𝖳_f that we call the duality relation (the terminology is because this relation is somewhat reminiscent of dualities for Markov processes <cit.> and in particular the Siegmund duality <cit.>; however it is actually not a case of a Siegmund duality or any Markov duality for that matter).Let f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0. We then have-a_x/a_y∇_x^+𝖳_f1_{0,…,y}(x)=𝖳_f(x,y). Observe that, from Lemma <ref> we have∇_x^+𝖳_f1_{0,…,y}(x)=-∇_x^+∑_m>y𝖳_f(x,m).On the other hand, using identity (<ref>), after deforming first to the contour 𝖢̃_𝐚 as in the proof of Lemma <ref> and then to 𝖢_𝐚,0, we get that the last display is equal to-∇_x^+ 1/2πi∮_𝖢_𝐚,0p_x(w)f(w)/(wp_y+1(w))dw.Bringing the discrete derivative ∇^+_x inside the integral, using thata_x∇_x^+p_x(w)/w=-p_x(w)and then deforming back to 𝖢_𝐚 gives the result. Finally, we note that, for fixed λ, the function x↦ p_x(λ) is an eigenfunction of 𝖳_f with explicit eigenvalue. Let f ∈𝖧𝗈𝗅(ℍ_-R) with R>R(𝐚) and λ∈𝒰, the set defined in (<ref>). Then, the function h_λ(x)=p_x(λ) is an eigenfunction of 𝖳_f with eigenvalue f(λ),𝖳_fh_λ(x)=f(λ)h_λ(x), ∀ x ∈ℤ_+.If f is entire then the above holds for all λ∈ℂ.This is an application of Lemma <ref> with the function f(z)h_z(x)=f(z)p_x(z). In the rest of this subsection we collect some general results and comments about the operator/matrix 𝖳_f. These will not be used in any of our probabilistic applications but they are interesting in their own right so we present them here. Define the following spaces ℓ_exp,M(ℤ_+), where M ∈ (1,∞), and ℓ_exp(ℤ_+) by,ℓ_exp,M(ℤ_+) ={(g(y))_y∈ℤ_+∈ℂ^ℤ_+:∃C_M<∞ such that for all y∈ℤ_+, |g(y)|≤ C_M M^y }, ℓ_exp(ℤ_+) ={(g(y))_y∈ℤ_+∈ℂ^ℤ_+:∃M ∈ (0,∞) such that (g(y))_y∈ℤ_+∈ℓ_exp,M(ℤ_+)}.Suppose f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0.
Then, there exists r<1 and C_f,r<∞ such that for all x,y∈ℤ_+ we have|𝖳_f(x,y)|≤ C_f,r r^|y-x|.In particular, 𝖳_f is well-defined acting on ℓ_exp,M(ℤ_+) for M<r^-1. If f is entire then for any 0<δ<1 there exists C_f,δ<∞ such that for all x,y∈ℤ_+,|𝖳_f(x,y)|≤ C_f,δδ^|y-x|,and so in this case 𝖳_f is well-defined acting on ℓ_exp(ℤ_+).For the first statement we can deform the 𝖢_𝐚 contour to the rectangle from the proof of Lemma <ref>, see also Figure <ref>. By taking the absolute value inside the integral we obtain the desired conclusion. When f is entire we can deform 𝖢_𝐚 to a very large circle from which, after bringing the absolute value inside the integral, the result follows.We give a more abstract interpretation of 𝖳_f^* viewed as an operator from a certain sequence space to itself when f is entire. Define the space ℓ_hol by,ℓ_hol={(g(k))_y∈ℤ_+∈ℂ^ℤ_+: ∀0<r< 1,∃C_r<∞ such that for ally∈ℤ_+, |g(y)|≤ C_r r^y }.Observe that, we have the inclusions, with M,q ≥ 1, where ℓ^q are the usual sequence spaces,ℓ_hol(ℤ_+)⊂ℓ^q(ℤ_+) ⊂ℓ_exp,M(ℤ_+) ⊂ℓ_exp(ℤ_+).Now, we consider the map 𝒱 given by, where 𝖧𝗈𝗅(ℂ) denotes the space of entire functions,𝒱:ℓ_hol(ℤ_+)→𝖧𝗈𝗅(ℂ), (g(y))_y∈ℤ_+ ↦ h(z)=∑_y∈ℤ_+g(y) p_y(z),which by the preceding results of this section it is well-defined and moreover it is in fact a bijection with inverse given by,𝒱^-1:𝖧𝗈𝗅(ℂ)→ℓ_hol(ℤ_+),h↦(-1/a_y1/2πi∮_𝖢_𝐚h(z)/p_y+1(z)dz)_y∈ℤ_+=(𝖳_h(y))_y∈ℤ_+.The fact that this sequence has the correct decay properties again follows by deforming 𝖢_𝐚 to a very large circle. Given f∈𝖧𝗈𝗅(ℂ) define the multiplication operator 𝖬𝗎𝗅𝗍_f:𝖧𝗈𝗅(ℂ)→𝖧𝗈𝗅(ℂ) by 𝖬𝗎𝗅𝗍_f(h)=fh. Then, we can see that we have the following representation of 𝖳_f^*:ℓ_hol(ℤ_+) →ℓ_hol(ℤ_+):𝖳_f^*=𝒱^-1𝖬𝗎𝗅𝗍_f𝒱. We now prove that the matrix [𝖳_f(x,y)]_x,y∈ℤ_+, with general sequence 𝐚, is similar to the standard Toeplitz matrix (namely with a_x≡ 1) with symbol f(1-z) using an explicit change of basis matrix. We restrict to entire f for simplicity. The result below can also be derived from (<ref>) but we give a direct computational proof which is instructive. Let f be entire. Let 𝖳_f(x,y) be as in (<ref>) with general and fixed 𝐚 and denote by 𝖳̃_f(x,y) the homogeneous case (standard Toeplitz case) of (<ref>):𝖳̃_f(x,y)=𝖳̃_f(y-x)=-1/2πi∮_|1-z|=1f(z)(1-z)^x/(1-z)^y+1dz=1/2πi∮_|z|=1f(1-z)z^x-y-1dz.Moreover, consider the matrix 𝐀(𝐚) with entries given by, with k,m ∈ℤ_+,𝐀_km(𝐚)=(-1)^k-m(∏_l=0^k-1a_l^-1)e_k-m(1-a_0,1-a_1,…,1-a_k-1)where e_l is the l-th elementary symmetric polynomial:e_l(z_1,z_2,…,z_N)= ∑_1≤ j_1< j_2 < ⋯ < j_l ≤ N z_j_1 z_j_2⋯ z_j_l.Then, for any x,y ∈ℤ_+, we have𝖳_f(x,y)=[𝐀(𝐚)𝖳̃_f𝐀^-1(𝐚)](x,y). Observe that, both {(1-z)^x}_x∈ℤ_+ and {p_x(z)}_x∈ℤ_+ are bases in the ring of polynomials ℂ[z]. 
By virtue of Vieta's formulae and (<ref>) we can see that 𝐀(𝐚) is actually the change of basis matrix, and in particular also invertible,p_k(z) =∑_m∈ℤ_+𝐀_km(𝐚)(1-z)^m,(1-z)^k =∑_m ∈ℤ_+𝐀^-1_km(𝐚)p_m(z).Then, using (<ref>), we can write𝖳_f(x,y)=-1/2πi1/a_y∮_𝖢_𝐚f(z)p_x(z)/p_y+1(z)dz=∑_m ∈ℤ_+𝐀_xm(𝐚)(-1/2πi1/a_y∮_𝖢_𝐚f(z)(1-z)^m/p_y+1(z)dz).By virtue of Lemma <ref>, since f is entire, expanding f(z)(1-z)^m as a series in two waysf(z)(1-z)^m =∑_k∈ℤ_+(-1/2πi∮_|1-z|=1f(z)(1-z)^m/(1-z)^k+1dz) (1-z)^k=∑_k∈ℤ_+𝖳̃_f(m,k)(1-z)^k, f(z)(1-z)^m =∑_k∈ℤ_+(-1/2πi1/a_k∮_𝖢_𝐚f(z)(1-z)^m/p_k+1(z)dz)p_k(z),using (<ref>) and comparing coefficients for p_y(z) we get-1/2πi1/a_y∮_𝖢_𝐚f(z)(1-z)^m/p_y+1(z)dz=∑_l∈ℤ_+𝖳̃_f(m,l)𝐀_ly^-1(𝐚)and this completes the proof.Using Proposition <ref> and assuming the corresponding result for the standard Toeplitz setting, namely a_x≡ 1, we can give quick alternative proofs of Lemmas <ref> and <ref> for general 𝐚 (and entire f). It is also possible to prove Lemma <ref> for general 𝐚 in the same way if we observe that ∑_m=0^∞𝐀_xm^-1(𝐚)=1, for all x∈ℤ_+. However, the important (later on) duality formula from Lemma <ref> does not seem to follow along these lines. §.§ One-dimensional dynamics and Markov transition kernels The following choices for the function f in 𝖳_f in Lemmas <ref>, <ref>, <ref> below are the basic building blocks for the models we study. They correspond to transition probabilities for an inhomogeneous space Bernoulli walk, geometric walk and continuous time pure-birth chain. Let f(z)=1-α z. We have, with x,y∈ℤ_+,𝖳_f(x,y)=α a_x 1_y=x+1+(1-α a_x)1_y=x.Immediate evaluation of the contour integral using the residue formula. Note that, for the above expression to be positive and thus have probabilistic meaning we need 0≤α≤(sup_k a_k)^-1. Let f(z)=(1+β z)^-1. We have, with x,y ∈ℤ_+,𝖳_f(x,y)=1/1+β a_y∏_k=x^y-1β a_k/1+β a_k1_y≥ x.This is again a direct derivation but a little less trivial so we give the details. For the computation we assume that all the a_k's are distinct and then remove this restriction by continuity. After evaluating the contour integral in terms of residues (with all the a_k's distinct) and some relabelling, in order to prove (<ref>), we are required to show that∑_k=0^n ∏_j≠ k1/a_j-a_k1/1+β a_k=β^n ∏_k=0^n1/1+β a_k.Clearing the denominators we need to show∑_k=0^n(-1)^k∏_m>l, l≠ k (a_m-a_l) ∏_l ≠ k (1+β a_l)=β^n ∏_m>l (a_m-a_l).Now, observe that the left hand side can be written as the determinant[ ∏_l≠ 0 (1+β a_l)1a_0⋯a_0^n-2;⋮⋮⋮⋱⋮; ∏_l≠ n (1+β a_l)1a_n⋯a_n^n-2 ].Replace a_n by a variable w and consider the following polynomial of degree n-1 in ww↦[ ∏_l≠ 0^n-1 (1+β a_l)(1+β w) 1 a_0 ⋯ a_0^n-2; ⋮ ⋮ ⋮ ⋱ ⋮; ∏_l=0^n-1 (1+β a_l) 1 w ⋯ w^n-2 ].It has roots at w=a_0, a_1, …, a_n-1. So it is equal to C_n^β∏_i=0^n-1(w-a_i)where C_n^β is the coefficient of the w^n-1 term. This term is obtained from w^n-2[ ∏_l≠ 0 (1+β a_l)(1+β w) 1 a_0 ⋯ a_0^n-2; ⋮ ⋮ ⋮ ⋱ ⋮; ∏_l≠ n-1 (1+β a_l)(1+β w) 1 w ⋯ w^n-2 ]=w^n-2(1+β w)[ ∏_l≠ 0 (1+β a_l)1a_0⋯a_0^n-3;⋮⋮⋮⋱⋮; ∏_l≠ n-1 (1+β a_l)1a_n-1⋯a_n-1^n-3 ].Hence, we get C_n^β=β[ ∏_l≠ 0 (1+β a_l)1a_0⋯a_0^n-3;⋮⋮⋮⋱⋮; ∏_l≠ n-1 (1+β a_l)1a_n-1⋯a_n-1^n-3 ]and the claim follows by induction. Again, for the above expression for 𝖳_f(x,y) to be positive we need β≥ 0. The probability that a pure-birth chain having jump rate a_k at location k∈ℤ_+ goes from x∈ℤ_+ to y∈ℤ_+ in time t∈ℝ_+ is given by 𝖳_f(x,y) with f(z)=e^-tz. This is well-known. One easily shows that 𝖳_e^-tz(x,y) solves the corresponding Kolmogorov equation. 
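The three one-particle kernels above lend themselves to a quick numerical sanity check: for fixed x,y the defining contour integral can be evaluated on any circle that encloses a_0,…,a_y and, in the geometric case, avoids the pole at w=-1/β. The short sketch below (Python; the sequence 𝐚, the contour and the parameters α, β are arbitrary test values of ours, and we write p_x(w)=∏_{i=0}^{x-1}(1-w/a_i), consistent with the relation a_x∇_x^+p_x(w)=-wp_x(w) used above) compares the integral with the closed-form Bernoulli and geometric expressions.

```python
import numpy as np

# Numerical check of the Bernoulli and geometric kernels: evaluate
#   T_f(x,y) = -(1/a_y) (2*pi*i)^{-1} \oint_C p_x(w) f(w) / p_{y+1}(w) dw
# on a circle C enclosing a_0,...,a_y but not the geometric pole at w = -1/beta,
# and compare with the closed forms.  All numerical choices below are ours.

rng = np.random.default_rng(0)
a = 1.0 + rng.random(12)                      # test sequence, a_k in (1, 2)
alpha, beta = 0.3, 0.7                        # 0 <= alpha <= 1/sup_k a_k,  beta >= 0

def p(x, w):                                  # p_x(w) = prod_{i<x} (1 - w/a_i)
    out = np.ones_like(w, dtype=complex)
    for i in range(x):
        out = out * (1.0 - w / a[i])
    return out

def T(f, x, y, centre=1.5, radius=1.0, n=4096):
    theta = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    w = centre + radius*np.exp(1j*theta)
    dw = 1j*radius*np.exp(1j*theta)*(2.0*np.pi/n)
    val = (-1.0/a[y]) * np.sum(p(x, w)*f(w)/p(y + 1, w)*dw) / (2j*np.pi)
    return val.real

x = 2
print(T(lambda w: 1.0 - alpha*w, x, x),     1.0 - alpha*a[x])      # Bernoulli: stay
print(T(lambda w: 1.0 - alpha*w, x, x + 1), alpha*a[x])            # Bernoulli: jump
for y in range(x, x + 4):                                          # geometric closed form
    closed = np.prod([beta*a[k]/(1.0 + beta*a[k]) for k in range(x, y)]) / (1.0 + beta*a[y])
    print(T(lambda w: 1.0/(1.0 + beta*w), x, y), closed)
```

The pure-birth kernel, with symbol f(w)=e^{-tw}, can be checked in exactly the same way against the solution of the Kolmogorov equation.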
A derivation can be found, for example, in <cit.>. § AN INTERTWINING§.§ Intertwining for non-intersecting paths In this section, in a sense that will be clearer in the sequel, we prove that the dynamics on single levels of the interlacing arrays are consistent from N to N+1. We need some definitions and notation.Given f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0 define the following kernel 𝖯^(N)_f on 𝕎_N by𝖯_f^(N)(𝐱,𝐲)=(𝖳_f(x_i,y_j))_i,j=1^N. We define the following non-negative kernel Λ_N+1,N from 𝕎_N+1 to 𝕎_N by Λ_N+1,N(𝐲,𝐱)=∏_i=1^N1/a_x_i1_𝐱≺𝐲, 𝐱∈𝕎_N, 𝐲∈𝕎_N+1.We have the following alternative description of Λ_N+1,N using the well-known fact that 1_𝐱≺𝐲 can be written as a determinant with indicator function entries <cit.>. To do this, we will use the standard notational device in the setting of extending the set ℤ_+ by an extra symbol that we denote by 𝗏𝗂𝗋𝗍. Let N≥ 1. Define the function ϕ: (ℤ_+∪{𝗏𝗂𝗋𝗍})×ℤ_+ by, with y∈ℤ_+,ϕ(x,y)= -a_x^-11_y>x,x∈ℤ_+1,x=𝗏𝗂𝗋𝗍.Then, with x_N+1=𝗏𝗂𝗋𝗍 we haveΛ_N+1,N(𝐲,𝐱)=(ϕ(x_i,y_j))_i,j=1^N+1.We observe the composition property. Let f,g ∈𝖧𝗈𝗅(ℍ_-R) with R>R(𝐚). Then, we have 𝖯_f^(N)𝖯_g^(N)=𝖯_fg^(N).This is a direct application of the Cauchy-Binet formula and Lemma <ref>. For certain choices of functions f we obtain that 𝖯_f^(N) has a probabilistic interpretation, for any N≥ 1, in terms of non-intersecting paths. Of course, for N=1 this is already a consequence of Lemmas <ref>, <ref> and <ref>. Let f(z) be a (possibly infinite) product of factors of the form 1-α_iz, (1+β_i z)^-1, e^-tz, where 0≤α_i ≤(sup_k a_k)^-1, 0≤β_i<∞ and t≥ 0, such that f ∈𝖧𝗈𝗅(ℍ_-R). Then, 𝖯^(N)_f is non-negative.If f(z)=e^-tz, then 𝖯^(N)_f, by virtue of Lemma <ref>, corresponds to the Karlin-McGregor <cit.> transition kernel of independent pure-birth chains killed when they intersect. If instead f(z) is a finite product of factors of the form 1-α_iz, (1+β_i z)^-1 satisfying 0≤α_i ≤(sup_k a_k)^-1, 0≤β_i<∞, then 𝖯^(N)_f, by virtue of Lemma <ref> and Lemma <ref> is given by the Lindstrom-Gessel-Viennot (LGV) formula for non-intersecting walks on a weighted directed acyclic graph <cit.>, see Figure <ref> for an illustration. It is the transition probability of independent walks, taking either Bernoulli or geometric steps, in discrete-time, killed when they intersect. Finally, the general product case follows from Proposition <ref> and a limit in the number of factors.The following intertwining relation between the transition kernels(although note that at this stage positivity is not yet required) 𝖯_f^(N) and 𝖯_f^(N+1) is at the heart of many of our results. We give a direct proof but different arguments for a proof will also be presented in the sequel. Let N≥ 1. Let f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0, with f(0)=1. Then, we have the intertwining 𝖯^(N+1)_fΛ_N+1,N=Λ_N+1,N𝖯^(N)_f.We first compute the left hand side using Cauchy-Binet formula to obtain, where we use Lemma <ref> and the fact that f(0)=1,𝖯^(N+1)_fΛ_N+1,N(𝐲,𝐱) =∑_𝐳∈𝕎_N+1(𝖳_f(y_i,z_j))_i,j=1^N+1(ϕ(x_i,z_j))_i,j=1^N+1=[ 𝖳_fϕ^*(y_1,x_1) ⋯ 𝖳_fϕ^*(y_1,x_N) 1; ⋮ ⋮ ⋮ ⋮; 𝖳_fϕ^*(y_N+1,x_1) ⋯ 𝖳_fϕ^*(y_N+1,x_N) 1 ]. Here, and below, ϕ^*(x,y)=ϕ(y,x) and ϕ^* 𝖳_f and 𝖳_fϕ^* denotes convolution (not multiplication). 
We now compute the right hand side by expanding along the last column and using Cauchy-Binet againΛ_N+1,N𝖯^(N)_f(𝐲,𝐱) =∑_𝐳∈𝕎_N(ϕ(z_i,y_j))_i,j=1^N+1(𝖳_f(z_i,x_j))_i,j=1^N=∑_l=0^N+1 (-1)^N+1-l(ϕ^* 𝖳_f(y_i,x_j))_i=1,…,N+1,i≠ l;j=1,…,N=[ ϕ^*𝖳_f(y_1,x_1) ⋯ ϕ^*𝖳_f(y_1,x_N) 1; ⋮ ⋮ ⋮ ⋮; ϕ^*𝖳_f(y_N+1,x_1) ⋯ ϕ^*𝖳_f(y_N+1,x_N) 1 ].We note that from Lemma <ref> we have a_y^-1𝖳_f(y,x)=a_x^-1∇_y^+ ∑_z>x𝖳_f(y,z)and thus by summing over y we obtainϕ^*𝖳_f(y,x)=𝖳_fϕ^*(y,x)-𝖳_fϕ^*(0,x).Now, using the identity above and column operations we obtain the desired equality. We now go on to obtain an analogous intertwining for the normalised versions of 𝖯_f^(N) and Λ_N+1,N, to be defined shortly. We need the following definition. For any N≥ 1 we define the strictly positive function 𝔥_N(𝐱)=𝔥_N(𝐱;𝐚), for 𝐱∈𝕎_N, recursively by 𝔥_1(x)=1 and 𝔥_N+1(𝐱)=Λ_N+1,N𝔥_N(𝐱).Observe that, by comparing the definitions of the two functions, we get that 𝔥_N(𝐱)=dim_N(𝐱) from Section <ref> on the inhomogeneous Gelfand-Tsetlin graph.From Theorem <ref> we immediately obtain the following. Let N≥ 1. Let f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0, with f(0)=1. Then 𝖯_f^(N)𝔥_N=𝔥_N. Thus, by a Doob h-transform by 𝔥_N, see <cit.>, we can define the following kernels 𝔏_N+1,N and 𝔓_f^(N), from 𝕎_N+1 to 𝕎_N and from 𝕎_N to itself respectively. By construction, 𝔏_N+1,N is Markov. Similarly, as long as f is such that 𝖯_f^(N) is non-negative, 𝔓_f^(N) is also Markov.Let N≥ 1. Let f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0, with f(0)=1. Define 𝔓_f^(N)(𝐱,𝐲) =𝔥_N(𝐲)/𝔥_N(𝐱)𝖯_f^(N)(𝐱,𝐲),𝐱,𝐲∈𝕎_N, 𝔏_N+1,N(𝐲,𝐱) =𝔥_N(𝐱)/𝔥_N+1(𝐲)Λ_N+1,N(𝐲,𝐱), 𝐱∈𝕎_N, 𝐲∈𝕎_N+1. Here, we are slightly abusing notation as we have already defined 𝔓^(N)_f(𝐱,𝐲) for special choices of f=f_s,t in (<ref>). By virtue of Lemma <ref> the expressions (<ref>) and (<ref>) are one and the same. Observe that, the following theorem is then a direct consequence of Theorem <ref> and the above definitions. Let N≥ 1. Let f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0, with f(0)=1. Then,𝔓^(N+1)_f𝔏_N+1,N=𝔏_N+1,N𝔓^(N)_f. §.§ Intertwinings for determinantal kernels In this subsection we introduce a slightly more general framework for intertwinings involving determinants which may be of independent interest. As we explain below, from this setup it would be possible to anticipate Theorem <ref>, although we do not give the details required for a complete independent proof. Fix N≥ 1. We are given the following data: for 1≤ j ≤ N+1, parameters λ_j ∈ℂ, with λ_j ≠λ_k for j≠ k, and corresponding functions h_λ_j:ℤ_+→ℂ; for i=1,2,3,4, kernelsA^(i):ℤ_+ × (ℤ_+∪{𝗏𝗂𝗋𝗍})→ℂ, such that, with the series assumed to converge absolutely,A^(i)h_λ_j(x)=∑_y=0^∞ A^(i)(x,y)h_λ_j(y)=c_λ_j^(i)h_λ_j(x)and moreover A^(i)(x,𝗏𝗂𝗋𝗍)=h_λ_N+1(x). Finally, assume that for any 𝐱∈𝕎_n, with 1≤ n≤ N,(h_λ_j(x_i))_i,j=1^n≠ 0.Given the data in the above paragraph, define the kernels, for i=1,2,3,4,A_n^(i)(𝐱,𝐲) =∏_j=1^n1/c_λ_j^(i)(h_λ_j(y_i))_i,j=1^n/(h_λ_j(x_i))_i,j=1^n(A^(i)(x_i,y_j))_i,j=1^n, 𝐱,𝐲∈𝕎_n, n=1,…, N+1,A_N+1,N^(i)(𝐲,𝐱) =∏_j=1^N1/c_λ_j^(i)(h_λ_j(x_i))_i,j=1^N/(h_λ_j(y_i))_i,j=1^N+1(A^(i)(y_i,x_j))_i,j=1^N+1, 𝐱∈𝕎_N,𝐲∈𝕎_N+1.Then, we have the following proposition. Assume the above. Then, for i=1,2,3,4, we have∑_𝐲∈𝕎_nA_n^(i)(𝐱,𝐲)=1, ∑_𝐱∈𝕎_NA^(i)_N+1,N(𝐲,𝐱)=1,where we write x_N+1=𝗏𝗂𝗋𝗍.
Moreover, if we assume, for x,y∈ℤ_+, A^(1)A^(2)(x,y)=A^(3)A^(4)(x,y)+h̃(y)h_λ_N+1(x)for some (possibly identically zero) function h̃:ℤ_+→ℂ we have the intertwining A_N+1^(1)A^(2)_N+1,N=A^(3)_N+1,NA^(4)_N.In the case of A_n^(i), the normalization (<ref>) is a direct consequence of the Cauchy-Binet formula and the eigenfunction relation (<ref>). In the case of A_N,N-1^(i) we expand along the last column, use Cauchy-Binet and the eigenfunction relation (<ref>), from which (<ref>) follows.For the intertwining, we compute the left hand side first, with x_N+1=𝗏𝗂𝗋𝗍 using the Cauchy-Binet formula [A_N+1^(1)A_N+1,N^(2)](𝐲,𝐱)=∏_j=1^N+11/c_λ_j^(1)∏_j=1^N1/c_λ_j^(2)(h_λ_j(x_i))_i,j=1^N/(h_λ_j(y_i))_i,j=1^N+1(A^(1)A^(2)(y_i,x_j))_i,j=1^N+1.Now, we work on the right hand side, we expand along the last column of the size-(N+1) determinant, and use Cauchy-Binet to obtain[A_N+1,N^(3)A_N^(4)](𝐲,𝐱)= ∏_j=1^N+11/c_λ_j^(3)∏_j=1^N1/c_λ_j^(4)(h_λ_j(x_i))_i,j=1^N/(h_λ_j(y_i))_i,j=1^N+1(A^(3)A^(4)(y_i,x_j))_i,j=1^N+1.Now, observe that the last column of the determinants on the left and right hand sides involving the kernels A^(1)A^(2) and A^(3)A^(4) has entries in the i-th row given by c_λ_N+1^(2)h_λ_N+1(x_i) and c_λ_N+1^(3)h_λ_N+1(x_i) respectively. Using relation (<ref>) and column operations we then obtain(A^(1)A^(2)(y_i,x_j))_i,j=1^N+1= c_λ_N+1^(1)/c_λ_N+1^(3)(A^(3)A^(4)(y_i,x_j))_i,j=1^N+1.Thus, we get [A_N+1^(1)A_N+1,N^(2)](𝐲,𝐱) = ∏_j=1^Nc_λ_j^(3)c_λ_j^(4)/c_λ_j^(1)c_λ_j^(2)[A_N+1,N^(3)A_N^(4)](𝐲,𝐱).Since, as we have proven earlier, both sides sum over 𝐱∈𝕎_N to 1 we obtain that the ratio of constants must be equal to 1 (for example, when h̃≡ 0 then this is trivial to see from the eigenfunction relation) and this gives the desired intertwining.Of course, for the above to have any probabilistic meaning one needs to address the non-trivial question of positivity of the various kernels. Wenow explain, without giving the rigorous details, how to get Theorem <ref> from this framework. Let f∈𝖧𝗈𝗅(ℍ_-R), with R>R(𝐚) and f(0)=1, be arbitrary. Take A^(1)=A^(4)=𝖳_f. Let g_λ_N+1(w)=(w-λ_N+1)^-1. Take A^(2)(x,y)=A^(3)(x,y)=-1/a_y1/2πi∮_𝖢_𝐚,0p_x(w)g_λ_N+1(w)/p_y+1(w)dw.Note that, g_λ_N+1∈𝖧𝗈𝗅(ℍ_λ_N+1) and observe that A^(2)=A^(3)=𝖳_g_λ_N+1 for λ_N+1 large and negative (since then the pole at λ_N+1 is not contained in the contour 𝖢_𝐚,0). Take, as h_λ(x)=p_x(λ) which, from Lemma <ref>, is an eigenfunction of the A^(i) with eigenvalues c^(1)_λ_=c^(4)_λ=f(λ) and c^(2)_λ=c_λ^(3)=(λ-λ_N+1)^-1. Finally, from Lemma <ref> all the A^(i) operators commute when λ_N+1<-R(𝐚).We now take λ_1,…, λ_N+1→ 0. First, note that h_λ_j(x) → 1 and as λ_N+1→ 0, A^(2)(x,y)=A^(3)(x,y) →ϕ^*(x,y)=ϕ(y,x) using the representation of ϕ in Lemma <ref>. Moreover, although the operators A^(i) no longer commute if λ_N+1 is small (the reason is that we have an extra residue coming from the pole of g_λ_N+1 at λ_N+1 in 𝖢_𝐚,0) we still get a relation of the form (<ref>) which essentially becomes (<ref>) in the limit. Furthermore, ratios of functions (h_λ_j(x_i))_i,j=1^n combined with the eigevalues c^(i)_λ converge to ratios of functions 𝔥_n, by virtue of the representation of the 𝔥_n given in Lemma <ref>. Putting everything together, and after some manipulations we formally obtain the intertwining in Theorem <ref>.§ SOME COUPLINGSIn this section we introduce certain couplings between the intertwined semigroups from Section <ref> that give rise to the push-block type dynamics introduced in Section <ref>. 
These constructions have their origin, in some sense, in the most basic coalescing random walk model that we discuss next. The following space of two-level interlaced configurations will make its appearance often:𝕎_N,N+1={(𝐱,𝐲)∈𝕎_N×𝕎_N+1:𝐱≺𝐲}.§.§ Couplings from coalescing random walksSuppose we are given a sequence of functions (f_t,t+1(z))_t=0^∞, with either f_t,t+1(z)=1-α_t z or f_t,t+1(z)=(1+β_tz)^-1, where for all t≥ 0, the parameters satisfy 0 ≤α_t≤ (sup_x∈ℤ_+a_x)^-1 and 0≤β_t <∞. In particular, 𝖳_f_t,t+1 corresponds to either an inhomogeneous Bernoulli or geometric jump.For s ≤ t, define the function f_s,t (with f_t,t=1) byf_s,t(z)=f_s,s+1(z)⋯ f_t-1,t(z).At each space-time point (x,t)∈ℤ_+^2 we put an independent Bernoulli random variable 𝖴_x,t which can be one of two kinds depending on f_t,t+1:𝖴_x,t=𝖴_x,t^(α), iff_t,t+1(z)=1-α_tz, 𝖴_x,t^(β), iff_t,t+1(z)=(1+β_tz)^-1,where the Bernoulli variables 𝖴^(α)_x,t, 𝖴^(β)_x,t satisfyℙ(𝖴_x,t^(α)=1) =1-ℙ(𝖴_x,t^(α)=0)=α_t a_x, ℙ(𝖴_x,t^(β)=1) =1-ℙ(𝖴_x,t^(β)=0)=β_t a_x/1+β_t a_x.Then, we do the following: * If 𝖴_x,t^(α)=1 we put an arrow going from (x,t) to (x+1,t+1).* If 𝖴_x,t^(α)=0 we put an arrow going from (x,t) to (x,t+1).* If 𝖴_x,t^(β)=1 we put an arrow going from (x,t) to (x+1,t).* If 𝖴_x,t^(β)=0 we put an arrow going from (x,t) to (x,t+1). Note that, every space-time point has exactly one outgoing arrow but possibly multiple incoming arrows. See Figure <ref> for an illustration. For any times s ≤ t,we define the random map 𝒵_s,t:ℤ_+→ℤ_+ as follows: 𝒵_s,t(x) is the location at time t of the (deterministic) motion starting from x∈ℤ_+ at time s which follows the random arrows in the above construction. See Figure <ref> for an illustration of 𝒵_s,t. Observe that, by construction, almost surely for all t_1 ≤ t_2 ≤ t_3 we have𝒵_t_2,t_3(𝒵_t_1,t_2(x))=𝒵_t_1,t_3(x), ∀ x ∈ℤ_+.We can thus think of (𝒵_s,t)_s≤ t as a stochastic flow of maps. Fixing the starting time s and locations x_1,…,x_N the following process is called the N-point motion of the flowt↦(𝒵_s,t(x_1),…,𝒵_s,t(x_N)),t≥ s.By its very construction this motion is distributed as N random walks starting from locations x_1,…,x_N at time s, moving independently with Bernoulli or geometric jumps (depending on the corresponding functions f_u,u+1 for u≥ s) until any two walks meet at a space-time point (x,t) from which time onwards they move together (as the same random variables are used to drive both their evolutions).We can compute the distribution of this process explicitly.Let N≥ 1 and s≤ t. Let x_1≤⋯≤ x_N and y_1≤⋯≤ y_N. Then,ℙ(𝒵_s,t(x_1)≤ y_1,…,𝒵_s,t(x_N)≤ y_N)=(𝖳_f_s,t1_ 0,y_j(x_i)-1_i<j)_i,j=1^N. We first assume 𝐱=(x_1,…,x_N)∈𝕎_N and then remove this restriction below. Observe that by summing over the LGV-formula we obtainℙ(𝒵_s,t(x_1)≤ y_1<𝒵_s,t(x_2)≤ y_2<⋯ <𝒵_s,t(x_N)≤ y_N)=(𝖳_f_s,t1_ 0,y_j(x_i))_i,j=1^N.Observe that this formula would have simply been zero and thus not immediately useful if the x_i were not distinct. Then, by writingthe indicator1_𝒵_s,t(x_1)≤ y_1,…,𝒵_s,t(x_N)≤ y_Nin terms of indicators1_𝒵_s,t(x_i_1)≤ y_j_1<𝒵_s,t(x_i_2)≤ y_i_2<⋯<𝒵_s,t(x_i_k)≤ y_j_kfor increasing sequences i_1,…,i_k and j_1,…,j_k as explained in detail in Proposition 9 of <cit.> we obtain the desired formula for 𝐱∈𝕎_N. We now explain how to remove the restriction that the coordinates of 𝐱 are distinct. Suppose we have m distinct valuesx_1=⋯=x_i_1-1, x_i_1=⋯=x_i_2-1,…, x_i_m-1=⋯ =x_N.By the coalescence property we have, with i_0=1,ℙ(𝒵_s,t(x_i)≤ y_i,for1 ≤ i ≤ N)=ℙ(𝒵_s,t(x_i_j-1)≤ y_i_j-1,for1 ≤ j ≤ m). 
Thus, we need to show that the size N determinant on the right hand side of (<ref>) boils down to the corresponding size m determinant. Using the special structure of the determinant this is easy to prove. For j=0,…,m-1 subtract row i_j from rows i_j+1,…,i_j+1-1. Then, by swapping consecutive rows bring rows i_0,i_1,…,i_m-1 to positions 1,…, m and similarly by swapping consecutive columns bring columns i_0,i_1,…,i_m-1 to positions 1,…,m. We then get the determinant of a block triangular matrix whose top-left corner is the m× m matrix of interest and whose bottom-right corner is the identity matrix. The conclusion follows. The following definition is central in our argument. Consider the following kernel, defined as the (2N+1)× (2N+1) determinant, with s≤ t and (𝐱,𝐲),(𝐱',𝐲') ∈𝕎_N,N+1,𝒬_s,t^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]=[ 𝒜_s,t(𝐲,𝐲') ℬ_s,t(𝐲,𝐱'); 𝒞_s,t(𝐱,𝐲') 𝒟_s,t(𝐜,𝐱') ], where the entries of the matrices 𝒜_s,t(𝐲,𝐲'),ℬ_s,t(𝐲,𝐱'), 𝒞_s,t(𝐱,𝐲') and 𝒟_s,t(𝐱,𝐱') of sizes (N+1)× (N+1), (N+1)× N, N× (N+1) and N× N respectively are given by 𝒜_s,t(𝐲,𝐲')_ij =-∇_y_j'^-𝖳_f_s,t1_0,y_j'(y_i)=𝖳_f_s,t(y_i,y_j), ℬ_s,t(𝐲,𝐱')_ij =1/a_x_j'(𝖳_f_s,t1_ 0,y_j'(y_i)-1_j≥ i), 𝒞_s,t(𝐱,𝐲')_ij =a_x_i∇_x_i^+∇_y_j'^-𝖳_f_s,t1_0,y_j'(x_i), 𝒟_s,t(𝐱,𝐱')_ij =-a_x_i/a_x_j'∇_x_i^+𝖳_f_s,t1_0,x_j'(x_i)=𝖳_f_s,t(x_i,x_j'). One could replace the function f_s,t in the definition above by a general f∈𝖧𝗈𝗅(ℍ_-ϵ) for some ϵ>0 to define a kernel 𝒬_f^N,N+1. Then, from an analytic standpoint the results in Propositions <ref> and <ref> extend to this setting as well. What is not clear however is whether they have any probabilistic meaning for more general f. We have, where (𝐱,𝐲),(𝐱',𝐲') ∈𝕎_N,N+1,𝒬_s,t^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]=∏_i=1^Na_x_i/a_x_i'(-∇_x_i^+)∏_i=1^N+1(-∇_y_i'^-)ℙ(𝒵_s,t(y_i)≤ y_i', 𝒵_s,t(x_j)≤ x_j',for alli,j).This follows by virtue of Proposition <ref> and the explicit Definition <ref> of 𝒬_s,t^N,N+1and careful inspection of the two formulae.Let s≤ t. Viewed as a kernel from 𝕎_N,N+1 to itself, 𝒬^N,N+1_s,t is sub-Markov:𝒬_s,t^N,N+1[(𝐱,𝐲),(𝐱',𝐲')] ≥ 0, ∀ (𝐱,𝐲), (𝐱',𝐲')∈𝕎_N,N+1, ∑_(𝐱',𝐲')∈𝕨_N,N+1𝒬_s,t^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]≤ 1, ∀ (𝐱,𝐲)∈𝕎_N,N+1.Proving non-negativity directly from the determinant formula is actually tricky. We use the connection to the flow 𝒵_s,t via Proposition <ref> instead. Observe that, by the very construction of the coalescing flow 𝒵_s,t the events {𝒵_s,t(x_i)≤ x_i',𝒵_s,t(y_i)≤ y_j', for alli,j} are increasing as the variables x_i decrease and the variables y_j' increase. The claim then follows from the representation given in Proposition <ref> of 𝒬^N,N+1_s,t in terms of the probability of such events. We now move to the second item of the statement. We show that ∑_𝐲':(𝐱',𝐲')∈𝕎_N,N+1𝒬_s,t^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]=(𝒟_s,t(x_i,x_j))_i,j=1^N=𝖯^(N)_f_s,t(𝐱,𝐱').Note that, this is nothing else but the LGV formula <cit.> for non-intersecting paths discussed in Proposition <ref> so it is sub-Markov and hence the sum over 𝕎_N,N+1 of 𝒬^N,N+1_s,t is indeed less than 1. The above claim can be seen in two ways (which are essentially equivalent). First, by direct computation. Use multinearity to bring the sum over 𝐲' inside the determinant definition of 𝒬^N,N+1_s,t and then use the relations (we use the convention x_N+1'≡∞ and x_0'≡ -1)∑_y_j'=x_j-1'+1^x_j'𝒜_s,t(𝐲,𝐲')_ij =𝖳_f_s,t1_ 0,x_j'(y_i)-𝖳_f_s,t1_ 0,x_j-1'(y_i), ∑_y_j'=x_j-1'+1^x_j'𝒞_s,t(𝐱,𝐲')_ij =-a_x_i∇_x_i^+𝖳_f_s,t1_ 0,x_j'(x_i) +a_x_i∇_x_i^+𝖳_f_s,t1_ 0,x_j-1'(x_i).By some simple row-column operations the claim follows. 
Second, using the flow representation in Proposition <ref> take the sum over 𝐲' to get rid of the discrete derivatives in y_j', then use the formula in Proposition <ref> and conclude by virtue of Lemma <ref>. A more involved argument, that introduces an inverse 𝒵_s,t^-1 to the flow 𝒵_s,t can be used to establish the semigroup property 𝒬^N,N+1_t_1,t_2𝒬^N,N+1_t_2,t_3=𝒬^N,N+1_t_1,t_3, for t_1≤ t_2 ≤ t_3. The actual dynamics of the induced process on 𝕎_N,N+1, as far as we can tell, cannot be obtained directly using this connection and we need to take a different approach in the next subsections. We believe that 𝒬_s,t^N,N+1 should correspond to the transition probabilities of two-level dynamics taking sequential-update Bernoulli steps or Warren-Windridge geometric steps of certain parameters depending on whether each factor f_u,u+1(z) is given by (1-α_uz) or (1+β_uz)^-1. We will prove this in the case of Bernoulli jumps in Proposition <ref> and take yet another approach to consider geometric jumps in Section <ref>.Before discussing such dynamics further we prove a number of intertwining relations (which in fact hold beyond the probabilistic setting, see Remark <ref>). Towards this end, define the projection Π_N:𝕎_N,N+1→𝕎_N by projecting on the 𝐱-coordinate. In particular, for a function g on 𝕎_N we thus define a function Π_Ng(𝐱,𝐲)=g(𝐱) on 𝕎_N,N+1. Let s≤ t. We have[𝒬_s,t^N,N+1Π_N]((𝐱,𝐲),𝐱') =[Π_N𝖯_f_s,t^(N)]((𝐱,𝐲),𝐱'), (𝐱,𝐲)∈𝕎_N,N+1, 𝐱'∈𝕎_N, [𝖯_f_s,t^(N+1)Λ_N+1,N](𝐲,(𝐱',𝐲'))=[Λ_N+1,N𝒬_s,t^N,N+1](𝐲,(𝐱',𝐲')), 𝐲∈𝕎_N+1, (𝐱',𝐲')∈𝕎_N,N+1, where we view Λ_N+1,N given in Definition <ref> as a kernel from 𝕎_N+1 to 𝕎_N,N+1 in the obvious way Λ_N+1,N(𝐲,(𝐱,𝐤))=∏_i=1^N 1/a_x_i1_𝐱≺𝐤1_𝐤=𝐲. The equality (<ref>) is shown using the computation at the end of the proof of Proposition <ref>. For equality (<ref>), we need to sum over 𝐱 such that 𝐱≺𝐲. We use multinearity to bring the sum inside the determinant and the relations∑_x_i=y_i^y_i+1-11/a_x_i𝒞_s,t(𝐱,𝐲')_ij =∇_y_j'^+𝖳_f_s,t1_0,y_j'(y_i+1)-∇_y_j'^+𝖳_f_s,t1_0,y_j'(y_i), ∑_x_i=y_i^y_i+1-11/a_x_i𝒟_s,t(𝐱,𝐱')_ij =-1/a_x_j'𝖳_f_s,t1_0,x_j'(y_i+1)+1/a_x_j'𝖳_f_s,t1_0,x_j'(y_i).Then, the statement follows from simple row-column operations. Observe that, by a combination of displays (<ref>) and (<ref>) we obtain yet another proof of Theorem <ref>. Let s≤ t. The kernel defined by 𝖰^N,N+1_s,t[(𝐱,𝐲),(𝐱',𝐲')]=𝔥_N(𝐱')/𝔥_N(𝐱)𝒬^N,N+1_s,t[(𝐱,𝐲),(𝐱',𝐲')]is a Markov kernel from 𝕎_N,N+1 to itself.By combining Propositions <ref> and <ref> we obtain [𝒬^N,N+1_s,tΠ_N𝔥_N](𝐱,𝐲)=Π_N𝔥_N(𝐱,𝐲),from which the result follows. If by abusing notation we view 𝔏_N+1,N as a Markov kernel from 𝕎_N+1 to 𝕎_N,N+1 as we did with Λ_N+1,N, by combining the results above we readily obtain the intertwinings of Markov kernels. Let s≤ t. Then, we have [𝖰_s,t^N,N+1Π_N]((𝐱,𝐲),𝐱') =[Π_N𝔓_f_s,t^(N)]((𝐱,𝐲),𝐱'), (𝐱,𝐲)∈𝕎_N,N+1, 𝐱'∈𝕎_N, [𝔓_f_s,t^(N+1)𝔏_N+1,N](𝐲,(𝐱',𝐲'))=[𝔏_N+1,N𝖰_s,t^N,N+1](𝐲,(𝐱',𝐲')), 𝐲∈𝕎_N+1, (𝐱',𝐲')∈𝕎_N,N+1.§.§ Sequential-update Bernoulli coupling Given a discrete time process 𝖷(t)∈ℤ_+^N with initial condition 𝖷(0)∈𝕎_Nτ=inf{n:𝖷(n-1)⊀𝖷(n)}.Note that, in case of Bernoulli jumps we haveτ= inf{n:𝖷(n)∉𝕎_N}.This is no longer the case for geometric jumps. There is a subtle difference between the two stopping times although they are connected, see for example [], []. In this section we prove that the formula for 𝒬_s,t^N,N+1 given in Definition <ref> with f_u,u+1(z)=1-α_u z, where 0≤α_i ≤(sup_k a_k)^-1, actually corresponds to the sequential-update Bernoulli push-block dynamics. 
More precisely, we have the following result. Observe that, in the definition of the stochastic process (𝖷(t),𝖸(t);t≥ 0) below the interaction of the 𝖸-components with the 𝖷-components is exactly given by the interaction between consecutive levels in the sequential-update Bernoulli dynamics of Definition <ref>. Consider the following stochastic process (𝖷(t),𝖸(t);t≥ 0) in 𝕎_N×𝕎_N+1 in discrete time initialised at (𝐱,𝐲)∈𝕎_N,N+1. The component (𝖷(t);t≥ 0) evolves autonomously as N independent Bernoulli walks, each with transition probabilities from time s to time s+1 given by 𝖳_(1-α_s z). Then, given the updated component 𝖷(s+1) we update 𝖸(s) to 𝖸(s+1) as follows: * If 𝖸_i(s)=x and 𝖷_i(s+1)=x then 𝖸_i(s+1)=x (block).* If 𝖸_i(s)=x and 𝖷_i-1(s+1)=x then 𝖸_i(s+1)=x+1 (push).* Otherwise, 𝖸_i moves as a Bernoulli random walk with transition probability 𝖳_(1-α_s z) independent of the other coordinates 𝖸_j.The whole process is killed at the first collision time τ=inf{t≥ 0:𝖷(t)∉𝕎_N} of the 𝖷 coordinates, so that in particular (𝖷(t),𝖸(t))∈𝕎_N,N+1 for all t<τ. Then, the transition probabilities from time s to time t of (𝖷(t),𝖸(t);t≥ 0) killed when it exits 𝕎_N,N+1 are given by 𝒬_s,t^N,N+1 with the underlying functions given by f_i,i+1(z)=1-α_i z, where 0≤α_i ≤(sup_k a_k)^-1.Take a test function g:𝕎_N,N+1→ℝ_+. Fix an arbitrary time T≥ 0. Define for any 0≤ t ≤ T, the functions (we drop dependence on N)F(t,T,(𝐱,𝐲)) =∑_(𝐱',𝐲')∈𝕎_N,N+1𝒬^N,N+1_t,T[(𝐱,𝐲),(𝐱',𝐲')]g(𝐱',𝐲'),G(t,T,(𝐱,𝐲)) =𝔼[g(𝖷(T),𝖸(T))1_T<τ|(𝖷(t),𝖸(t))=(𝐱,𝐲), t≤τ].We now show, by backward induction in t, that F and G are equal from which the proposition follows. Observe that,F(T,T,(𝐱,𝐲))=G(T,T,(𝐱,𝐲))=g(𝐱,𝐲), (𝐱,𝐲)∈𝕎_N,N+1.Then, observe that by the way our dynamics work G satisfies the following set of equations (the last three equations (<ref>), (<ref>), (<ref>) define the value of G(t+1,T,(𝐱+δ,𝐲+ϵ)) in (<ref>) below when (𝐱+δ,𝐲+ϵ)∉𝕎_N,N+1), with the first equation holding for 0≤ t≤ T-1 and the rest for 0≤ t ≤ T:G(t,T,(𝐱,𝐲))=∑_δ_i,ϵ_i ∈{0,1}∏_i=1^N 𝖳_f_t,t+1(x_i,x_i+δ_i)∏_i=1^N+1𝖳_f_t,t+1(y_i,y_i+ϵ_i)G(t+1,T,(𝐱+δ,𝐲+ϵ)),∇_y_i^+G(t,T,(𝐱,𝐲))|_y_i=x_i=0(i-th particle is blocked),∇_y_i+1^+G(t,T,(𝐱,𝐲))|_y_i+1=x_i=0((i+1)-th particle is pushed),G(t,T,(𝐱,𝐲))|_x_i=x_i+1=0 (process killed at τ).These determine G(t,T,(𝐱,𝐲)) starting from G(T,T,(𝐱,𝐲))=g(𝐱,𝐲). We now show that F(t,T,(𝐱,𝐲)) satisfies the same equations which completes the proof. 
First, it readily follows from the form of the entries 𝒞_s,t(𝐱,𝐲')_ij and 𝒟_s,t(𝐱,𝐱')_ij that, for 0≤ t ≤ T:F(t,T,(𝐱,𝐲))|_x_i=x_i+1=0.The boundary condition ∇_y_i^+F(t,T,(𝐱,𝐲))|_y_i=x_i=0,follows from the relations∇_y_i^+𝒜_t,T(𝐲,𝐲')_ij|_y_i=x_i =-a_x_i^-1𝒞_t,T(𝐱,𝐲')_ij, ∇_y_i^+ℬ_t,T(𝐲,𝐱')_ij|_y_i=x_i =-a_x_i^-1𝒟_t,T(𝐱,𝐱')_ij,while the boundary condition∇_y_i+1^+F(t,T,(𝐱,𝐲))|_y_i+1=x_i=0,follows from the relations∇_y_i+1^+𝒜_t,T(𝐲,𝐲')_i+1j|_y_i+1=x_i =-a_x_i^-1𝒞_t,T(𝐱,𝐲')_ij, ∇_y_i+1^+ℬ_t,T(𝐲,𝐱')_i+1j|_y_i+1=x_i =-a_x_i^-1𝒟_t,T(𝐱,𝐱')_ij.Finally, the equation F(t,T,(𝐱,𝐲))=∑_δ_i,ϵ_i ∈{0,1}∏_i=1^N 𝖳_f_t,t+1(x_i,x_i+δ_i)∏_i=1^N+1𝖳_f_t,t+1(y_i,y_i+ϵ_i)F(t+1,T,(𝐱+δ,𝐲+ϵ))follows from multinearity of the determinant and the following set of equations for the individual entries𝒜_t,T(𝐲,𝐲')_ij =∑_ϵ∈{0,1}𝖳_f_t,t+1(y_i,y_i+ϵ)𝒜_t+1,T(𝐲+ϵ,𝐲')_ij, 𝒟_t,T(𝐱,𝐱')_ij =∑_ϵ∈{0,1}𝖳_f_t,t+1(x_i,x_i+ϵ)𝒟_t+1,T(𝐱+ϵ,𝐱')_ij, ℬ_t,T(𝐲,𝐱')_ij =a_x_j'^-1(𝖳_f_t,T1_0,x_j'(y_i)-1_j≥ i)= a_x_j'^-1(∑_ϵ∈{0,1}𝖳_f_t,t+1(y_i,y_i+ϵ)𝖳_f_t+1,T1_0,x_j'(y_i)-∑_ϵ∈{0,1}𝖳_f_t,t+1(y_i,y_i+ϵ)1_j≥ i)=∑_ϵ∈{0,1}𝖳_f_t,t+1(y_i,y_i+ϵ)ℬ_t+1,T(𝐲+ϵ,𝐱')_ij, 𝒞_t,T(𝐱,𝐲')_ij =a_x_i∇_x_i^+∇_y_j^-𝖳_f_t,T1_0,y_j'(x_i)=-∇_y_j'^-[a_y_j'𝖳_f_t,T(x_i,y_j')] =-∇_y_j'^-[a_y_j'∑_ϵ∈{0,1}𝖳_f_t,t+1(x_i,x_i+ϵ)𝖳_f_t+1,T(x_i+ϵ,y_j')]=∑_ϵ∈{0,1}𝖳_f_t,t+1(x_i,x_i+ϵ) 𝒞_t+1,T(𝐱+ϵ,𝐲')_ij.This completes the proof.§.§ Warren-Windridge geometric dynamics coupling In section we take a different approach to study the transition probabilities of the Warren-Windridge geometric dynamics. We take f_0,1(z)=(1+β z)^-1. To ease notation we write 𝖳(x,y) for 𝖳_f_0,1(x,y) and 𝖯^(N) and 𝔓^(N) for 𝖯_f_0,1^(N) and 𝔓_f_0,1^(N) respectively. We write 𝒢^N,N+1[(𝐱,𝐲),(𝐱',𝐲')] for the transition probabilities of the following single-step discrete-time stochastic process (𝖷(t),𝖸(t);t=0,1)in 𝕎_N,N+1. The 𝖷-component moves autonomously, with 𝖷(1) given 𝖷(0)=𝐱 distributed as 𝔓^(N)(𝗑,·). Given the updated component 𝖷(1), we update component 𝖸 as follows: * Coordinate 𝖸_i is first moved to the intermediate position max{𝖸_i(0),𝖷_i-1(1)+1} (push).* 𝖸_i subsequently attempts to jump to z≥max{𝖸_i(0),𝖷_i-1(1)+1} with the inhomogeneous geometric probability 𝖳(max{𝖸_i(0),𝖷_i-1(1)+1},z). * All jumps to z≥𝖷_i(0)=x_i are suppressed and in such case 𝖸_i(1) is taken to be x_i (block). Of course, the process (𝖷(t),𝖸(t);t=0,1) can be extended to all times t≥ 0. For the s-th step we simply replace f_0,1(z) by f_s,s+1(z)=(1+β_s z)^-1 and all other quantities are defined analogously using f_s,s+1. All the results that follow go through with the obvious notational modification.Observe that, in the definition of the stochastic process (𝖷(t),𝖸(t);t=0,1) above the interaction of the 𝖸-components with the 𝖷-components is exactly given by the interaction between consecutive levels in the Warren-Windridge geometric dynamics of Definition <ref>. As mentioned previously we believe[We have shown this for N=1 by tediously checking all possibilities.] that 𝖰_0,1^N,N+1, with f_0,1(z)=(1+β z)^-1, should correspond to Warren-Windridge geometric dynamics step with parameter β.In particular, we believe that the question mark on the equality below should be removed𝒢^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]?=𝖰_0,1^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]. However, when geometric jumps are involved, a proof along the lines of the one given for Proposition <ref> above turns out to be tricky. So we take a different route inspired by the work of Warren-Windridge <cit.> in the homogeneous case. 
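Before turning to that computation, a small simulation sketch may help visualise the push-block mechanism in the definition of (𝖷(t),𝖸(t)) above. It implements only the update of the 𝖸-level given an admissible transition of the 𝖷-level (the autonomous 𝖷-update by 𝔓^{(N)} is not simulated); the sequence 𝐚, the parameter β and the test configurations below are arbitrary choices of ours.

```python
import numpy as np

# One Warren-Windridge geometric update of the upper level Y, given X(0) -> X(1):
# Y_i is first pushed to max{Y_i(0), X_{i-1}(1)+1}, then attempts an inhomogeneous
# geometric jump, and any jump to z >= X_i(0) is blocked at X_i(0).

rng = np.random.default_rng(1)
a = 1.0 + 0.5*rng.random(200)                  # inhomogeneity a_0, a_1, ... (test values)
beta = 0.8

def geo_step(x):
    """Sample y >= x from T(x,y) = (1+beta*a_y)^{-1} prod_{k=x}^{y-1} beta*a_k/(1+beta*a_k),
    truncated at the end of the test sequence."""
    y = x
    while y < len(a) - 1:
        if rng.random() < 1.0/(1.0 + beta*a[y]):   # stop at y
            break
        y += 1                                     # otherwise move one site to the right
    return y

def update_Y(Y0, X0, X1):
    """One step of the Y-level given the lower-level transition X(0) -> X(1)."""
    N = len(X0)
    Y1 = []
    for i in range(N + 1):
        pos = Y0[i] if i == 0 else max(Y0[i], X1[i-1] + 1)   # push by X_{i-1}(1)
        z = geo_step(pos)
        if i < N and z >= X0[i]:                             # block by X_i(0)
            z = X0[i]
        Y1.append(z)
    return Y1

X0, X1 = [1, 4, 8], [2, 5, 9]     # an admissible transition of the lower level (ours)
Y0 = [0, 3, 6, 10]                # interlaces with X0:  Y0[i] <= X0[i] < Y0[i+1]
print(update_Y(Y0, X0, X1))
```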
Observe that, we have𝒢^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]=𝔓^(N)(𝐱,𝐱') ℛ^N,N+1(𝐲';𝐱',𝐱,𝐲)where we writeℛ^N,N+1(𝐲';𝐱',𝐱,𝐲)=ℙ(𝖸(1)=𝐲'|𝖷(1)=𝐱,𝖷(0)=𝐱,𝖸(0)=𝐲). Our starting point is the following explicit expression for ℛ^N,N+1. We have, with (𝐱,𝐲),(𝐱',𝐲')∈𝕎_N,N+1,ℛ^N,N++1(𝐲';𝐱',𝐱,𝐲)=𝖡𝗅𝗄(y_1',y_1,x_1)∏_n=2^N𝖻(x_n,y_n')𝖯𝗌𝗁(y_n',y_n,x_n-1') 𝖯𝗌𝗁(y_N+1',y_N+1,x_N'),where the factors are defined by 𝖡𝗅𝗄(y',y,x) =𝖳(y,y')𝖻(x,y'), 𝖯𝗌𝗁(y',y,x') =𝖳(y,y')1_x'<y+𝖳(x'+1,y')1_x'≥ y, 𝖻(x,y') =1_y'<x+(1+β a_y')1_y'=x. The 𝖯𝗌𝗁 factor is a direct translation in symbols of the pushing mechanism. The fact that the 𝖡𝗅𝗄 factor corresponds to blocking can be seen by multiplying out 𝖳(y,y') with 𝖻(x,y') and using the identity, with y≥ x,∑_m=x^∞𝖳(y,m)=𝖳(y,x)(1+β a_x).This follows from the trivial to check identity, for y≤ x ≤ m,𝖳(y,m)/𝖳(y,x)=(1+β a_x)𝖳(x,m).Similarly, the fact that the factor with subscript n in the product corresponds to the interaction of 𝖸_n with 𝖷_n-1(1) by pushing and 𝖷_n(0) by blocking can be seen by multiplying out 𝖯𝗌𝗁(y',y,x') with 𝖻(x,y') and the relation ∑_m=x^∞𝖯𝗌𝗁(m,y,x')=𝖯𝗌𝗁(x,y,x')(1+β a_x),which is again a consequence of the previous identity.We now go on to prove an analogue of Proposition <ref> with 𝖰_0,1^N,N+1 replaced by 𝒢^N,N+1 (clearly, if we knew equality (<ref>) were true there would be nothing to show). The following identity is the key ingredient in the computation. We have ∑_x_i=y_i'^x_i'∧ (y_i+1-1)1/a_x_i𝖳(x_i,x_i')𝖡𝗅𝗄(y_i',y_i,x_i) 𝖯𝗌𝗁(y_i+1',y_i+1,x_i')=1/a_x_i'𝖳(y_i,y_i')𝖳(y_i+1,y_i+1') 1_y_i≤ y_i'<y_i+1≤ y_i+1'1_y_i'≤ x_i'<y_i+1'. Observe that, the left hand side is zero if we do not have y_i'<y_i+1 or if we do not have y_i'≤ x_i'<y_i+1'. Thus, we can restrict to this scenario in which case the indicator functions on the right are both 1. By using 𝖡𝗅𝗄(y_i',y_i,x_i)=𝖳(y_i,y_i')𝖻(x_i,y_i') and cancelling out 𝖳(y_i,y_i') it will suffice to establish the following claim∑_x_i=y_i'^x_i'∧ (y_i+1-1)a_x_i'/a_x_i𝖳(x_i,x_i')(1_y_i'<x_i+(1+β a_y_i')1_x_i=y_i')(𝖳(y_i+1,y_i+1')1_x_i'<y_i+1+𝖳(x_i'+1,y_i+1')1_x_i'≥ y_i+1) =𝖳(y_i+1,y_i+1').Let us denote by ℰ(y_i') the left hand side above as a function of y_i'. We will now show that ℰ(x_i'∧ (y_i+1-1))=𝖳(y_i+1,y_i+1') and that ℰ(m)=ℰ(m-1) from which the claim follows. Suppose x_i'<y_i+1 then x_i'∧ (y_i+1-1). It is immediate then that ℰ(x_i'∧ (y_i+1-1))=𝖳(y_i+1,y_i+1'). If instead we suppose x_i'≥ y_i+1 then x_i=y_i'=y_i+1-1. After some elementary manipulations using the explicit formulae we again obtain ℰ(x_i'∧ (y_i+1-1))=𝖳(y_i+1,y_i+1'). We now show ℰ(m)=ℰ(m-1). We observe that the sums ∑_x_i=m+1^x_i∧ (y_i+1-1) on both sides of the desired identity ℰ(m)=ℰ(m-1) are equal (the summands are exactly equal). We thus need to check that the sum ∑_x_i=m^m on the left hand side and the sum ∑_x_i=m-1^m are equal. After some manipultions this boils down to proving the identity1/a_m𝖳(m,x_i')(1+β a_m)=1/a_m𝖳(m,x_i')+1/a_m-1𝖳(m-1,x_i')(1+β a_m-1).Elementary manipulations readily establish this. We finally prove the required intertwinings.We have the intertwinings [𝒢^N,N+1Π_N]((𝐱,𝐲),𝐱') =[Π_N𝔓^(N)]((𝐱,𝐲),𝐱'), (𝐱,𝐲)∈𝕎_N,N+1, 𝐱'∈𝕎_N, [𝔓^(N+1)𝔏_N+1,N](𝐲,(𝐱',𝐲'))=[𝔏_N+1,N𝒢^N,N+1](𝐲,(𝐱',𝐲')), 𝐲∈𝕎_N+1, (𝐱',𝐲')∈𝕎_N,N+1. The proof of (<ref>) is immediate fromthe definition of 𝒢^N,N+1. For identity(<ref>) we make repeated use of Proposition <ref>. 
The right hand side of (<ref>) is equal to𝔥_N(𝐱')/𝔥_N+1(𝐲)∑_𝐱≺𝐲∏_i=1^N 1/a_x_i∏_i=1^N 𝖳(x_i,x_i')1_𝐱≺𝐱'ℛ^N,N+1(𝐲';𝐱',𝐱,𝐲).Now, recall that 𝖯^(N)(𝐱,𝐱')=(𝖳(x_i,x_j'))_i=1^N=∏_i=1^N 𝖳(x_i,x_i')1_𝐱≺𝐱'.Moreover, we can restrict the sum above to a sum over x_i∈ y_i',x_i'∧ (y_i+1-1) for otherwise the summand is zero and use the form of ℛ^N,N+1 from Proposition <ref> to obtain that the right hand side of (<ref>) is equal to𝔥_N(𝐱')/𝔥_N+1(𝐲)∑_x_i∈ y_i',x_i'∧ (y_i+1-1) ∏_i=1^N 1/a_x_i∏_i=1^N 𝖳(x_i,x_i')𝖡𝗅𝗄(y_1',y_1,x_1) ×∏_n=2^N𝖻(x_n,y_n')𝖯𝗌𝗁(y_n',y_n,x_n-1') 𝖯𝗌𝗁(y_N+1',y_N+1,x_N').Consider the factor1/a_x_1𝖳(x_1,x_1')𝖡𝗅𝗄(y_1',y_1,x_1)𝖯𝗌𝗁(y_2',y_2,x_1').Summing over x_1 ∈ y_1',x_1'∧ (y_2-1) using Proposition <ref> we get that this is equal to 1/a_x_1'𝖳(y_1,y_1')𝖳(y_2,y_2')1_y_1≤ y_1'<y_2≤ y_2'1_y_1'≤ x_1'<y_2'.We can then combine the factor 𝖳(y_2,y_2') with 𝖻(x_2,y_2') to get 𝖡𝗅𝗄(y_2',y_2,x_2) and perform the sum over x_2 ∈ y_2',x_2'∧ (y_3-1) using Proposition <ref>. If we keep iterating this procedure of using Proposition <ref> we obtain that the right hand side of (<ref>) is equal to 𝔥_N(𝐱')/𝔥_N+1(𝐲)∏_i=1^N1/a_x_i'∏_i=1^N+1𝖳(y_i,y_i') 1_𝐲≺𝐲'1_𝐱'≺𝐲'=[𝔓^(N+1)𝔏_N+1,N](𝐲,(𝐱',𝐲')),which concludes the proof. §.§ Continuous-time pure-birth chain coupling We have the following continuous-time analogue for pure-birth chains of our previous discrete-time results. This result was proven in <cit.>. Consider the continuous-time stochastic process (𝖷(t),𝖸(t);t≥ 0)in 𝕎_N,N+1. The 𝖷-component evolves autonomously according to the semigroup (𝔓^(N)_exp(-tz))_t≥ 0. The 𝖸-component evolves as N+1 independent pure-birth chain with jump rate a_x at x∈ℤ_+ with the following interactions with the 𝖷-component: * If the clock of 𝖸_i rings for it to jump and 𝖸_i=𝖷_i then nothing happens (block).* If 𝖷_i-1=x and 𝖸_i=x+1 and 𝖷_i-1 jumps to x+1 then 𝖸_i instantaneously moves to x+2 (push).Then, the time-homogeneous transition probabilities of (𝖷(t),𝖸(t);t≥ 0) from (𝐱,𝐲)∈𝕎_N,N+1 to (𝐱',𝐲')∈𝕎_N,N+1 in time t are given by:𝖰_exp(-tz)^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]=𝔥_N(𝐱')/𝔥_N(𝐱)𝒬_exp(-tz)^N,N+1[(𝐱,𝐲),(𝐱',𝐲')],where 𝒬_f^N,N+1 is defined as in Definition <ref> and Remark <ref>. Moreover, we have the intertwinings: [𝖰_exp(-tz)^N,N+1Π_N]((𝐱,𝐲),𝐱') =[Π_N𝔓_exp(-tz)^(N)]((𝐱,𝐲),𝐱'), (𝐱,𝐲)∈𝕎_N,N+1, 𝐱'∈𝕎_N, [𝔓_exp(-tz)^(N+1)𝔏_N+1,N](𝐲,(𝐱',𝐲'))=[𝔏_N+1,N𝖰_exp(-tz)^N,N+1](𝐲,(𝐱',𝐲')), 𝐲∈𝕎_N+1, (𝐱',𝐲')∈𝕎_N,N+1. Observe that, in the definition of the stochastic process (𝖷(t),𝖸(t);t≥ 0) above the interaction of the 𝖸-components with the 𝖷-components is exactly given by the interaction between consecutive levels in the pure-birth push-block dynamics of Definition <ref>. An analogous result holds for 𝒬_exp(-tz)^N,N+1 without the Doob-transform by 𝔥_N, except that now the autonomous 𝖷-component evolves as N independent pure-birth chains killed when they collide (the whole process is killed as well).§.§ Couplings of Borodin-Ferrari-Olshanski Given two Markov semigroups which are intertwined such as 𝔓_f^(N) and 𝔓_f^(N+1) (with an appropriate choice of f) there are different interesting ways to couple the corresponding stochastic processes. We now briefly survey, and compare to our setting, couplings developed[We note that in these constructions the intertwining from Proposition <ref> between 𝔓_f^(N) and 𝔓_f^(N+1) is taken as input. In the coalescing random walk framework the intertwining can be arrived at without apriori knowledge, see for example the discussion before Proposition <ref>.] 
by Borodin, Ferrari and Olshanski which have been made use of in many works <cit.>, see also <cit.> for further developments and generalisations. Although the origin[These couplings have their origin in an idea of Diaconis and Fill <cit.> from their study of strong stationary times <cit.> for convergence to equilibrium for Markov chains.] of these constructions is very different from the coalescing random walk framework we have developed, it turns out that these couplings in the case of pure-birth and Bernoulli dynamics coincide with ours. On the other hand, in the case of geometric jumps their coupling is different from the Warren-Windridge geometric dynamics, which are in some sense preferable since they have more Markovian projections. §.§.§ Sequential-update Consider the Markov kernel, see <cit.>,𝔓_f^(N,N+1),Seq[(𝐱,𝐲),(𝐱',𝐲')]=𝔓_f^(N)(𝐱,𝐱')𝔓_f^(N+1)(𝐲,𝐲')𝔏_N+1,N(𝐲',𝐱')/[𝔏_N+1,N𝔓_f^(N)](𝐲,𝐱'),[𝔏_N+1,N𝔓_f^(N)](𝐲,𝐱')>0,0,otherwise.on the state space 𝕊^N,N+1_Seq={(𝐱,𝐲)∈𝕎_N×𝕎_N+1|𝔏_N+1,N(𝐲,𝐱)>0}=𝕎_N,N+1.By virtue of the intertwining from Theorem <ref> it is immediate that this is correctly normalised. Moreover, the conditional distribution of the projection on 𝕎_N given the projection on 𝕎_N+1 is given by 𝔏_N+1,N, see <cit.>. It can be checked that in the case of Bernoulli jumps, namely with f(z)=1-α z, the dynamics arising from (<ref>) are exactly the sequential-update push-block Bernoulli dynamics we have been considering. In particular, (<ref>) with f(z)=1-α z is equal to (<ref>) with s=0,t=1 and f_0,1(z)=f(z). In the case of geometric jumps however, the coupling obtained from (<ref>) is different from the Warren-Windridge type coupling we have been considering in this work. It can be checked that its projection on the right edge is still the geometric push-TASEP but the projection on the left edge is not even Markovian (and the evolution of the rest of the array is indeed different). §.§.§ Parallel-update Consider the Markov kernel, see <cit.>, 𝔓_f^(N,N+1),Par[(𝐱,𝐲),(𝐱',𝐲')]=𝔓_f^(N)(𝐱,𝐱')𝔓_f^(N+1)(𝐲,𝐲')𝔏_N+1,N(𝐲',𝐱)/[𝔏_N+1,N𝔓_f^(N)](𝐲,𝐱)on the more complicated state space:𝕊^N,N+1_Par={(𝐱,𝐲)∈𝕎_N×𝕎_N+1|[𝔏_N+1,N𝔓_f^(N)](𝐲,𝐱)>0}.Again, by virtue of the intertwining from Theorem <ref> it is immediate that this correctly normalised. Also, the conditional distribution of the projection on 𝕎_N given the projection on 𝕎_N+1 is now given by 𝔏_N+1,N𝔓_f^(N), see <cit.>.In the case of Bernoulli jumps this gives rise, as the projection on the left edge, to parallel-update Bernoulli TASEP. It would be possible to obtain analogous results to the ones in this paper (show the existence of determinantal correlations, compute the correlation kernel and so on) for the parallel-update model but the situation is more involvedand we will not pursue the details here.§.§.§ Continuous-time For continuous time and countable state space an analogous framework for coupling intertwined semigroups was developed[In the earlier paper of Borodin-Ferrari <cit.> the continuous-time dynamics were obtained as a scaling limit from discrete-time.] by Borodin and Olshanski in <cit.>. The two-level dynamics are now defined through their explicit infinitesimal jump rates, see <cit.>, instead of their transition kernels which in continuous time are almost never explicit (unlike the explicit kernels (<ref>), (<ref>) for discrete-time). 
It can be checked that the dynamics coming from the coupling of <cit.>, associated to the intertwining from Theorem <ref>, actually match the pure-birth push-block dynamics we have been considering in this paper. So in particular, 𝖰^N,N+1_exp(-tw) is a rare example of an explicit transition kernel for the coupling framework developed in <cit.>.§ DYNAMICS ON ARRAYS§.§ The space-time inhomogeneous case Combining our previous results we prove the following.Let N≥ 1 be arbitrary. Consider an initial configuration in 𝕀𝔸_N distributed according to μ_N(𝐱^(N))𝔏_N,N-1(𝐱^(N),𝐱^(N-1))⋯𝔏_2,1(𝐱^(2),𝐱^(1)), for some probability measure μ_N on 𝕎_N. Then, perform M_1 steps of sequential-update Bernoulli dynamics with parameters α_1,…,α_M_1, M_2 steps of Warren-Windridge geometric dynamics with parameters β_1,…,β_M_2 and finally continuous-time pure-birth dynamics for time t. The parameters α_i,β_i are assumed to satisfy 0≤α_i ≤ (sup_x∈ℤ_+a_x)^-1 and 0≤β_i <(sup_x∈ℤ_+a_x-inf_x∈ℤ_+a_x)^-1. Then, the distribution of the resulting configuration in 𝕀𝔸_N is given by [μ_N𝔓^(N)_∏_i=1^M_1(1-α_iw)∏_i=1^M_2(1+β_iw)^-1exp(-tw)](𝐱^(N)) 𝔏_N,N-1(𝐱^(N),𝐱^(N-1))⋯𝔏_2,1(𝐱^(2),𝐱^(1)). Finally, consider the process (𝖷_i^(n)(t);t≥ 0)_1≤ i ≤ n; 1≤ n ≤ N in 𝕀𝔸_N defined in Theorem <ref>. Assume it is initialised according to (<ref>). Then, for any 1≤ n ≤ N, the projection on the n-th row evolves as a Markov process with transition probabilities 𝔓_s,t^(n)=𝔓_f_s,t^(n) from (<ref>).The proof is by induction, making use of Propositions <ref>, <ref> and <ref>, by virtue of the Markov functions theory of Rogers-Pitman <cit.>. This type of argument, and variations, have been documented in many places <cit.> and we do not repeat all details. The main point is that under such dynamics, the Gibbs-type property of the initial distribution of the array, namely the fact that the law of the first N-1 rows of the array given the N-th row being equal to 𝐲∈𝕎_N is given by:𝔏_N,N-1(𝐲,𝐱^(N-1)) 𝔏_N-1,N-2(𝐱^(N-1),𝐱^(N-2))⋯𝔏_2,1(𝐱^(2),𝐱^(1)),is preserved for all times t>0, and the projections on single levels are Markovian with the desired transition probabilities 𝔓_s,t^(n)=𝔓_f_s,t^(n). Then, the measure μ_N on level N is evolved accordingly,[μ_N𝔓^(N)_(1-α_1 w)⋯𝔓^(N)_(1-α_M_1 w)𝔓^(N)_(1+β_1w)^-1⋯𝔓^(N)_(1+β_M_2w)^-1𝔓^(N)_exp(-tw)](𝐱)= [μ_N𝔓^(N)_∏_i=1^M_1(1-α_iw)∏_i=1^M_2(1+β_iw)^-1exp(-tw)](𝐱),by virtue of Proposition <ref>, which is where we need the (technical) assumption on the β_i (recall the assumption on the α_i parameters is there for the transition probabilities to be well-defined). Observe that, the order in which we take the jumps is not important.We note that, the deterministic law of the fully-packed configuration is of the form (<ref>) by taking μ_N(𝐱)=1_(0,1,…,N-1)(𝐱). §.§ The space-level inhomogeneous case We now consider the space and level inhomogeneous setting of Theorem <ref>. Our process evolves with either only continuous-time pure-birth dynamics, or only discrete time Bernoulli or only discrete time geometric dynamics which are all homogeneous in time. The extra new ingredient for the argument are the functions 𝖧^∙_(γ_1,…,γ_N), which are defined recursively in (<ref>) and then shown to have a particular determinant expression in Proposition <ref> which reveals the desired parameter symmetry. 
Other than that, most of the work has already been done.Recall that, we write 𝖳_t^∙ with ∙∈{pb, B, g}, where the abbreviations stand for pure-birth, Bernoulli and geometric, to denote 𝖳_f with the following choices of function f(w): if ∙=pb, then f(w)=exp(-tw), if∙=B, then f(w)=(1-w)^t and if ∙=g, then f(w)=(1+w)^-t. We have the following eigenfunction relations𝖳_t^∙h_γ^∙(x)=c_t,γ^∙ h_γ^∙(x),where the functions h_γ^∙ are given byh_γ^pb(x)=p_x(-γ),h_γ^B(x)=p_x(1-γ^-1), h_γ^g(x)=p_x(γ^-1-1),and the constants c_t,γ^∙ by,c_t,γ^pb=e^tγ, c_t,γ^B=γ^-t, c_t,γ^g=γ^t.For ∙=g we require the technical condition sup_k∈ℤ_+a_k-inf_k∈ℤ_+a_k<1 on 𝐚. Moreover, if the parameter γ satisfies γ≥ 0 for ∙=pb, 0<γ≤ 1 for ∙=B, γ≥ 1 for ∙=g, then h_γ^∙ is strictly positive. The eigenfunction relation in the pure-birth and Bernoulli cases follows directly from Lemma <ref>. For the geometric case, subject to the condition on 𝐚, it also follows from Lemma <ref> but only for γ close to 1. Instead we argue as follows. A direct computation using the explicit formula for 𝖳_1^g from Lemma <ref> gives for any γ∈ℂ,𝖳_1^gh_γ^g(x)=c_1,γ^gh_γ^g(x).Then, by virtue of Lemma <ref>, subject to the condition on 𝐚, we obtain the result for any t ∈ℤ_+. The final statement regarding positivity of h^∙_γ(x) is immediate from the fact that the polynomial p_x is strictly positive on (-∞,0]. This gives a range of γ which is not optimal, but we restrict to it for simplicity.Recall Definition <ref>.In the setting of Definition <ref> we have,𝖳_t,γ^∙(x,y)=1/c_t,γ^∙h_γ^∙(y)/h_γ^∙(x)𝖳_t^∙(x,y). A direct computation reveals this Doob transform relation between 𝖳_t,γ^∙ and 𝖳_t^∙.Clearly, we have the relation𝖳_t,γ_1^∙(x,y)=c_t,γ_2^∙/c_t,γ_1^∙h_γ_1^∙(y)h_γ_2^∙(x)/h_γ_2^∙(y)h_γ_1^∙(x)𝖳_t,γ_2^∙(x,y).We then define the following kernels from 𝕎_N+1 to 𝕎_N by,Λ_N+1,N^γ_N+1,∙(𝐲,𝐱) =∏_j=1^Nv_γ_N+1^∙ (x_j)1_𝐱≺𝐲, Λ_N+1,N^γ_N+1,γ_N,∙(𝐲,𝐱) =∏_j=1^Nv_γ_N+1^∙ (x_j)h_γ_N^∙(x_j)/h_γ_N+1^∙(x_j)1_𝐱≺𝐲,where v_γ^∙(x) is given by:v_γ^pb(x)=(a_x+γ)^-1,v_γ^B(x)=(γ a_x+1-γ)^-1,v_γ^g(x)=(γ a_x-1+γ)^-1.Define the strictly positive function 𝖧^∙_(γ_1,…,γ_N) on 𝕎_N as follows:𝖧^∙_(γ_1,…,γ_N)(𝐱)=[Λ_N,N-1^γ_N,γ_N-1,∙Λ_N-1,N-2^γ_N-1,γ_N-2,∙⋯Λ_2,1^γ_2,γ_1,∙1](𝐱).For the rest of this section it will be convenient to use the following notation (note that γ here is a scalar)𝒫_t^γ,∙,N(𝐱,𝐲)=(𝖳^∙_t, γ(x_i,y_j))_i,j=1^N.Observe that 𝒫^γ,∙,N_t(𝐱,𝐲) coincides with 𝖯_f^(N)(𝐱,𝐲) from Definition <ref> with the following choices of function f and inhomogeneity 𝐚: if ∙=pb then f(w)=exp(-tw) and the inhomogeneity is 𝐚+γ, for ∙=B, then f(w)=(1-w)^t and the inhomogeneity is γ𝐚+1-γ and for ∙=g then f(w)=(1-w)^-t and the inhomogeneity is γ𝐚+γ-1. Here, for a scalar c, 𝐚+c is simply the sequence (a_x+c)_x∈ℤ_+. Thus, by virtue of Theorem <ref> we have the intertwining, with 𝐱∈𝕎_N, 𝐲∈𝕎_N+1,𝒫_t^γ_N+1,∙,N+1Λ_N+1,N^γ_N+1,∙(𝐲,𝐱)=Λ_N+1,N^γ_N+1,∙𝒫_t^γ_N+1,∙,N(𝐲,𝐱).From relation (<ref>) we then obtain yet another intertwining:𝒫_t^γ_N+1,∙,N+1Λ_N+1,N^γ_N+1,γ_N,∙(𝐲,𝐱)=(c^∙_t,γ_N/c^∙_t,γ_N+1)^NΛ_N+1,N^γ_N+1,γ_N,∙𝒫_t^γ_N,∙,N(𝐲,𝐱).Hence, by induction we obtain that 𝖧^∙_(γ_1,…,γ_N) is a strictly positive eigenfunction of 𝒫_t^γ_N,∙,N.We have: 𝒫_t^γ_N,∙,N𝖧^∙_(γ_1,…,γ_N)=(c^∙_t,γ_N)^-(N-1)∏_j=1^N-1c^∙_t,γ_j𝖧^∙_(γ_1,…,γ_N). 
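The eigenfunction relation in the corollary above can be checked directly in a small case. The sketch below does this for N=2 and one Bernoulli step (t=1): it builds 𝖧^B_{(γ_1,γ_2)} from its recursive definition and verifies that applying 𝒫_1^{γ_2,B,2} multiplies it by (c^B_{1,γ_2})^{-1}c^B_{1,γ_1}=γ_2/γ_1. The sequence 𝐚 (with sup_x a_x<1, so that the Bernoulli probabilities are positive) and the values of γ_1,γ_2 are test choices of ours.

```python
import numpy as np

# Finite check of the eigenfunction corollary for N = 2, one Bernoulli step.
rng = np.random.default_rng(2)
a = 0.5 + 0.4*rng.random(30)      # test sequence with sup a_x < 1
g1, g2 = 0.7, 0.9                 # gamma_1, gamma_2 (test values)

def h(g, x):          # h_gamma^B(x) = p_x(1 - 1/gamma)
    return float(np.prod(1.0 - (1.0 - 1.0/g)/a[:x]))

def v(g, x):          # v_gamma^B(x) = (gamma*a_x + 1 - gamma)^{-1}
    return 1.0/(g*a[x] + 1.0 - g)

def T(g, x, y):       # one step of T^B_{1,gamma}: jump probability gamma*a_x + 1 - gamma
    q = g*a[x] + 1.0 - g
    return q if y == x + 1 else (1.0 - q if y == x else 0.0)

def H(y1, y2):        # H^B_{(g1,g2)}(y1,y2) = sum_{x=y1}^{y2-1} v_{g2}(x) h_{g1}(x)/h_{g2}(x)
    return sum(v(g2, x)*h(g1, x)/h(g2, x) for x in range(y1, y2))

x1, x2 = 2, 5
lhs = sum((T(g2, x1, y1)*T(g2, x2, y2) - T(g2, x1, y2)*T(g2, x2, y1)) * H(y1, y2)
          for y1 in (x1, x1 + 1) for y2 in (x2, x2 + 1) if y1 < y2)
print(lhs, (g2/g1) * H(x1, x2))   # the two numbers should agree
```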
With all these preliminaries in place we arrive to the following definition.For all N≥ 1, define the Markov kernel Λ_N+1,N^γ,∙ from 𝕎_N+1 to 𝕎_N by Λ_N+1,N^γ,∙(𝐲,𝐱)=∏_j=1^Nv_γ_N+1^∙ (x_j)h_γ_N^∙(x_j)/h_γ_N+1^∙(x_j)𝖧^∙_(γ_1,…,γ_N)(𝐱)/𝖧^∙_(γ_1,…,γ_N+1)(𝐲)1_𝐱≺𝐲,and define abusing notation the Markov kernel 𝒫_t^γ,∙, N from 𝕎_N to itself by𝒫_t^γ,∙, N(𝐱,𝐲)=(c_t,γ_N^∙)^N-1∏_j=1^N-11/c_t,γ_j^∙𝖧_(γ_1,…,γ_N)^∙(𝐲)/𝖧_(γ_1,…,γ_N)^∙(𝐱)(𝖳_t,γ_N^∙ (x_i,y_j))_i,j=1^N.Here, we have abused notation since in principle we have already defined 𝒫_t^γ,∙, N in (<ref>). From expression (<ref>) the symmetry in the parameters γ_1,γ_2,…,γ_N is not obvious at all due to the recursive definition of 𝖧^∙_(γ_1,…,γ_N) from (<ref>). Of course, we will prove shortly in Proposition <ref> below that (<ref>) and (<ref>) are one and the same. The following intertwining is then immediate from (<ref>), with 𝐲∈𝕎_N+1, 𝐱∈𝕎_N,𝒫_t^γ,∙,N+1Λ_N+1,N^γ,∙(𝐲,𝐱)=Λ_N+1,N^γ,∙𝒫_t^γ,∙,N(𝐲,𝐱),and we also note the following explicit formulae. We have the expressions𝒫_t^γ,∙, N(𝐱,𝐲) = ∏_j=1^N1/c_t,γ_j^∙h_γ_N^∙(y_j)/h_γ_N^∙(x_j)𝖧_(γ_1,…,γ_N)^∙(𝐲)/𝖧_(γ_1,…,γ_N)^∙(𝐱)(𝖳_t^∙ (x_i,y_j))_i,j=1^N=(c^∙_t,γ_N+1)^N∏_j=1^N1/c_t,γ_j^∙h_γ_N^∙(y_j)h_γ_N+1^∙(x_j)/h_γ_N^∙(x_j)h_γ_N+1^∙(y_j)𝖧_(γ_1,…,γ_N)^∙(𝐲)/𝖧_(γ_1,…,γ_N)^∙(𝐱)𝒫_t^γ_N+1,∙,N(𝐱,𝐲).We now obtain an alternative expression for 𝖧^∙_(γ_1,…,γ_N) which will give us a final expression for 𝒫_t^γ,∙,N from which parameter symmetry will be obvious. The following little observation is the key ingredient. We have the identity, x,y∈ℤ_+, y>x,∑_m=x^y-1v_γ_2^∙ (m)h_γ_1^∙(m)/h_γ_2^∙(m)=c^∙_γ_2;γ_1[h_γ_1^∙(x)/h_γ_2^∙(x)-h_γ_1^∙(y)/h_γ_2^∙(y)],where the constant c^∙_γ_2;γ_1 is given by c^pb_γ_2;γ_1=1/γ_2-γ_1, c^B_γ_2;γ_1=γ_1/γ_1-γ_2, c^g_γ_2;γ_1=γ_1/γ_2-γ_1. This is proven by induction on y. For y=x+1 an elementary computation gives h_γ_1^∙(x)/h_γ_2^∙(x)-h_γ_1^∙(x+1)/h_γ_2^∙(x+1) =1/c^∙_γ_2;γ_1v_γ_2^∙ (x)h_γ_1^∙(x)/h_γ_2^∙(x).Then, we have a telescoping sum and the conclusion follows. We have𝖧_(γ_1,…,γ_N)^∙(x_1,…,x_N)=∏_j=2^N∏_i=1^j-1c_γ_j;γ_i^∙∏_j=1^N 1/h_γ_N^∙(x_j)(h_γ_i^∙(x_j))_i,j=1^N. This is obtained by induction. For the inductive step we can bring the sums inside the determinant by multinearity and then use the identity (<ref>) from Lemma <ref>. By combining the above we obtain the final expression for 𝒫_t^γ,∙,N matching (<ref>). We have𝒫_t^γ,∙,N(𝐱,𝐲)=∏_j=1^N1/c_t,γ_j^∙×(h_γ_i^∙(y_j))_i,j=1^N/(h_γ_i^∙(x_j))_i,j=1^N(𝖳_t^∙ (x_i,y_j))_i,j=1^N.In particular, 𝒫_t^γ,∙,N is symmetric in the parameters γ_1,γ_2,…,γ_N.Combine Lemma <ref> and Proposition <ref>. We need one more definition. Define the sub-Markov kernel 𝔚_t^N,N+1,∙,γ_N+1 from 𝕎_N,N+1 to itself as follows: * If ∙=pb, then with t∈ℝ_+, 𝔚_t^N,N+1,pb,γ_N+1=𝒬_exp(-tw)^N,N+1, with the underlying inhomogeneity sequence required to define 𝒬_exp(-tw)^N,N+1 being 𝐚+γ_N+1.* If ∙=B, then with t∈ℤ_+, 𝔚_t^N,N+1,B,γ_N+1=𝒬_(1-w)^t^N,N+1, with the underlying inhomogeneity sequence required to define 𝒬_(1-w)^t^N,N+1 being γ_N+1𝐚-γ_N+1+1.* If ∙=g, we first define the sub-Markov kernel 𝒢̃^N,N+1 by𝒢̃^N,N+1[(𝐱,𝐲),(𝐱',𝐲')]= 𝔥_N(𝐱)/𝔥_N(𝐱')𝒢^N,N+1[(𝐱,𝐲),(𝐱',𝐲')],with the underlying inhomogeneity sequence defining 𝒢^N,N+1 being γ_N+1𝐚+γ_N+1-1 and β=1. Then, with t∈ℤ_+, 𝔚_t^N,N+1,g,γ_N+1=𝒢̃^N,N+1⋯𝒢̃^N,N+1 convolved with itself t times. . The slightly more involved definition for the geometric case is because 𝒢^N,N+1 is already Markov as it is the analogue of 𝖰_f^N,N+1; in fact, as mentioned earlier, they should be equal for f(z)=(1+β z)^-1. 
So we need to unnormalise it first to obtain 𝒢̃^N,N+1, the analogue of 𝒬_f^N,N+1, in order to treat all three cases uniformly. The following is simply a re-writing of Propositions <ref>, <ref> and <ref> in the uniform notation of 𝔚_t^N,N+1,∙,γ_N+1 (but we note again that the way the parameter γ_N+1 appears in the inhomogeneity sequence is different in each case). Let N≥ 1 and t≥ 0. Then, we have [𝔚_t^N,N+1,∙,γ_N+1Π_N]((𝐱,𝐲),𝐱') =[Π_N𝒫_t^γ_N+1,∙,N]((𝐱,𝐲),𝐱'), [𝒫_t^γ_N+1,∙,N+1Λ_N+1,N^γ_N+1,∙](𝐲,(𝐱',𝐲'))=[Λ_N+1,N^γ_N+1,∙𝔚_t^N,N+1,∙,γ_N+1](𝐲,(𝐱',𝐲')), where we view Λ_N+1,N^γ_N+1,∙ as a Markov kernel from 𝕎_N+1 to 𝕎_N,N+1 as before. Observe that, by virtue of Lemma <ref> we can Doob transform the semigroup 𝒫_t^γ_N+1,∙,N to 𝒫_t^γ,∙,N using the eigenfunction∏_j=1^N h_γ_n^∙(x_j)/h_γ_N+1^∙(x_j)𝖧^∙_(γ_1,…,γ_N)(x_1,…,x_N)having eigenvalue (c^∙_t,γ_N+1)^-N∏_j=1^Nc_t,γ_j^∙. Hence, we can correctly define the Markov kernel 𝖣_t^N,N+1,∙,γ_N+1 on 𝕎_N,N+1 by,𝖣_t^N,N+1,∙,γ_N+1[(𝐱,𝐲),(𝐱',𝐲')] =(c^∙_t,γ_N+1)^N∏_j=1^n1/c_t,γ_j^∙h_γ_N^∙(y_j)h_γ_N+1^∙(x_j)/h_γ_N^∙(x_j)h_γ_N+1^∙(y_j)××𝖧_(γ_1,…,γ_N)^∙(𝐲)/𝖧_(γ_1,…,γ_N)^∙(𝐱)𝔚_t^N,N+1,∙,γ_N+1[(𝐱,𝐲),(𝐱',𝐲')].Putting everything together we obtain, by virtue of Proposition <ref>: Let N≥ 1 and t≥ 0. Then, we have [𝖣_t^N,N+1,∙,γ_N+1Π_N]((𝐱,𝐲),𝐱') =[Π_N𝒫_t^γ,∙,N]((𝐱,𝐲),𝐱'),(𝐱,𝐲)∈𝕎_N,N+1, 𝐱'∈𝕎_N, [𝒫_t^γ,∙,N+1Λ_N+1,N^γ,∙](𝐲,(𝐱',𝐲'))=[Λ_N+1,N^γ,∙𝖣_t^N,N+1,∙,γ_N+1](𝐲,(𝐱',𝐲')),𝐲∈𝕎_N+1, (𝐱',𝐲')∈𝕎_N,N+1, where we view Λ_N+1,N^γ,∙ as a Markov kernel from 𝕎_N+1 to 𝕎_N,N+1 as before.Theorem <ref> is then a direct consequence of the following proposition with the choice μ_N(𝐱)=1_(0,1,…,N-1)(𝐱). Let N≥ 1 be arbitrary. Consider the process (𝖷_i^(n),∙(t);t≥ 0)_1≤ i ≤ n; 1 ≤ n ≤ N from Definition <ref> in 𝕀𝔸_N. We assume that the sequence γ satisfies (<ref>) and if ∙=B then sup_x∈ℤ_+a_x≤ 1 or if ∙=g then sup_x∈ℤ_+a_x-inf_x∈ℤ_+a_x<1. Suppose the initial condition in 𝕀𝔸_N is of the form,μ_N(𝐱^(N))Λ_N,N-1^γ,∙(𝐱^(N),𝐱^(N-1)) ⋯Λ_2,1^γ,∙(𝐱^(2),𝐱^(1)),for some probability measure μ_N on 𝕎_N. Then, for any 1≤ n ≤ N, the projection on the n-th row (𝖷^(n),∙(t);t≥ 0) evolves as a Markov process with transition probabilities (𝒫_t^γ,∙,n)_t≥ 0 given in (<ref>) and the distribution of (𝖷^(1),∙(𝔱),…,𝖷^(N),∙(𝔱)) in 𝕀𝔸_N for fixed time 𝔱 is given by[μ_N𝒫_𝔱^γ,∙,N](𝐱^(N))Λ_N,N-1^γ,∙(𝐱^(N),𝐱^(N-1)) ⋯Λ_2,1^γ,∙(𝐱^(2),𝐱^(1)). Similarly to the proof of Proposition <ref> this is obtained by induction using Proposition <ref> now instead, see <cit.> for completely analogous arguments. As in Proposition <ref>, the extra condition for ∙=B is so that the transition probabilities are positive and for ∙=g so that we can use our convolution of kernels result. § COMPUTATION OF THE CORRELATION KERNELS By virtue of display (<ref>) in Proposition <ref>, with μ_N(𝐱)=1_𝐱=(0,1,…,N-1), a simple computation gives that, after running the dynamics described in Theorem <ref> we can write out the resulting distribution of the array (𝖷_i^(n))_1≤ i ≤ n; 1≤ n ≤ N, for all N≥ 1, explicitly as a product of determinants. We then apply the Eynard-Mehta theorem <cit.> in the form that can be found in Lemma 3.4 of <cit.> (we do not recall this theorem explicitly as it has been documented many times). This immediately gives the existence of a determinantal point process structure. Finding an explicit expression for the correlation kernel 𝔎_f is our next task. We need to introduce some notation. 
Define the functions Ψ_N-i^(N), for i=1,…, N, by Ψ_N-i^(N)(x)=-1/a_x1/2πi∮_𝖢_𝐚,0w^N-if(w)/p_x+1(w)dw,x ∈ℤ_+,with f(w)=∏_i=1^M_1(1-α_iw)∏_i=1^M_2(1+β_iw)^-1exp(-tw) as first defined in (<ref>). Up to a multiplicative constant, 𝔓_f^(N)((0,…,N-1),𝐱) is equal to (Ψ_N-j^(N)(x_i))_i,j=1^N. Note that, the choice of the 𝖢_𝐚 contour in the integral also gives the same functions Ψ (however the choice of 𝖢_𝐚,0 will be important below). Define the kernel ϕ^(n)(x_1,x_2) as the convolution of ϕ from Lemma <ref> with itself n times. For 1≤ n ≤ N-1 and 1≤ j ≤ N define the functions Ψ_n-j^(n) by convolution with ϕ^(N-n):Ψ_n-j^(n)(y)=[ϕ^(N-n)Ψ_N-j^(N)](y),y∈ℤ_+. We will make use of the following intermediate lemmas. Let 1≤ n ≤ N and 1≤ j ≤ N. Then, we have, with f as in (<ref>),Ψ_n-j^(n)(x)=-1/a_x1/2πi∮_𝖢_𝐚,0w^n-jf(w)/p_x+1(w)dw. We prove this by induction. We need to show[ϕΨ_n-j^(n)](x)=Ψ_n-1-j^(n-1)(x).The left hand side is equal to -1/a_x1/2πi∑_y>x-1/a_y∮_𝖢_𝐚,0w^n-1-jf(w)/p_y+1(w)dw.We deform the contour 𝖢_𝐚,0 to the contour 𝖢̃_𝐚 from the proof of Lemma <ref>. Then, we can bring the sum inside the integral, use identity (<ref>) and deform the contour back to𝖢_𝐚,0, without crossing any poles, which gives Ψ_n-1-j^(n-1)(x) and concludes the proof.Let 1 ≤ k ≤ N. Then, we haveϕ^(k)(y,x)=-1/a_y1/2πi∮_𝖢_𝐚,0p_x(w)/p_y+1(w)w^kdw. This is Lemma 3.5 in <cit.>. Let 1≤ k ≤ N. Then, we haveϕ^(k)(𝗏𝗂𝗋𝗍,x)=1/2πi∮_𝖢_0p_x(w)/w^kdw. This is Lemma 3.6 in <cit.>.Let 1≤ n ≤ N and 1≤ j ≤ n. Define the following function, with f as in (<ref>),Φ_n-j^(n)(x)=1/2πi∮_𝖢_0p_x(w)/f(w)w^n-j+1dw,x∈ℤ_+. Let 1≤ n ≤ N. Then, we have∑_x=0^∞Ψ_i^(n)(x)Φ_j^(n)(x)=1_i=j,for 0≤ i,j ≤ n-1.Observe that since i is non-negative we can use the 𝖢_𝐚 contour in the definition of Ψ_i^(n). Then, by deforming, if required, the contour 𝖢_0 to a contour inside the region 𝒰 defined in (<ref>), we can make use of Lemma <ref>, to compute with g(w)=f(w)w^i∈𝖧𝗈𝗅(ℍ_-R) where R>R(𝐚),∑_x=0^∞Ψ_i^(n)(x)Φ_j^(n)(x) =∑_x=0^∞-1/a_x1/2πi∮_𝖢_𝐚,0w^if(w)/p_x+1(w)dw1/2πi∮_𝖢_0p_x(u)/f(u)u^j+1du=1/2πi∮_𝖢_01/f(u)u^j+1∑_x=0^∞ p_x(u)𝖳_g(x) du=1/2πi∮_𝖢_01/f(u)u^j+1 f(u)u^idu=1_i=j,as desired.The functions {Φ_j^(n)(·); 0≤ j ≤ n-1} span the space span{ϕ^(1)(𝗏𝗂𝗋𝗍,·), ϕ^(2)(𝗏𝗂𝗋𝗍,·),…,ϕ^(n)(𝗏𝗂𝗋𝗍,·)}. Using the Cauchy integral formula we see thatspan{ϕ^(1)(𝗏𝗂𝗋𝗍,·), ϕ^(2)(𝗏𝗂𝗋𝗍,·),…,ϕ^(n)(𝗏𝗂𝗋𝗍,·)}=span{x↦d^k-1/dw^k-1p_x(w)|_w=0; 1≤ k ≤ n}.Again, using the Cauchy integral formula, since f has no zeroes in 𝖢_0, we haveΦ_j^(n)(x)=1/j!d^j/dw^j(p_x(w)/f(w))|_w=0=1/j!d^j/dw^jp_x(w)|_w=0+∑_k=0^j-1c_k,jd^k/dw^kp_x(w)|_w=0.Thus, we get span{Φ_j^(n)(·); 0≤ j ≤ n-1}=span{x↦d^k-1/dw^k-1p_x(w)|_w=0; 1≤ k ≤ n}as required. For 1≤ n ≤ N, we have ϕ(𝗏𝗂𝗋𝗍,·)=Φ_0^(n)(·)≡ 1.This is due to the fact that f(0)=1 and the fact that f has no zeroes in 𝖢_0.We apply a variant of the Eynard-Mehta theorem, in exactly the form found in Lemma 3.4 of <cit.>, by virtue of all the preceding results in this section, to obtain the explicit form of the correlation kernel 𝔎_f as follows:𝔎_f[(n_1,x_1);(n_2,x_2)]=-ϕ^(n_2-n_1)(x_1,x_2)1_n_2>n_1+∑_k=1^n_2Ψ_n_1-k^(n_1)(x_1)Φ_n_2-k^(n_2)(x_2).We need to simplify the sum ∑_k=1^n_2Ψ_n_1-k^(n_1)(x_1)Φ_n_2-k^(n_2)(x_2) =-1/a_x_11/(2πi)^2∑_k=1^n_2∮_𝖢_𝐚,0f(w)w^n_1-k/p_x_1+1(w)dw∮_𝖢_0p_x_2(u)/f(u)u^n_2-k+1du=-1/a_x_11/(2πi)^2∮_𝖢_𝐚,0dw∮_𝖢_0dup_x_2(u)f(w)/p_x_1+1(w)f(u)∑_k=1^∞w^n_1-k/u^n_2-k+1=-1/a_x_11/(2πi)^2∮_𝖢_𝐚,0dw∮_𝖢_0dup_x_2(u)f(w)/p_x_1+1(w)f(u)w^n_1/u^n_21/w-u,where we have extended the sum over k from n_2 to infinity. 
This is allowed because there are no additional contributions for k>n_2 since there are no poles in u in 𝖢_0 and thus the u-contour integral vanishes.Moreover, note that since |u|<|w| the geometric series converges. This concludes the proof.The functions 𝔥_N can be written as𝔥_N(𝐱)=(ϕ^(N+1-i)(𝗏𝗂𝗋𝗍,x_j))_i,j=1^N=(1/(i-1)!(-d/dw)^i-1p_x_j(w)|_w=0)_i,j=1^N. This easily follows by the formula for 𝔥_N(𝐱) from Definition <ref>.Consider the process (𝖷_k^(n)(t);t ≥ 0)_1≤ k ≤ n; n≥ 1 with discrete-time Bernoulli or geometric dynamics determined by the functions f_i,i+1(z) as in Theorem <ref>. Let M≥1 and t_1<t_2<⋯<t_M be arbitrary times. The joint distribution of (𝖷^(N)(t_1),…,𝖷^(N)(t_M)) is then given by1/Z(Φ̃^(t_1)_j(x_j^(1)))_i,j=1^N ∏_r=1^M-1(𝖳_f_t_r,t_r+1(x^(r)_i,x^(r+1)_j))_i,j=1^N (Ψ̃^(t_M)_j(x_j^(M)))_i,j=1^N,where the functions Ψ̃_j and Φ̃_j are given by:Φ̃^(t_1)_j(x) =-1/2πi1/a_x∮_𝖢_𝐚,0f_0,t_1(w)w^j-1/p_x+1(w)dw, Ψ̃^(t_M)_j(x) =1/2πi∮_𝖢_0p_x(w)/f_0,t_M(w)w^jdw,and Z is some normalisation constant. In the case of (𝖷_k^(n)(t);t ≥ 0)_1≤ k ≤ n; n≥ 1 following the continuous-time pure-birth dynamics the conclusion is exactly the same but with t_i∈ℝ_+ and f_t_1,t_2(z)=e^-(t_2-t_1)z instead. By virtue of Proposition <ref>, we also note Lemma <ref>, (𝖷^(N)(t);t≥ 0) evolves as a Markov process with transition probabilities given by (<ref>). Then, by using Markov property we can write down the following formula for the distribution of (𝖷^(N)(t_1),…,𝖷^(N)(t_M)),1/Z̃(𝖳_f_0,t_1(i-1,x^(1)_j))_i,j=1^N(𝖳_f_t_1,t_2(x^(1)_i,x^(2)_j))_i,j=1^N ⋯(𝖳_f_t_M-1,t_M(x^(1)_i,x^(2)_j))_i,j=1^N 𝔥_N(x^(N)),for some normalisation constant Z̃. Then, by virtue of Lemma <ref> (in particular, since the Ψ̃_i^(t_M)(z) functions, which are essentially the Φ functions from Definition <ref>, span the space spanned bythe ϕ^(i)(𝗏𝗂𝗋𝗍,z) functions) and simple row operations on the determinants defining the first and last factors we obtain the desired result. Finally, the exact same argument gives the conclusion in the continuous-time case. It only remains to compute the correlation functions, as the probabilistic statement that projections on single levels are Markovian follows from Proposition <ref>. Define, for 1≤ k ≤ M, the functions Φ̃_i^(t_k) and Ψ̃_j^(t_k) by the convolutionsΦ̃_j^(t_k)(x) =[Φ̃_j^(t_1)𝖳_f_t_1,t_2⋯𝖳_f_t_k-1,t_k](x), Ψ̃_j^(t_k)(x) =[𝖳_f_t_k,t_k+1⋯𝖳_f_t_M-1,t_MΨ̃_j^(t_M)](x).Then, arguing as in the proof of Lemma <ref> we obtain the following explicit description for them, for any 1≤ k ≤ M:Φ̃^(t_k)_j(x) =-1/2πi1/a_x∮_𝖢_𝐚,0f_0,t_k(w)w^j-1/p_x+1(w)dw, Ψ̃^(t_k)_j(x) =1/2πi∮_𝖢_0p_x(w)/f_0,t_k(w)w^jdw.Finally, for any 1≤ k ≤ M, we have by an obvious adaptation of Proposition <ref>, ∑_x=0^∞Φ̃_i^(t_k)(x) Ψ̃_j^(t_k)(x)=1_i=j,for 1≤ i,j ≤ N. Thus, by virtue of Proposition <ref> and the Eynard-Mehta theorem in the form found for example in <cit.>, the correlation functions of the underlying point process are determinantal and the correlation kernel 𝒦_N is given by𝒦_N[(s,x_1);(t,x_2)] =-1_t>s𝖳_f_s,t(x_1,x_2)+∑_k=1^N Ψ̃^(s)_k(x_1)Φ̃^(t)_k(x_2)=-1_t>s𝖳_f_s,t(x_1,x_2)-1/a_x_21/(2 πi)^2∮_𝖢_𝐚,0 dw∮_𝖢_0 du p_x_1(u)f_0,t(w)/p_x_2+1(w)f_0,s(u)∑_k=1^N w^k-1/u^k=-1_t>s𝖳_f_s,t(x_1,x_2)-1/a_x_21/(2 πi)^2∮_𝖢_𝐚,0 dw∮_𝖢_0 du p_x_1(u)f_0,t(w)/p_x_2+1(w)f_0,s(u)w^N/u^N1/w-u,where we have extended the sum over k to a sum from -∞ to N since for k≤ 0 there are no poles in u in 𝖢_0 and thus no additional contributions to the sum. Moreover, since |u|<|w| the resulting geometric series is convergent. 
The proof for the continuous-time case is exactly the same but with t_i∈ℝ_+ and f_s,t(z)=e^-(t-s)z instead. § WALKS CONDITIONED TO NEVER INTERSECT In this section we prove Theorem <ref>. The reader is advised to recall the corresponding notation and definitions therein. The following proposition is the main result of this section, from which Theorem <ref> will easily follow.Let N≥ 1 be fixed and write γ=(γ_1,…,γ_N). Consider the stochastic process (𝖷^∙_γ(t);t≥ 0)=(𝗑_γ_1^∙(t),…,𝗑_γ_N^∙(t); t≥ 0), with 𝖷_γ(0)=𝐱∈𝕎_N, with coordinates (𝗑_γ_i^∙(t);t≥ 0) being independent and having transition probabilities (𝖳_t,γ_i^∙)_t≥ 0. Assume thatγ=(γ_1,…,γ_N), satisfy (<ref>), and for i=1,…,N-1, (<ref>), (<ref>), (<ref>). Recall the first collision time τ_col^∙ defined byτ_col^∙=inf{t>0:𝖷^∙_γ(t-)⊀𝖷^∙_γ(t)},where if ∙∈{B, g} then 𝖷^∙_γ(t-)=𝖷^∙_γ(t-1) while if ∙=pb then 𝖷^∙_γ(t-)=lim_s↑ t𝖷^∙_γ(s). Then, we have the explicit expression, with ℙ_𝐱 denoting the law of (𝖷^∙_γ(t);t≥ 0) starting from 𝐱,ℙ_𝐱(τ_col^∙ =∞)=(h_γ_i^∙(x_j))_i,j=1^N/∏_i=1^N h_γ_i^∙(x_i). By a Doob-transform <cit.> of the corresponding LGV/Karlin-McGregor formula <cit.> we haveℙ_𝐱(𝖷^∙_γ(t)=𝐲,τ_col^∙>t)=∏_i=1^N c_t,γ_i^∙∏_i=1^Nh_γ_i^∙(y_i)/h_γ_i^∙(x_i)(𝖳_t^∙(x_i,y_j))_i,j=1^N.Thus, by summing over 𝕎_N we obtain ℙ_𝐱(τ_col^∙>t)=∑_𝐲∈𝕎_N∏_i=1^N c_t,γ_i^∙∏_i=1^Nh_γ_i^∙(y_i)/h_γ_i^∙(x_i)(𝖳_t^∙(x_i,y_j))_i,j=1^N.By writing out the determinant explicitly in terms of the Leibniz formula and then instead of summingover 𝕎_N we sum over ℤ_+^N and subtract the sum over ℤ_+^N∖𝕎_N we get,(h_γ_i^∙(x_j))_i,j=1^N/∏_i=^N h_γ_i^∙(x_i)-∑_σ∈𝔖(N)sgn(σ) ∏_i=1^Nh_γ_i^∙(y_i)/h_γ_i^∙(x_i)ℙ_(x_σ(1),…,x_σ(N))(𝖷_γ^∙(t)∉𝕎_N).Here, 𝔖(N) denotes the symmetric group on N symbols. We now proceed to show that ℙ_(x_σ(1),…,x_σ(N))(𝖷_γ^∙(t)∉𝕎_N) → 0, as t →∞, for any permutation σ∈𝔖(N). Observe that, it suffices to prove the following. Suppose (𝗑_γ_1^∙(t);t≥ 0) and (𝗑_γ_2^∙(t);t≥ 0) are independent and follow the dynamics (𝖳_t,γ_1^∙)_t≥ 0 and (𝖳_t,γ_2^∙)_t≥ 0 respectively with initial conditions 𝗑_γ_i^∙(0)=x_i where x_1,x_2 ∈ℤ_+ are not necessarily ordered. Then, as t →∞ we have,ℙ(𝗑^∙_γ_1(t)<𝗑^∙_γ_2(t)|(𝗑^∙_γ_1(0),𝗑^∙_γ_2(0))=(x_1,x_2))→ 1,which completes the proof. This last claim can be established as follows. Let us consider the pure-birth case. We can then couple (𝗑_γ_1^pb(t);t≥ 0), (𝗑_γ_2^pb(t);t≥ 0) with two independent standard Poisson processes (𝗒_1(t);t≥ 0) and (𝗒_2(t);t≥ 0) of rates sup_k∈ℤ_+ a_k+γ_1 and inf_k∈ℤ_+ a_k+γ_2 respectively such that almost surely,𝗑_γ_2^pb(t)≥𝗒_2(t), 𝗑_γ_1^pb(t)≤𝗒_1(t), ∀ t≥ 0.Note that, since 𝗒_1,𝗒_2 are standard Poisson processes we have almost surely𝗒_1(t)/tt→∞⟶sup_k∈ℤ_+ a_k+γ_1, 𝗒_2(t)/tt→∞⟶inf_k∈ℤ_+ a_k+γ_2.Thus, we readily obtain ℙ(𝗑^pb_γ_1(t)<𝗑^pb_γ_2(t)|(𝗑^pb_γ_1(0),𝗑^pb_γ_2(0))=(x_1,x_2))≥ℙ(𝗒_1(t)<𝗒_2(t)|(𝗒_1(0),𝗒_2(0))=(x_1,x_2))t→∞⟶ 1,if we have inf_k∈ℤ_+ a_k+γ_2>sup_k∈ℤ_+ a_k+γ_1 which proves the claim. Finally, the Bernoulli and geometric cases follow by the exact same argument by comparing to the corresponding homogeneous dynamics. We note that, as long as one has (<ref>) then the whole argument goes through. The coupling with the homogeneous dynamics, which is where the conditions (<ref>), (<ref>), (<ref>) are required, gives a much stronger asymptotic statement than the one needed to establish (<ref>). 
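For two independent rate-one Poisson walks (the classical homogeneous setting underlying the Karlin-McGregor formula invoked at the start of the proof above), the identity ℙ_𝐱(𝖷(t)=𝐲, no coincidence on [0,t]) = (𝖳_t(x_i,y_j))_i,j=1^2 can be checked directly by simulation. The following sketch is purely illustrative; the starting points, end points, time horizon and trial count are arbitrary choices.

# Monte Carlo check of the Karlin-McGregor determinant for two independent
# rate-1 Poisson walks (homogeneous case).  All numerical values are arbitrary.
import math, random

def poisson_kernel(t, x, y):
    """Transition probability of a rate-1 Poisson walk from x to y in time t."""
    if y < x:
        return 0.0
    return math.exp(-t) * t ** (y - x) / math.factorial(y - x)

def run_two_walks(x, t, rng):
    """Run two independent walks started at x = (x1, x2), x1 < x2, up to time t.
    Return (final positions, True if the walks never coincided on [0, t])."""
    pos = list(x)
    clocks = [rng.expovariate(1.0), rng.expovariate(1.0)]   # next jump times
    collided = False
    while min(clocks) <= t:
        i = 0 if clocks[0] <= clocks[1] else 1
        pos[i] += 1
        if pos[0] == pos[1]:
            collided = True
        clocks[i] += rng.expovariate(1.0)
    return tuple(pos), not collided

def km_determinant(t, x, y):
    """det( P_t(x_i, y_j) )_{i,j=1,2} for the rate-1 Poisson kernel."""
    return (poisson_kernel(t, x[0], y[0]) * poisson_kernel(t, x[1], y[1])
            - poisson_kernel(t, x[0], y[1]) * poisson_kernel(t, x[1], y[0]))

if __name__ == "__main__":
    rng = random.Random(0)
    x, y, t, trials = (0, 1), (2, 3), 1.0, 200_000
    hits = sum(1 for _ in range(trials) if run_two_walks(x, t, rng) == (y, True))
    print("Monte Carlo estimate:", hits / trials)
    print("KM determinant     :", km_determinant(t, x, y))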
From our previous results (but a direct proof is also possible) we easily obtain strict positivity for the probability we just computed.We have, for any 𝐱∈𝕎_N, with ∙∈{pb, B,g} and the γ_i parameters satisfying the corresponding conditions in Proposition <ref>,ℙ_𝐱(τ_col^∙=∞)>0. This is a consequence of the formula given in Proposition <ref>above along with the recursive formula for 𝖧_(γ_1,…,γ_N)^∙ found in (<ref>) and Proposition <ref>. Putting everything together we obtain Theorem <ref>. Observe that, by conditioning on τ^∙_col>t+s, using the strong Markov property and taking s →∞ (by virtue of Proposition <ref> there are no issues of dividing by 0), we haveℙ(𝖷^∙,n.c._γ(t)=𝐲|𝖷^∙,n.c._γ(0)=𝐱) =ℙ_𝐱(𝖷^∙_γ(t)=𝐲|τ_col^∙=∞)=lim_s →∞ℙ_𝐲(τ^∙_col>s)/ℙ_𝐱(τ^∙_col>t+s)ℙ_𝐱(𝖷^∙_γ(t)=𝐲,τ_col^∙>t)=ℙ_𝐲(τ^∙_col=∞)/ℙ_𝐱(τ^∙_col=∞)ℙ_𝐱(𝖷^∙_γ(t)=𝐲,τ_col^∙>t).The conclusion then follows by plugging in the explicit formula from Proposition <ref>. § TRANSITION KERNELS AND DETERMINANTAL PROCESSES FOR THE EDGE PARTICLE SYSTEMS FOR GENERAL INITIAL CONDITION In this section we prove Theorem <ref>, by building on our preceding results, as Theorem <ref> and Theorem <ref> below.Define the Markov kernel 𝔏^(N) from 𝕎_N to 𝕀𝔸_N by the formula 𝔏^(N)(𝐱,(𝐲^(1),…,𝐲^(N)))=1_𝐲^(N)=𝐱∏_n=1^N-1𝔏_n+1,n(𝐲^(n+1),𝐲^(n)).Also, define 𝔏̂^(N)(𝐱,(𝐲^(1),…,𝐲^(N)))=𝔥_N(𝐱)𝔏^(N)(𝐱,(𝐲^(1),…,𝐲^(N))). Recall also the definition of 𝕎_N from (<ref>) which is the state space for (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0). We denote by ℰ_r^(N) and ℰ_l^(N) the projections on the right and left edges of 𝕀𝔸_N respectively:ℰ_r^(N)[(𝐱^(1),𝐱^(2),…,𝐱^(N))] =(x_1^(1),x_2^(2),…,x_N^(N))∈𝕎_N, ℰ_l^(N)[(𝐱^(1),𝐱^(2),…,𝐱^(N))] =(x_1^(N),x_1^(N-1),…,x_1^(1))∈𝕎_N.We can view ℰ_r^(N) and ℰ_l^(N) as Markov kernels from 𝕀𝔸_N to 𝕎_N and 𝕎_N respectively. Finally, define the Markov kernels 𝖤_N,r and 𝖤_N,l from 𝕎_N to 𝕎_N and 𝕎_N respectively by the compositions:𝖤_N,r=𝔏^(N)ℰ^(N)_r, 𝖤_N,l=𝔏^(N)ℰ^(N)_l.The proposition below is our starting point. In the setting of Theorem <ref>, with the notations and assumptions therein, recall that we denote by 𝔈_f_s,t,r^(N) and 𝔈_f_s,t,l^(N) the transition kernels from time s to time t of the autonomous systems (𝖷_1^(1)(t),𝖷_2^(2)(t),…,𝖷_N^(N)(t);t≥ 0) and (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0) on the right and left edge of the array. Then, we have the intertwinings𝔓_f_s,t^(N)𝖤_N,r(𝐱,𝐲) =𝖤_N,r𝔈_f_s,t,r^(N)(𝐱,𝐲), 𝐱,𝐲∈𝕎_N,𝔓_f_s,t^(N)𝖤_N,l(𝐱,𝐲) =𝖤_N,l𝔈_f_s,t,l^(N)(𝐱,𝐲), 𝐱∈𝕎_N, 𝐲∈𝕎_N. Let us write ℑ𝔄^(N)_f_s,t for the transition probabilities from time s to time t of the Markov chain (𝖷_i^(n)(t);t ≥ 0)_1≤ i ≤ n; 1≤ n ≤ N in 𝕀𝔸_N from Theorem <ref>. Then, since the evolution of (𝖷_1^(1)(t),𝖷_2^(2)(t),…,𝖷_N^(N)(t);t≥ 0) and (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0) is autonomous we haveℑ𝔄_f_s,t^(N)ℰ^(N)_r =ℰ^(N)_r𝔈_f_s,t,r^(N), ℑ𝔄_f_s,t^(N)ℰ^(N)_l =ℰ^(N)_l𝔈_f_s,t,l^(N).On the other hand, by virtue of Proposition <ref> we have𝔓_f_s,t^(N)𝔏^(N)=𝔏^(N)ℑ𝔄_f_s,t^(N).Combining the equations in the above displays we obtain the desired statement.We define, for x,y∈ℤ_+,ψ_r(x,y)=a_y^-11_x≤ yandψ_l(x,y)=a_y^-11_y<x.Write ψ_r^n(x,y) for the convolution of ψ_r(x,y) with itself n times, with ψ^0_r(x,y)=1_x=y, and similarly for ψ_l^n(x,y).Define the kernels 𝖤̂_N,r, 𝖤̂_N,l by𝖤̂_N,r(𝐱,𝐲) =𝔥_N(𝐱)𝖤_N,r(𝐱,𝐲),𝐱,𝐲∈𝕎_N, 𝖤̂_N,l(𝐱,𝐲) =𝔥_N(𝐱)𝖤_N,l(𝐱,𝐲), 𝐱∈𝕎_N, 𝐲∈𝕎_N. 𝖤̂_N,r has the following determinant expression, with 𝐱,𝐲∈𝕎_N,𝖤̂_N,r(𝐱,𝐲)=(ψ_r^N-j(x_i,y_j))_i,j=1^N,and similarly 𝖤̂_N,l, with 𝐱∈𝕎_N, 𝐲∈𝕎_N is given by 𝖤̂_N,l(𝐱,𝐲)=(ψ_l^j-1(x_i,y_j)))_i,j=1^N. 
Observe that, we have 𝖤̂_N,r(𝐱,𝐲)=∑_𝐱^(1),𝐱^(2),…,𝐱^(N-1) 𝐱^(N)=𝐱;(x_1^(1),x_2^(2),…,x_N^(N))=𝐲∏_i=1^N-11_𝐱^(i)≺𝐱^(i+1)∏_j=1^i1/a_x_j^(i), 𝐱,𝐲∈𝕎_N.We now observe that, each term in the sum above is in correspondence with a set of N non-intersecting paths in a certain LGV graph <cit.>. The LGV graph is given by the square grid ℤ_+×ℕ with horizontal edges directed to the right and vertical edges directed down. All horizontal edges have weight 1 and all vertical edges with horizontal co-ordinate x have weight a_x^-1. We are then looking at N non-intersecting paths from (x_1,N),…, (x_N,N) to (y_1,1),…,(y_N,N) which end with a vertical edge, see Figure <ref> for an illustration. Hence, by the LGV formula <cit.> we obtain 𝖤̂_N,r(𝐱,𝐲)=(Weight of such path(x_i,N) → (y_j,j))_i,j=1^N,𝐱,𝐲∈𝕎_N.Finally, it is not hard to see that in this LGV graphWeight of such path(x_i,N) → (y_j,j) =ψ_r^N-j(x_i,y_j)and this completes the proof for 𝖤̂_N,r. We now turn our attention to the left edge. Observe that,𝖤̂_N,l(𝐱,𝐲)=∑_𝐱^(1),𝐱^(2),…,𝐱^(N-1) 𝐱^(N)=𝐱;(x_1^(N),x_1^(N-1),…,x_1^(1))=𝐲∏_i=1^N-11_𝐱^(i)≺𝐱^(i+1)∏_j=1^i1/a_x_j^(i), 𝐱∈𝕎_N, 𝐲∈𝕎_N.We then observe that, each term in the sum above is in correspondence with a set of N non-intersecting paths in a different LGV graph <cit.>. This graph has vertex set ℤ_+×ℕ. It has horizontal edges from (x+1,n) to (x,n), namely directed to the left, of weight 1. Moreover, it has diagonal edges directed from (x+1,n+1) to (x,n) of weight a_x^-1. We are then looking at N non-intersecting paths from (x_1,N),…,(x_N,N) to (y_1,N),…,(y_N,1) ending with a diagonal edge, see Figure <ref> for an illustration. Then, by the LGV <cit.> formula we get𝖤̂_N,l(𝐱,𝐲)=(Weight of such path(x_i,N) → (y_j,N-j+1))_i,j=1^N,𝐱∈𝕎_N, 𝐲∈𝕎_N.Finally, we see that in this LGV graphWeight of such path(x_i,N) → (y_j,N-j+1) =ψ_l^j-1(x_i,y_j)and this gives the desired expression for 𝖤̂_N,l and completes the proof. A simple calculation gives the following. We define, for x,y∈ℤ_+,ψ_r^-1(x,y)=a_x(1_x=y-1_x+1=y)and ψ_l^-1(x,y)=a_x(1_x+1=y-1_x=y).Then, we have∑_m=0^∞ψ_r(x,m)ψ_r^-1(m,y) =∑_m=0^∞ψ_r^-1(x,m)ψ_r(m,y)=1_x=y, ∑_m=0^∞ψ_l(x,m)ψ_l^-1(m,y) =∑_m=0^∞ψ_l^-1(x,m)ψ_l(m,y)=1_x=y. In particular, by virtue of the lemma above, for any m,n∈ℤ, ψ_r^mψ_r^n(x,y)=ψ_r^m+n(x,y) and similarly ψ_l^mψ_l^n(x,y)=ψ_l^m+n(x,y). Note that, by viewing in the obvious way ψ_r^-1 and ψ_l^-1 as integral kernels with respect to counting measure on ℤ_+, we have for f on ℤ_+:ψ_r^-1f(x)=-a_x∇^+f(x)and ψ_l^-1f(x)=a_x∇^+f(x).We have the following explicit left inverses (we do not need the right inverse) for 𝖤̂_N,r and 𝖤̂_N,l.The kernel 𝖤̂_N,r^-1 defined by 𝖤̂_N,r^-1(𝐱,𝐲)=(ψ_r^-(N-i)(x_i,y_j))_i,j=1^N satisfies𝖤̂_N,r^-1𝖤̂_N,r(𝐱,𝐲)=1_𝐱=𝐲, 𝐱,𝐲∈𝕎_N.Similarly, the kernel 𝖤̂_N,l^-1 defined by 𝖤̂_N,l^-1(𝐱,𝐲)=(ψ_l^-(i-1)(x_i,y_j))_i,j=1^Nsatisfies 𝖤̂_N,l^-1𝖤̂_N,l(𝐱,𝐲)=1_𝐱=𝐲, 𝐱,𝐲∈𝕎_N. We consider the right edge first. We compute using the Cauchy-Binet formula[We note that we can use the Cauchy-Binet formula, by virtue of the form of the determinant formulae for 𝖤_N,r^-1 and 𝖤_N,r since we are computing 𝖤_N,r^-1𝖤_N,r. If we were trying to compute 𝖤_N,r𝖤_N,r^-1 instead then the Cauchy-Binet formula actually cannot be applied directly.]𝖤̂_N,r^-1𝖤̂_N,r(𝐱,𝐲)=(ψ_r^i-j(x_i,y_j))_i,j=1^N,𝐱,𝐲∈𝕎_N.We now show that the determinant on the right hand side boils down to 1_𝐱=𝐲. Clearly the diagonal terms give 1_𝐱=𝐲. We claim all other contributions to the determinant are zero. Take any permutation σ of {1,…,N} different from the identity. 
Then, there exist i>j (depending on σ) such that σ(i)<i, σ(j)>j and σ(i)<σ(j). Also, we observe that ψ_r^-1(x,·) is supported on {x,x+1} and more generally, for k≥ 1,ψ_r^-k(x,·)is supported on {x,x+1,…,x+k}. We now show at least one of the factors in the product∏_k=1^Nψ_r^k-σ(k)(x_k,y_σ(k))is zero. With i,j as above suppose ψ_r^i-σ(i)(x_i,y_σ(i))>0, for otherwise we are done (since i>σ(i), ψ_r^i-σ(i)(x,y)≥ 0). Then, we must have y_σ(i)≥ x_i. Since moreover x_i-x_j ≥ i-j and y_σ(j)-y_σ(i)≥σ(j)-σ(i) we get y_σ(j)-x_j≥ i-j+σ(j)-σ(i) >σ(j)-jwhich implies by observation (<ref>) that ψ_r^j-σ(j)(x_j,y_σ(j))=0 and completes the proof for the right edge. For the left edge, we again compute using the Cauchy-Binet formula 𝖤̂_N,l^-1𝖤̂_N,l(𝐱,𝐲)=(ψ_l^j-i(x_i,y_j))_i,j=1^N,𝐱,𝐲∈𝕎_N.As before, the diagonal terms give 1_𝐱=𝐲 and we show next that all other contributions to the determinant are zero. Again, take σ an arbitrary permutation and (depending on σ) i>j such that σ(i)<i, σ(j)>j and σ(i)<σ(j). As before, for k≥ 1,ψ_l^-k(x,·)is supported on {x,x+1,…,x+k}. We show that at least one of the factors in the product ∏_k=1^Nψ_l^σ(k)-k(x_k,y_σ(k))is zero. With i,j as above suppose ψ_l^σ(j)-j(x_j,y_σ(j))>0, for otherwise we are done. This implies y_σ(j)<x_j. Hence, we get y_σ(i)≤ y_σ(j)<x_j≤ x_iwhich implies, since σ(i)-i<0, ψ_l^σ(i)-i(x_i,y_σ(i))=0 and completes the proof. Putting everything together we obtain the following formulae for 𝔈_f_s,t,r^(N) and 𝔈_f_s,t,l^(N). Assume the conditions and notation of Proposition <ref>. Then, the transition kernel 𝔈_f_s,t,r^(N) of (𝖷_1^(1)(t),𝖷_2^(2)(t),…,𝖷_N^(N)(t);t≥ 0) is given by the explicit formula𝔈_f_s,t,r^(N)(𝐱,𝐲)=(ψ_r^-(N-i)𝖳_f_s,tψ_r^N-j(x_i,y_j))_i,j=1^N,𝐱,𝐲∈𝕎_N,while the transition kernel 𝔈_f_s,t,l^(N) of (𝖷_1^(N)(t),𝖷_1^(N-1)(t),…,𝖷_1^(1)(t);t≥ 0) is given by the explicit formula𝔈_f_s,t,l^(N)(𝐱,𝐲)=(ψ_l^-(i-1)𝖳_f_s,tψ_l^j-1(x_i,y_j))_i,j=1^N,𝐱,𝐲∈𝕎_N. We give the proof for 𝔈_f_s,t,r^(N) as the proof for 𝔈_f_s,t,l^(N) is completely analogous. Observe that, the intertwining (<ref>) can be written in terms of 𝖤̂_N,r and 𝖯_f_s,t^(N) instead of 𝖤_N,r and 𝔓_f_s,t^(N),𝖤̂_N,r𝔈_f_s,t,r^(N)(𝐱,𝐲)=𝖯_f_s,t^(N)𝖤̂_N,r(𝐱,𝐲),𝐱,𝐲∈𝕎_N.Using Proposition <ref> we then obtain𝔈_f_s,t,r^(N)(𝐱,𝐲)=𝖤̂_N,r^-1𝖯_f_s,t^(N)𝖤̂_N,r(𝐱,𝐲),𝐱,𝐲∈𝕎_N.The final expression then follows by an application of the Cauchy-Binet formula using the explicit expressions found in Propositions <ref> and <ref>. We now prove that 𝔈_f_s,t,r^(N)(𝐱,·) and 𝔈_f_s,t,l^(N)(𝐲,·) can be realised as marginals of certain measures on 𝕀𝔸_N with determinantal correlation functions. Assume the conditions and notation of Proposition <ref>.Let 𝐱∈𝕎_N and 𝐲∈𝕎_N.Then, the (signed) measure on 𝕀𝔸_N,[𝖤̂_N,r^-1𝖯_f_s,t^(N)𝔏̂^(N)](𝐱,(𝐳^(1),…,𝐳^(N)))=((-a_x_i∇_x_i^+)^N-i𝖳_f_s,t(x_i,z_j^(N)))_i,j=1^N ∏_n=1^N-1(ϕ(z_i^(n),z_j^(n+1)))_i,j=1^n+1,has 𝔈_f_s,t,r^(N)(𝐱,·) as its right edge marginal on coordinates (z_1^(1),z_2^(2),…,z_N^(N)). Moreover, the (signed) measure on 𝕀𝔸_N,[𝖤̂_N,l^-1𝖯_f_s,t^(N)𝔏̂^(N)](𝐲,(𝐳^(1),…,𝐳^(N)))=((a_y_i∇_y_i^+)^i-1𝖳_f_s,t(y_i,z_j^(N)))_i,j=1^N ∏_n=1^N-1(ϕ(z_i^(n),z_j^(n+1)))_i,j=1^n+1,has 𝔈_f_s,t,l^(N)(𝐲,·) as its left edge marginal on coordinates (z_1^(N),z_1^(N-1),…,z_1^(1)). In particular, both 𝔈_f_s,t,r^(N)(𝐱,·) and 𝔈_f_s,t,l^(N)(𝐲,·) are marginals of (signed) measures with determinantal correlation functions.By virtue of (<ref>) and the definition ofℰ^(N)_r we get that the measure 𝖤̂_N,r^-1𝖯_f_s,t^(N)𝔏̂^(N)(𝐱,·) on 𝕀𝔸_N has 𝔈_f_s,t,r^(N)(𝐱,·) as its right edge marginal.
The expression (<ref>) then simply follows by writing out explicitly all of the involved quantities. The argument for the left edge is completely analogous. Finally, the fact that the measures (<ref>) and (<ref>) give rise to determinantal correlation functions is again a consequence of the Eynard-Mehta theorem, see <cit.> (which makes sense even for signed measures).When the initial condition for the right edge system is 𝐱=(0,1,…,N-1) and similarly for the left edge system is 𝐲=(0,0,…,0), after a simple computation by noting that 𝔥_N((0,1,…,N-1);𝐚)=a_0^-(N-1)a_1^-(N-2)⋯ a^-1_N-2, (<ref>) and (<ref>) are both seen to be equal to,(𝖳_f_s,t(i-1,z_j^(N)))_i,j=1^N/𝔥_N((0,1,…,N-1);𝐚)∏_n=1^N-1(ϕ(z_i^(n),z_j^(n+1)))_i,j=1^n+1, which is nothing else than the distribution at t of the corresponding push-block dynamics in 𝕀𝔸_N if started from the fully-packed configuration at time s. It would be interesting to solve the corresponding biorthogonalisation problem and obtain an explicit expression for the correlation kernel of the determinantal measures from Theorem <ref>. This would be a substantial task, for example in the level/particle inhomogeneous setting this is the whole point of the papers <cit.>, see also <cit.>, and we leave this for future work.Finally, consider the setting of Definition <ref>, see also Section <ref>. Recall that particles in this setup follow either only geometric or only Bernoulli or only pure-birth dynamics and their evolution is time-homogeneous. Also recall the notation ∙∈{pb,B,g} corresponding to pure-birth, Bernoulli and geometric respectively. Let 𝔈_t,r^γ,∙,N and 𝔈_t,l^γ,∙,N denote the transition kernels of the right and left edge particles (𝖷_1^(1),∙(t),𝖷_2^(2),∙(t),…,𝖷_N^(N),∙(t);t ≥ 0) and (𝖷_1^(N),∙(t),𝖷_1^(N-1),∙(t),…,𝖷_1^(1),∙(t);t ≥ 0) respectively(we use a single time variable t since time is homogeneous). Define the Markov kernel 𝔏^γ,∙,N from 𝕎_N to 𝕀𝔸_N𝔏^γ,∙,N=(𝐱,(𝐲^(1),…,𝐲^(N)))=1_𝐲^(N)=𝐱∏_n=1^N-1Λ_n+1,n^γ,∙(𝐲^(n+1),𝐲^(n)),where Λ_n+1,n^γ,∙ is given in Definition <ref>. Under, the conditions of Proposition <ref> we have the following result.Let t≥ 0. Then, we have the intertwinings𝒫_t^γ,∙,N𝔏^γ,∙,Nℰ^(N)_r(𝐱,𝐲) =𝔏^γ,∙,Nℰ^(N)_r𝔈_t,r^γ,∙,N(𝐱,𝐲), 𝐱,𝐲∈𝕎_N,𝒫_t^γ,∙,N𝔏^γ,∙,Nℰ^(N)_l(𝐱,𝐲) =𝔏^γ,∙,Nℰ^(N)_l𝔈_t,l^γ,∙,N(𝐱,𝐲), 𝐱∈𝕎_N, 𝐲∈𝕎_N,where recall the transition kernel 𝒫_t^γ,∙,N was given in Definition <ref>.Exact same proof as Proposition <ref> making use of Proposition <ref>now instead. Similar but more involved arguments to the ones presented previously may be used to invert the kernels 𝔏^γ,∙,Nℰ^(N)_r and 𝔏^γ,∙,Nℰ^(N)_l. Then, analogues of Theorems <ref> and <ref> in this setting can be obtained.§ EXTREMAL MEASURES FOR THE INHOMOGENEOUS GELFAND-TSETLIN GRAPH We begin by connecting the quantities of interest, namely the measures ℳ_N^ω and Markov kernels Λ_N+1,N^𝐆𝐓_+(𝐚) to familiar objects we have seen in the previous sections. For all N ≥ 1, we have, with 𝐱∈𝕎_N and 𝐲∈𝕎_N+1,ℳ_N^ω(𝐱;𝐚) ≡𝔓_f_ω^(N)((0,1,…,N-1),𝐱),Λ_N+1,N^𝐆𝐓_+(𝐚)(𝐲,𝐱) ≡𝔏_N+1,N(𝐲,𝐱).Direct comparison of the formulae for the left and right hand side of each equality by noting that𝔥_N((0,1,…,N-1);𝐚)=a_0^-(N-1)a_1^-(N-2)⋯ a^-1_N-2,which is obtained by an elementary computation.Let ω be defined as in (<ref>). Then, (ℳ_N^ω(·;𝐚))_N=1^∞ form a coherent sequence of probability measures. This follows by virtue of Proposition <ref> above. The conditions (<ref>) on the sequence ω are so that f_ω∈𝖧𝗈𝗅(ℍ_-R) with R>R(𝐚) and the corresponding kernel 𝔓_f_ω^(N) by virtue of its probabilistic description is non-negative. 
Finally, the intertwining in Proposition <ref> gives the coherency property. Observe that, by virtue of Proposition <ref>, 𝔓_f_ω^(N)((0,1,…,N-1),𝐱)≡ℳ_N^ω(𝐱;𝐚), has the following probabilistic interpretation. Starting from the fully-packed configuration, we run (a possibly infinite number of) sequential-update Bernoulli steps with parameters (α_i)_i=1^∞, Warren-Windridge geometric steps with parameters (β_i)_i=1^∞ and continuous-time pure-birth dynamics for time t. Then, the resulting distribution of the N-th row of the array, for any N≥ 1, is given by 𝔓_f_ω^(N)((0,1,…,N-1),·). It remains to prove extremality. The argument goes via a family of symmetric functions[These are closely related to the functions 𝖧_(γ_1,…,γ_N)^∙ from Section <ref>, which are also very closely related to the factorial Schur polynomials <cit.> as we shall see in the proof. In the homogeneous case a_x ≡ 1, they essentially boil down to (a normalised version of) the standard Schur polynomials <cit.>.] ℱ_𝐱(𝐰;𝐚) defined in (<ref>) below, a multivariate extension of the polynomials p_x(w), to an application of De-Finetti's theorem <cit.>. The main idea is to consider the generating function[For other uses of such generating functions, in the homogeneous case a_x≡ 1 where they go by the name Schur generating functions, and in particular for applications to questions of global asymptotics,see <cit.>.] (<ref>) of a coherent sequence of measures (μ_N)_N=1^∞ with respect to ℱ_𝐱. Remarkably, in the case of the measures (ℳ_N^ω(·;𝐚))_N=1^∞ this generating function factorises, see (<ref>), and along with a certain positivity property of ℱ_𝐱 this allows us to make use of De-Finetti's theorem <cit.>. The positivity property is where we require the condition inf_k a_k ≥ 1 on 𝐚. For 𝐱∈𝕎_N define the following function 𝖥_𝐱, a multivariate polynomial,by the explicit formula:𝖥_𝐱(w_1,…,w_N;𝐚)=(p_x_j(w_i))_i,j=1^N/((-w_i)^j-1)_i,j=1^N.By taking the limit w_1,…,w_N → 0 we get, after recalling the representation of 𝔥_N from Lemma <ref>,𝖥_𝐱(0,…,0;𝐚)=𝔥_N(𝐱)>0.We then define for 𝐱∈𝕎_N, the function ℱ_𝐱, the normalized version of 𝖥_𝐱 at 𝐰=(0,…,0), so that ℱ_𝐱(0,…,0;𝐚)≡ 1 by,ℱ_𝐱(w_1,…,w_N;𝐚)=𝖥_𝐱(w_1,…,w_N;𝐚)/𝖥_𝐱(0,…,0;𝐚).A simple computation gives, for any 𝐲∈𝕎_N+1, the identityℱ_𝐲(w_1,…,w_N-1,0;𝐚)=∑_𝐱≺𝐲𝔏_N+1,N(𝐲,𝐱)ℱ_𝐱(w_1,…,w_N-1;𝐚).We now observe that we have the following representation of 𝖥_𝐱 in terms of factorial Schur polynomials s_λ(·|·), see<cit.>,∏_j=1^N ∏_l=0^x_j-1a_x_l𝖥_𝐱(1-z_1,…,1-z_N;𝐚) =(∏_l=0^x_j-1(z_i+a_l-1))_i,j=1^N/(z_i^j-1)_i,j=1^N=(∏_l=1^λ_j+N-j(z_i+ã_l))_i,j=1^N/(z_i^N-j)_i,j=1^N=s_λ(𝐳|𝐚̃),where the partition λ=(λ_1,λ_2,λ_3,…) and sequence 𝐚̃ (indexed by ℕ instead of ℤ_+) are given in terms of 𝐱 and 𝐚 by λ_j=x_N-j+1-N+j and ã_l=a_l-1-1 ≥ 0 and s_λ(𝐳|𝐚̃) is the factorial Schur polynomial in variables 𝐳 indexed by the partition λ and with parameter sequence 𝐚̃, see <cit.>. Here, we note that the partition λ has length, denoted by l(λ), at most N, see <cit.>. From the combinatorial formula <cit.> for the factorial Schur polynomial we can write s_λ(z_1,…,z_N|𝐚̃)=∑_μ∈ℤ_+^Nc(μ|λ;𝐚̃)z_1^μ_1z_2^μ_2⋯ z_N^μ_N,with c(μ|λ;𝐚̃)≥ 0, since ã_l≥ 0 by our assumption that inf_k∈ℤ_+a_k≥ 1. Abusing notation we will also write c(μ|𝐱;𝐚) for c(μ|λ;𝐚̃) under the correspondence between 𝐱 and λ and 𝐚 and 𝐚̃ above. 
Moreover, since s_λ(𝐳|𝐚̃) is a symmetric polynomial in the variables z_i we must have, for any permutation σ of {1,…,N},c(μ_1,…, μ_N|λ;𝐚̃)=c(μ_σ(1),…, μ_σ(N)|λ;𝐚̃).In particular, we also have s_λ(z_1,…,z_N|𝐚̃)=∑_μ partition,l(μ)≤ Nc(μ|λ;𝐚̃)m_μ(z_1,…,z_N),where m_μ is the monomial symmetric polynomial associated to a partition μ, see <cit.>. By making use of the connection between 𝖥_𝐱 and the factorial Schur polynomial we obtainℱ_𝐱(1-z_1,…,1-z_N;𝐚)=∑_μ∈ℤ_+^Nξ(μ|𝐱;𝐚)z_1^μ_1z_2^μ_2⋯ z_N^μ_N,where ξ(·|𝐱;𝐚) is a probability measure on ℤ_+^N (since ℱ_𝐱(0)=1) satisfying for any permutation σ of {1,…,N}, ξ(μ_1,…, μ_N|𝐱,𝐚)=ξ(μ_σ(1),…, μ_σ(N)|𝐱,𝐚).We now define a map from coherent sequences of measures to certain sequences of analytic functions on the unit polydisk given by,(μ_N)_N=1^∞ ↦ (𝒯_Nμ_N)_ N=1^∞, [𝒯_Nμ_N](z_1,…,z_N) =∑_𝐱∈𝕎_Nμ_N(𝐱)ℱ_𝐱(1-z_1,…,1-z_N;𝐚).The fact that this map is well-defined can be seen as follows. Since μ_N is a probability measure on 𝕎_N and by virtue of the expansion (<ref>) the function [𝒯_Nμ_N](z_1,…,z_N) converges uniformly on 𝔻^N, where 𝔻={z∈ℂ:|z|≤ 1} is the closed unit disk, and it is analytic in its interior (𝔻^∘)^N. It is moreover, the characteristic function of an exchangeable measure on ℤ_+^N (the convolution of μ_N and ξ). Since (μ_N)_N=1^∞ is coherent an easy computation using (<ref>) reveals that, for all N ≥ 1,[𝒯_Nμ_N](z_1,…,z_N-1,0)=[𝒯_N-1μ_N-1](z_1,…,z_N-1).In particular, by virtue of this consistency, the sequence (𝒯_Nμ_N)_N=1^∞can be viewed (via a projective limit) as a function on 𝔻^∞ and it is the characteristic function of an exchangeable measure on ℤ_+^∞. The key property of the transform 𝒯_N is that it factorizes in the case of ℳ_N^ω. Namely, a computation using the Cauchy-Binet formula and Lemma <ref> gives [𝒯_Nℳ_N^ω](z_1,…,z_N)=∏_i=1^N f_ω(1-z_i), valid in a small (this restriction comes from Lemma <ref>) neighourhoud of 𝐳=(1,…,1), where we have used (<ref>) to seethat ℱ_(0,…,N-1)(w_1,…,w_N;𝐚)≡ 1. Observe that, since f_ω is holomorphic in ℍ_-β_1^-1 then the function (z_1,…,z_N)↦∏_i=1^N f_ω(1-z_i) is analytic in (𝔻^∘)^N. Hence, by the identity theorem for analytic functions we can extend equality (<ref>) first to 𝐳∈(𝔻^∘)^N and by continuity to 𝔻^N.Hence, by De-Finetti's theorem <cit.>, for any f_ω, the corresponding function (𝒯_Nℳ_N^ω)_N=1^∞ is an extreme point in the convex set of characteristic functions of exchangeable measures on ℤ_+^∞ by virtue of the factorization property (<ref>). Thus, since the map that takes (μ_N)_N=1^∞↦(𝒯_Nμ_N)_N=1^∞ is affine, and as we show next also injective, (ℳ_N^ω)_N=1^∞ is an extreme point in the original convex set of coherent sequences of probability measures. It remains to show that the map given by (μ_N)_N=1^∞↦(𝒯_Nμ_N)_N=1^∞ is injective. Let N≥ 1 be fixed but arbitrary. We show that given the function 𝒯_Nμ_N(𝐳) we can recover μ_N uniquely. We give a combinatorial argument although a complex analytic approach using a suitable orthogonality property of the functions ℱ_𝐱 isalso possible. Observe that, if we know the function 𝒯_Nμ_N(𝐳) we then know itscollection of Taylor coefficients at (0,…,0), namely the coefficients of the terms z_1^μ_1z_2^μ_2⋯ z_N^μ_N, for μ a partition, l(μ)≤ N, that we denote by (𝔳_μ)_μ partition,l(μ)≤ N. In particular, by looking at 𝔳_μ we have the equality∑_𝐱∈𝕎_Nμ_N(𝐱)g_N(𝐱)c(μ|𝐱;𝐚)=𝔳_μ,where g_N(𝐱) is an explicit non-zero function involving 𝔥_N(𝐱) and the a_i's. Having knowledge of these quantities we want to solve for μ_N(𝐱). 
Towards this end, let us view the variable 𝐱 as a partition λ and the sequence 𝐚 as the sequence 𝐚̃ under the correspondence discussed earlier, λ_j=x_N-j+1-N+j and ã_l=a_l-1-1. In particular, we can rewrite (<ref>) in this notation, where μ_N(λ),g_N(λ) denote the values of μ_N(𝐱),g_N(𝐱) under the above correspondence between 𝐱 and λ and 𝐚 and 𝐚̃, as follows:∑_λ partition,l(λ)≤ Nμ_N(λ)g_N(λ)c(μ|λ;𝐚̃)=𝔳_μ.We need one final well-known fact. Both {s_λ(·|𝐚̃)}_λ partition,l(λ)≤ N and {m_μ(·)}_μ partition,l(μ)≤ N form bases for the ring of symmetric polynomials in N variables, see <cit.>. Then, by virtue of (<ref>), the matrix [𝐀_μλ]_μλ=[c(μ|λ;𝐚̃)]_μλ is the change of basis matrix which is thus invertible. Hence, from (<ref>) we can solve for μ_N:μ_N(λ)=1/g_N(λ)[𝐀^-1𝔳]_λand the desired conclusion follows. Finally, extremal coherent sequences (ℳ_N^ω)_N=1^∞ indexed by points ω are distinct for distinct ω.Suppose ω≠ω̃, then the sequences (ℳ_N^ω)_N=1^∞ and (ℳ_N^ω̃)_N=1^∞ are distinct.This is a direct consequence of Lemma <ref> since f_ω≠ f_ω̃ (and also implicitly follows from the proof above). § DUALITY VIA A HEIGHT FUNCTION We now prove the results stated in Section <ref>. Recall our labelling convention from the second paragraph of Section <ref>. To ease notation we will frequently drop dependence on the time variable t if there is no risk of confusion. Recall the terminology that for a configuration (x_k^(n))_0≤ k ≤ n; n ≥ 0 in 𝕀𝔸_∞,(x_k^(n))_n≥ k is called the k-th column of the configuration.Observe that, we can write the map 𝖧𝗀𝗍 as follows ((x_0^(j))_j≥ 0, (x_1^(j))_j≥ 1, (x_2^(j))_j≥ 2,…)↦((𝗁_0(j))_j≥ 0, (𝗁_1(j)+1)_j≥ 1, (𝗁_2(j)+2)_j≥ 2,…)with the map fromthe i-th column of (x_i^(j))_0≤ i ≤ j;j≥ 0 to the i-th column of 𝖧𝗀𝗍((x_i^(j))_0≤ i ≤ j;j≥ 0) given by, (x_i^(j))_j≥ i↦(𝗁_i(j)+i)_j≥ i. Here, we note that 𝗁_i(·) is only dependent on the i-th column (x_i^(j))_j≥ i of the configuration (x_i^(j))_0≤ i ≤ j;j≥ 0. Then, observe that for each i∈ℤ_+, (x_i^(j)-i)_j≥ i↦(𝗁_i(j))_j≥ i is simply the map that takes a partition to its conjugate partition. This map is an involution. The conclusion follows.We first prove the statement for the continuous time pure-birth dynamics. Observe that, if there exists, with positive probability, a finite τ_*>0 such that 𝖷(τ_*)∉𝕀𝔸_∞^*, then on this event of positive probability, for all t≥τ_* we have 𝖷(t)∉𝕀𝔸_∞^*. Hence, it suffices to show that for any fixed t≥ 0, almost surely 𝖷(t) ∈𝕀𝔸_∞^*. Suppose not. So with positive probability there exists i∈ℤ_+ such that 𝖷_i^(j)(t)>i for all j≥ i. Now, let n_i be the first k such that 𝖷_i^(k)(0)=x_i^(k)=i (by the interlacing, for all k≥ n_i we must have 𝖷_i^(k)(0)=x_i^(k)=i). If 𝖷_i^(j)(t)>i for all j≥ i then for all j≥ n_i at least one particle from each “diagonal" of particles (we drop time dependence from the notation):𝖷_0^(j-i), 𝖷_1^(j-i+1),…,𝖷_i^(j),has moved of its own volition by time t, see Figure <ref> for an illustration. Note that this is becausepushing of particles only occurs along diagonals from lower to higher levels. For each diagonal j≥ n_i consider the exponential clock of the particle that tries to move first. These clocks are independent for different diagonals. Moreover, by our assumption on the environment θ all these clocks have uniformly bounded rates. Thus, the event that they all ring by time t has probability zero, which gives a contradiction. In discrete time it suffices to show that the process is in 𝕀𝔸_∞^* after a single time step. 
We use the same argument, but instead of looking at exponential random variables with bounded rates we are dealing with infinitely many independent 0-1 random variables (corresponding to whether at least one particle from each diagonal moves) with success probabilities uniformly strictly between 0 and 1. The event that all of them are 1 has probability zero. Observe that, the evolution of the first k columns of (𝖷(t);t≥ 0), namely (𝖷_i^(j)(t);t≥ 0)_0≤ i ≤ k-1; j ≥ i in all three types of dynamics is autonomous. Moreover, under 𝖧𝗀𝗍 it is mapped to the first k columns of (𝖧𝗀𝗍(𝖷(t));t≥ 0). Hence, it suffices to prove the result for the restriction of the dynamics on any fixed number of consecutive columns starting from the first one. Moreover, we restrict our attention to the dynamics of the first two columns as these involve both types of possible interactions (pushing and blocking) between particles. We first consider the continuous-time pure-birth case. We also drop dependence on time from the notation from now on.Suppose the exponential clock of particle 𝖷_0^(j) which is at spatial location x rings. This happens at rate θ(x,j). Suppose the particle is not blocked. Then, it moves to location x+1. We note that since 𝖷_0^(j) was not blocked before attempting to move we must have had 𝗁_0(x)=j just before the move. After the move we get 𝗁_0(x)=j+1. In particular, we get that 𝖧𝗀𝗍(𝖷)_0^(x) moved from j to j+1 and this happened with rate θ(x,j)=θ̂(j,x).If moreover 𝖷_1^(j+1) was at location x+1 just before the clock rang, then it is pushed to location x+2. Furthermore, just before this pushing move we must have had 𝗁_1(x+1)=j which after the pushing becomes j+1. In particular, 𝖧𝗀𝗍(𝖷)_1^(x+1)=𝗁_1(x+1)+1 is pushed from j+1 to j+2. The pushing of the 𝖷 particles and thus of the 𝖧𝗀𝗍(𝖷) particles is propagated to higher levels in this fashion. See Figure <ref> for an illustration. Now, if 𝖷_0^(j) was blocked before the clock rang, namely we had 𝖷_0^(j)=𝖷_0^(j-1)=x then nothing happens for either the 𝖷 process and thus for the 𝖧𝗀𝗍(𝖷) process as well. In the reverse direction, if 𝖧𝗀𝗍(𝖷)_0^(x)=𝖧𝗀𝗍(𝖷)_0^(x-1) then this implies that there is no j such that 𝖷_0^(j)=x and hence 𝖧𝗀𝗍(𝖷)_0^(x)=𝗁_0(x) cannot change/move. Thus, we observe that the dynamics described above for 𝖧𝗀𝗍(𝖷) are exactly the continuous-time pure-birth push-block dynamics in environment θ̂ and this completes the proof of the continuous-time case. We now turn to the discrete-time case. It suffices to prove that 𝖧𝗀𝗍 maps the Warren-Windridge geometric dynamics in environment θ to sequential-update Bernoulli dynamics in environment θ̂. Since 𝖧𝗀𝗍 is an involution the reverse statement also follows. We will require the following observation. First, recall that in the sequential-update Bernoulli dynamics we update each level of the array sequentially. However, we can also obtain the same configuration at the end of the time-step if we follow a slightly different update rule. Let us call a collection of particles from the same column a stack of particles if they have the same spatial location. Moreover, we call two stacks of particles from consecutive columns adjacent if their spatial location differs by 1. Consider the following updating rules (with push-block interactions between particles being as for sequential-update Bernoulli dynamics). For each individual stack of particles lower-level particles are updated first. For a sequence of adjacent stacks of particleswe update stacks of lower-indexed columns first. 
Beyond these rules we can update particles in any order. Then, we can observe that at the end of this time-step (completion of the updating process) we obtain the same configuration as for the sequential-update Bernoulli dynamics, assuming of course we use the same Bernoulli random variables to decide whether each particle moves. See Figure <ref> for an illustration.Assume that the 𝖷 process follows the Warren-Windridge geometric dynamics. Suppose we are about to update particle 𝖷_0^(j) which is at location x and assume that particle 𝖷_0^(j-1) was at location m at the end of the last time-step. Then, 𝖷_0^(j) moves to location y, with x ≤ y<m, with probability(1-θ(y,j))∏_k=x^y-1θ(k,j)and to y=m with probability ∏_k=x^mθ(k,j). We can view this as performing sequentially independent Bernoulli trials with success probabilities θ(k,j) of whether to move from k to k+1. Thus, from the point of view of 𝖧𝗀𝗍(𝖷), sequentially each 𝖧𝗀𝗍(𝖷)_0^(k)=𝗁_0(k)=j, for x≤ k <m decides whether to jump from j to j+1 with probability θ(k,j)=θ̂(j,k), and if one particle in this stack decides to stay put then all particles on higher levels stay put. See Figure <ref> for an illustration. Now, regarding the pushing interaction, suppose that 𝖷_1^(j+1) was at a spatial location x̃≤ y. Then, 𝖷_1^(j+1)gets pushed to location y+1. This implies that 𝖧𝗀𝗍(𝖷)_1^(k)=𝗁_1(k)+1=j+1 for x̃≤ k ≤ y+1 all get pushed from j+1 to j+2. Moreover, such a pushed particle 𝖧𝗀𝗍(𝖷)_1^(k) cannot move again in this time-step since in the Warren-Windridge geometric dynamics the particles of the 𝖷 process are blocked by the positions of the lower-level particles at the end of the previous time-step (in particular, no more particles from the column with index 1 can get past x̃). When 𝖷 particles are blocked, nothing happens as in continuous time. We also note that the usual update from lower to higher levels of Warren-Windridge geometric dynamics for 𝖷 gives rise to update rules as in the previous paragraph for 𝖧𝗀𝗍(𝖷), which as explained are equivalent to sequential-update Bernoulli dynamics. Thus, we observe that the dynamics described above for 𝖧𝗀𝗍(𝖷) are exactly the sequential update Bernoulli push-block dynamics in environment θ̂ and this completes the proof of the theorem.Observe that, the restriction of 𝖧𝗀𝗍 to the first column, given by the function j↦𝗁_0(j), is simply the standard height function associated to (𝖷_0^(j))_j≥ 0 viewed as a corner growth model. Then, it is easy to see that the evolution of the height function, viewed as a particle system, follows the corresponding dynamics from Theorem <ref> in environment θ̂ (we are swapping x and y), see Figure <ref> for an illustration.§ THE DOMINO TILING SHUFFLING ALGORITHM CONNECTION§.§ The Aztec diamond graph, probability measures on dimers, square probabilities, gauge equivalence, particles Instead of considering domino tilings of the Aztec diamond it is equivalent, and more convenient, to consider dimer coverings of the associated Aztec diamond graph. The Aztec diamond graph 𝖠𝖦_N of size N, consisting of N^2 squares whose vertices and edges constitute the vertex and edge set, is defined as in Figure <ref>. We associate to it a coordinate system as in Figure <ref>, with space-level coordinate (x,n) being the centre of the corresponding square. We identify a square with its space-level coordinate so that 𝖠𝖦_N consists of squares {(x,n):0≤ x ≤ N-1, 1≤ n ≤ N}. Each square consists of 4 edges and we call these north, south, west and east in the obvious way, see Figure <ref>. 
We denote an edge 𝖾 of 𝖠𝖦_N by 𝖾=(∙,(x,n)) where (x,n) gives the square the edge belongs to and ∙∈{n,s,w,e} identifies which of the four edges of that square it is. A dimer covering of 𝖠𝖦_N is a collection of edges of 𝖠𝖦_N such that every vertex is covered and no two edges are incident on the same vertex. We denote the set of all dimer coverings of 𝖠𝖦_N by 𝖣𝖢_N. It is easy to see that a dimer covering of 𝖠𝖦_N is equivalent to a tiling of the Aztec diamond of size N by dominos, see for example Figure <ref> for an illustration. It is well-known, see <cit.>, that |𝖣𝖢_N|=2^N(N+1)/2.A weighting 𝒲 of 𝖠𝖦_N is a function from the edge set of 𝖠𝖦_N to (0,∞). We write 𝒲_𝖾 for its value at the edge 𝖾. Given a weighting 𝒲 of 𝖠𝖦_N we define the probability measure ℙ_𝒲^(N) on dimer coverings 𝖣𝖢_N as followsℙ_𝒲^(N)(𝔡)=1/Z_𝒲∏_𝖾∈𝔡𝒲_𝖾, 𝔡∈𝖣𝖢_N,where Z_𝒲=∑_𝔡∈𝖣𝖢_N∏_𝖾∈𝔡𝒲_𝖾 is the normalisation constant/partition function. We observe that, by virtue of strict positivity of 𝒲, the measure ℙ^(N)_𝒲 is supported on the whole of 𝖣𝖢_N. Given a weighting 𝒲 of 𝖠𝖦_N as above, we associate certain probabilities ρ_𝒲 to each square (x,n) of 𝖠𝖦_N as follows:ρ_𝒲(x,n)=𝒲_w,(x,n)𝒲_e,(x,n)/𝒲_w,(x,n)𝒲_e,(x,n)+𝒲_n,(x,n)𝒲_s,(x,n). Observe that, ρ_𝒲(x,n) is strictly between 0 and 1.An elementary gauge transformation of a weighting 𝒲 of 𝖠𝖦_N is a multiplication of the weights of edges incident to a single vertex in 𝖠𝖦_N by the same number in (0,∞). See Figure <ref>. A gauge transformation is the application of a finite number of elementary gauge transformations. We say that two weightings 𝒲 and 𝒲̃ of 𝖠𝖦_N are gauge-equivalent if one can be obtained from the other by a gauge transformation. The following lemma is easy to see. Suppose 𝒲 and 𝒲̃ are gauge-equivalent weightings of 𝖠𝖦_N. Then, we haveℙ_𝒲^(N)=ℙ^(N)_𝒲̃and ρ_𝒲(x,n)=ρ_𝒲̃(x,n)for all squares(x,n)in 𝖠𝖦_N.Given a dimer cover 𝔡∈𝖣𝖢_N of 𝖠𝖦_N we can associate to it a particle configuration by putting a particle in each square with a south or east dimer, see Figure <ref>. The particle inherits the coordinates of the square. It is a combinatorial fact that there exist exactly n particles at level n. We order them in terms of their horizontal x coordinate so that the particle configuration at level n is in 𝕎_n. We note that the particle configuration is not one-to-one with the dimer covering since some information is lost. For example squares corresponding to particles can be covered by either a north-south or west-east dimer pair. This information can be recovered by keeping track of two sets of particles and extending the state space, see <cit.>, but we will not do it here.§.§ Urban renewal Let k≥ 1 be arbitrary. We now define a map 𝒰ℛ_k^k+1 that takes a weighting 𝒲 of 𝖠𝖦_k+1 to a weighting 𝒰ℛ_k^k+1(𝒲) of 𝖠𝖦_k. This map is called the urban renewal in the literature <cit.>. It is also known as the spider move. It is of central importance in the shuffling algorithm described shortly as it gives the sampling probabilities at the various iterations.
It is defined as follows.Given a weighting 𝒲 of 𝖠𝖦_k+1, the weighting 𝒰ℛ_k^k+1(𝒲) of 𝖠𝖦_k is defined as follows, with 0≤ x ≤ k-1 and 1≤ n ≤ k,𝒰ℛ_k^k+1(𝒲)_e,(x,n) =𝒲_e,(x,n)/𝒲_e,(x,n)𝒲_w,(x,n)+𝒲_s,(x,n)𝒲_n,(x,n) , 𝒰ℛ_k^k+1(𝒲)_w,(x,n) = 𝒲_w,(x+1,n+1)/𝒲_e,(x+1,n+1)𝒲_w,(x+1,n+1)+𝒲_s,(x+1,n+1)𝒲_n,(x+1,n+1), 𝒰ℛ_k^k+1(𝒲)_n,(x,n) = 𝒲_n,(x+1,n)/𝒲_e,(x+1,n)𝒲_w,(x+1,n)+𝒲_s,(x+1,n)𝒲_n,(x+1,n) ,𝒰ℛ_k^k+1(𝒲)_s,(x,n) = 𝒲_s,(x,n+1)/𝒲_e,(x,n+1)𝒲_w,(x,n+1)+𝒲_s,(x,n+1)𝒲_n,(x,n+1) .See Figure <ref> for an illustration. By construction 𝒰ℛ_k^k+1(𝒲) inherits the strict positivity of 𝒲 and thus the corresponding probability measure ℙ_𝒰ℛ_k^k+1(𝒲)^(k) on 𝖣𝖢_k is supported on the whole of 𝖣𝖢_k. We finally use the notation, with k≤ n,𝒰ℛ_k^n=𝒰ℛ_n-1^n𝒰ℛ^n-1_n-2⋯𝒰ℛ_k^k+1with the convention that 𝒰ℛ_n^n is the identity map. §.§ The shuffling algorithmLet N≥ 1 be fixed and consider a weighting 𝒲 of 𝖠𝖦_N. We now describe an algorithm, called the shuffling algorithm <cit.>, which generates a random dimer cover 𝔡 of 𝖠𝖦_N distributed according to ℙ_𝒲^(N). We proceed in an inductive fashion. Beginning with 𝖠𝖦_1 we cover the single square (0,1) with a west-east pair of dimers with probability ρ_𝒰ℛ^N_1(𝒲)(0,1) or with a north-south pair of dimers with probability 1-ρ_𝒰ℛ^N_1(𝒲)(0,1). Then, after k steps of the algorithm we have generated a random dimer covering 𝔡^(k)∈𝖣𝖢_k and we do the following:* We embed 𝖠𝖦_k into 𝖠𝖦_k+1 so that square (x,n) in 𝖠𝖦_k consists of the west, north, east and south edges of squares (x,n), (x,n+1), (x+1,n+1) and (x+1,n) in 𝖠𝖦_k+1 respectively as depicted in the Figure <ref>. This embeds 𝔡^(k) into a subcollection of edges of 𝖠𝖦_k+1.* If in this embedding two dimers of 𝔡^(k) belong to the same square of 𝖠𝖦_k+1 (in this embedding) we remove them. See Figure <ref> for an illustration.* We then move all dimers by one edge in the opposite direction of their names (viewed as dimers of 𝖠𝖦_k+1). Namely, a north dimer moves down by one, a south dimer moves up by one, a west dimer moves right by one and an east dimer moves left by one. See Figure <ref> for an illustration.* This leaves a number of squares not covered by any dimers which are filled in the following fashion. If square (x,n) is empty it is covered with a west-east dimer pair with probability ρ_𝒰ℛ^N_k+1(𝒲)(x,n) and covered with a north-south dimer pair with probability 1-ρ_𝒰ℛ^N_k+1(𝒲)(x,n). This gives a random element 𝔡^(k+1) of 𝖣𝖢_k+1.We record a couple of observations about the algorithm. First, items (1), (2), (3) in the description of the algorithm do not depend on the weighting 𝒲 in any way. Second, randomness only comes in item (4) of the description of the algorithm and moreover, at step k+1, it only depends on the weighting 𝒰ℛ_k+1^N(𝒲) through the square probabilities ρ_𝒰ℛ_k+1^N(𝒲). We have the following remarkable theorem due to Propp <cit.>, see the earlier papers <cit.> for the uniform case.The random dimer cover 𝔡^(k) of 𝖠𝖦_k obtained after k steps of the shuffle is distributed according to ℙ_𝒰ℛ^N_k(𝒲)^(k). In particular, after N steps we obtain a random dimer covering 𝔡=𝔡^(N) of 𝖠𝖦_N distributed according to ℙ_𝒲^(N), our target distribution. §.§ Connection to dynamics on interlacing arrays Observe that, by virtue of the map from dimer configurations to particles, the shuffling algorithm gives rise to a sequence of random particle configurations. 
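In passing, we note that the only model-dependent input of the algorithm, namely the square probabilities used in step (4), is straightforward to compute in practice by iterating the urban renewal map. The sketch below is a direct transcription of the four urban renewal formulae and of the square probabilities ρ_𝒲; the dictionary-based representation of a weighting and the example weights are our own illustrative choices and are not prescribed by the algorithm.

# Urban renewal UR_k^{k+1} and square probabilities rho_W, with a weighting of
# AG_m stored as a dictionary W[(x, n)] = {'e': ., 'w': ., 'n': ., 's': .} over
# the squares 0 <= x <= m-1, 1 <= n <= m.  Container format and example weights
# are illustrative assumptions only.

def rho(W, x, n):
    """Square probability rho_W(x, n) = W_w W_e / (W_w W_e + W_n W_s)."""
    sq = W[(x, n)]
    we, ns = sq['w'] * sq['e'], sq['n'] * sq['s']
    return we / (we + ns)

def urban_renewal(W, k):
    """Map a weighting of AG_{k+1} to the weighting UR_k^{k+1}(W) of AG_k."""
    def D(x, n):   # the common denominator W_e W_w + W_s W_n at square (x, n)
        sq = W[(x, n)]
        return sq['e'] * sq['w'] + sq['s'] * sq['n']
    new_W = {}
    for x in range(k):              # 0 <= x <= k-1
        for n in range(1, k + 1):   # 1 <= n <= k
            new_W[(x, n)] = {
                'e': W[(x, n)]['e'] / D(x, n),
                'w': W[(x + 1, n + 1)]['w'] / D(x + 1, n + 1),
                'n': W[(x + 1, n)]['n'] / D(x + 1, n),
                's': W[(x, n + 1)]['s'] / D(x, n + 1),
            }
    return new_W

if __name__ == "__main__":
    k = 3                                          # start from AG_{k+1} = AG_4
    a = [0.3, 0.5, 0.4, 0.6]                       # made-up inhomogeneity
    W = {(x, n): {'e': 1.0, 'n': 1.0, 'w': a[x], 's': 1.0 - a[x]}
         for x in range(k + 1) for n in range(1, k + 2)}
    W_small = urban_renewal(W, k)                  # a weighting of AG_3
    print([round(rho(W_small, x, 1), 4) for x in range(k)])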
We will be interested in the evolution of particles under this algorithm, thinking of the number of iterations of the shuffle as discrete time.Define for j≤ t ≤ N, 1≤ i ≤ j,𝗑_i^(j),sh(t)=position of i-th particle of level j after t steps of the shuffle.Recall that particles are ordered so that (𝗑_1^(j),sh(t),…,𝗑_j^(j),sh(t))∈𝕎_j for any j ≤ t ≤ N. Also, observe that for t<j the particles at level j do not come into play yet as the Aztec diamond graphs involved are of size less than j. Finally, observe that the initial condition, for level j at time j, is given by𝗑_i^(j),sh(j)=i-1,1≤ i ≤ j. Consider the sequential-update push-block Bernoulli dynamics in 𝕀𝔸_N from Definition <ref>, except that a particle at space location x, at level n, at time t has the jump probability ρ_𝒰ℛ_t+n^N(𝒲)(x,n) instead. Starting with the last time the jump probabilities for a certain level are well-defined, namely for level n at time t=N-n, we stop/freeze the particles on that level for all subsequent times. Then, this defines a stochastic process (𝖸_i^(j)(t);0≤ t ≤ N-j)_1≤ i ≤ j, 1 ≤ j ≤ N such that for any 1≤ n≤ N and all 0≤ t ≤ N-n, (𝖸_i^(j)(t))_1≤ i ≤ j, 1 ≤ j ≤ n is in 𝕀𝔸_n. We then prove the following proposition by combining the results of Nordenstam <cit.> and Propp <cit.>. Let N≥ 1. Let 𝒲 be a weighting of 𝖠𝖦_N. Let 𝖸_i^(j)(t) and 𝗑_i^(j),sh(t) be as above. Then, we have the following equality in distribution, jointly in all involved indices,(𝖸_i^(j)(t-j);1≤ j ≤ N, 1 ≤ i ≤ j, j≤ t ≤ N) d=(𝗑_i^(j),sh(t);1≤ j ≤ N, 1 ≤ i ≤ j, j≤ t ≤ N) . The fact that the shuffling algorithm induces the sequential-update push-block dynamics under a different time-shift for each level is explained in Section 3 of <cit.>. There this exact proposition is proven in the case of the uniform weighting. However, the interactions between particles, which correspond to items (1), (2), (3) in the description of the algorithm, are exactly the same for all weightings 𝒲. The only thing that changes are the probabilities, corresponding to item (4) in the description of the algorithm, of covering an empty square by a west-east or north-south dimer pair. These correspond to a particle jumping by one to the right or staying put respectively. For 𝗑_i^(j),sh(t)=x these probabilities are given by ρ_𝒰ℛ_t^N(𝒲)(x,j) and 1-ρ_𝒰ℛ_t^N(𝒲)(x,j) respectively which by virtue of the time-shift give the jump probabilities for 𝖸_i^(j)(t).The equality can be made an almost sure equality by taking the Bernoulli random variables driving the dynamics of 𝖸_i^(j)(t-j) and 𝗑_i^(j),sh(t) to be the same.§.§ The shuffle as a Markov chain and consistent weightings Observe that, given N≥ 1 and a weighting 𝒲 of 𝖠𝖦_N, we can view the shuffling algorithm as a certain Markov chain with discrete time 0≤ n ≤ N and varying state spaces 𝖣𝖢_n with target marginal distribution at time N given by ℙ_𝒲^(N). In particular, we have a sequence of Markov transition kernels 𝖲𝗁_n-1,n^N,𝒲 from 𝖣𝖢_n-1 to 𝖣𝖢_n such that,ℙ_𝒰ℛ_n-1^N(𝒲)^(n-1)𝖲𝗁_n-1,n^N,𝒲=ℙ_𝒰ℛ_n^N(𝒲)^(n) ,for1≤ n ≤ N.We note moreover that 𝖲𝗁_n-1,n^N,𝒲 only depends on 𝒰ℛ_n^N(𝒲) through the square probabilities ρ_𝒰ℛ_n^N(𝒲)(x,k) for 0≤ x ≤ n-1 and 1≤ k ≤ n. We would now like to extend the above to N=∞. The following definition gives the key notion. We say that a sequence of weightings (𝒲^(k))_k≥ 1 on (𝖠𝖦_k)_k≥ 1 is consistent if for all k≥ 1, 𝒲^(k) is gauge-equivalent to 𝒰ℛ_k^k+1(𝒲^(k+1)). As far as we can tell, a classification of consistent weightings of Aztec diamonds has not been written down explicitly in the literature. 
It boils down to the study of the urban renewal (or spider move) transformations (𝒰ℛ_n^n+1)_n≥ 1 viewed as a dynamical system and thus it should be related to the works <cit.>. It is easy to check that if 𝒲 and 𝒲̃ are gauge-equivalent weightings of 𝖠𝖦_n then 𝒰ℛ^n_n-1(𝒲)and 𝒰ℛ^n_n-1(𝒲̃)are gauge-equivalent.In particular, we obtain that for a consistent sequence (𝒲^(k))_k≥ 1, for N,M≥ n, the weightings 𝒰ℛ_n^N(𝒲^(N)) and 𝒰ℛ_n^M(𝒲^(M)),are both gauge-equivalent to 𝒲^(n). Thus, by virtue of Lemma <ref>, both the corresponding measures on 𝖣𝖢_n and square probabilities are equal:ℙ^(n)_𝒰ℛ_n^N(𝒲^(N))=ℙ^(n)_𝒰ℛ_n^M(𝒲^(M))=ℙ^(n)_𝒲^(n) and ρ_𝒰ℛ_n^N(𝒲^(N))=ρ_𝒰ℛ_n^M(𝒲^(M))=ρ_𝒲^(n). In particular, we have that for any N,M ≥ n, 𝖲𝗁_n-1,n^N,𝒲^(N)=𝖲𝗁_n-1,n^M,𝒲^(M) and so we can define, for all n≥ 1, transition kernels 𝖲𝗁_n-1,n^(𝒲^(k))_k≥ 1 from 𝖣𝖢_n-1 to 𝖣𝖢_n such that 𝖲𝗁_n-1,n^(𝒲^(k))_k≥ 1=𝖲𝗁_n-1,n^N,𝒲^(N), forN ≥ n, and ℙ_𝒲^(n-1)^(n-1)𝖲𝗁_n-1,n^(𝒲^(k))_k≥ 1=ℙ_𝒲^(n)^(n),for alln ≥ 1.Thus, we can couple all the ℙ_𝒲^(n)^(n), for n≥ 1, in a natural way as marginals of the trajectory of the shuffle Markov chain. To give a formal statement, let us write ∏_N=1^∞𝖣𝖢_N for the path space and for any m≥ 1 and indices n_1<n_2<⋯<n_m, write π_n_1,…,n_m:∏_N=1^∞𝖣𝖢_N→𝖣𝖢_n_1×⋯×𝖣𝖢_n_m for the obvious projection map. Kolmogorov's extension theorem then gives the following.Let(𝒲^(k))_k≥ 1 be a consistent sequence of weightings on (𝖠𝖦_k)_k≥ 1. Then, there exists a unique probability measure 𝖯𝖬_(𝒲^(k))_k≥ 1 on ∏_N=1^∞𝖣𝖢_N such that for any m≥ 1 and indices n_1≤…≤ n_m,(π_n_1,…,n_m)_*𝖯𝖬_(𝒲^(k))_k≥ 1(𝔡^(n_1),𝔡^(n_2),…,𝔡^(n_m))=ℙ_𝒲^(n_1)^(n_1)(𝔡^(n_1))𝖲𝗁_n_1,n_2^(𝒲^(k))_k≥ 1(𝔡^(n_1),𝔡^(n_2))⋯𝖲𝗁_n_m-1,n_m^(𝒲^(k))_k≥ 1(𝔡^(n_m-1),𝔡^(n_m)).Moreover, since for consistent weightings (𝒲^(k))_k≥ 1, we have ρ_𝒰ℛ_t+n^N(𝒲^(N))=ρ_𝒲^(t+n), we can extend, for all N≥ 1, the sequential-update push-block Bernoulli dynamics process(𝖸_i^(j)(t);0≤ t ≤ N-j)_1≤ i ≤ j, 1 ≤ j ≤ N from Definition <ref> to all times t≥ 0 to obtain a process (𝖸_i^(j)(t);t≥ 0)_1≤ i ≤ j, j≥ 1 in 𝕀𝔸_∞. Observe that, this process has jump probabilities at space location x, at level n, at time t given by ρ_𝒲^(t+n)(x,n). Hence, we obtain the following extension of Proposition <ref> in the special case of consistent weights.Let(𝒲^(k))_k≥ 1 be a consistent sequence of weightings on (𝖠𝖦_k)_k≥ 1. Consider the coupling of the ℙ_𝒲^(k)^(k) obtained by the shuffling algorithm from Proposition <ref>. Let 𝗑_i^(j),sh(t) be the particle locations as in (<ref>) and 𝖸_i^(j)(t) as in the above paragraph. Then, we have the equality in distribution(𝖸_i^(j)(t-j);j ∈ℕ, 1 ≤ i ≤ j, t≥ j) d=(𝗑_i^(j),sh(t);j∈ℕ, 1 ≤ i ≤ j, t ≥ j) .We now give an application of the above framework, in combination with our results from previous sections, to prove a reformulation of Theorem <ref>.Given two sequences 𝐳^(1)=(z^(1)_x)_x∈ℤ_+, 𝐳^(2)=(z^(2)_x)_x∈ℤ_+∈ (0,∞)^ℤ_+ we define for each k≥ 1 a weighting 𝒲^(k),𝐳^(1),𝐳^(2) on 𝖠𝖦_k as follows, see Figure <ref> for an illustration,𝒲_e,(x,n)^(k),𝐳^(1),𝐳^(2) =𝒲_n,(x,n)^(k),𝐳^(1),𝐳^(2)=1, 𝒲_w,(x,n)^(k),𝐳^(1),𝐳^(2) =z^(1)_x, 𝒲_s,(x,n)^(k),𝐳^(1),𝐳^(2) =z^(2)_x. Recall that this is the weighting from the introduction viewed in terms of the dimers instead of dominoes. The following lemma is easy to show.Assume that the sequences (𝐳^(1),𝐳^(2)), (𝐳̃^(1),𝐳̃^(2)) ∈ (0,∞)^ℤ_+× (0,∞)^ℤ_+ satisfyz_x^(1)/z_x^(1)+z_x^(2)=z̃_x^(1)/z̃_x^(1)+z̃_x^(2),for allx ∈ℤ_+.Then, for all k≥ 1, the weightings 𝒲^(k),𝐳^(1),𝐳^(2) and 𝒲^(k),𝐳̃^(1),𝐳̃^(2)are gauge-equivalent.Fix n≥ 1. 
Observe that, for all x∈ℤ_+, z_x^(1)/z̃_̃x̃^(1)=z_x^(2)/z̃_x^(2)=r_x, for some r_x∈ (0,∞). Then, start with 𝒲^(k),𝐳^(1),𝐳^(2) and first apply a gauge-transformation of multiplication by r_n-1^-1 at each vertex joining the squares (n-2,k) and (n-1,k), for k=1,…, n. Observe that, all these vertices lie on a diagonal of slope 135 degrees. Then, apply a gauge transformation of multiplication by r_n-1 along the vertices of the next diagonal with the same slope (the diagonal connecting vertices of the squares (n-2,k) for k=1,…,n) and so on. In the end, we obtain 𝒲^(k),𝐳̃^(1),𝐳̃^(2). We can then define 𝒲^(k),𝐚 to be 𝒲^(k),𝐳^(1),𝐳^(2) for any fixed choice of 𝐳^(1) and 𝐳^(2) such that the ratio in (<ref>) is equal to a_x, for all x∈ℤ_+, since from a probabilistic standpoint they give rise to the same models, namely for any such choice of 𝐳^(1),𝐳^(2) and all k≥ 1,ℙ^(k)_𝒲^(k),𝐚=ℙ^(k)_𝒲^(k),𝐳^(1),𝐳^(2).For simplicity we could take 𝐳^(1)=𝐚 and 𝐳^(2)=1^ℤ_+. We then have the following theorem, a reformulation of Theorem <ref> from the introduction. Let 𝐚 be such that inf_k∈ℤ_+a_k>0 and sup_k∈ℤ_+a_k<1. Consider theprobability measures ℙ^(k)_𝒲^(k),𝐚 on 𝖣𝖢_k associated to the weighting 𝒲^(k),𝐚 defined above. Then, there exists a coupling of the ℙ^(k)_𝒲^(k),𝐚 such that the following happens. If we denote by 𝗑_i^(j)(m), for m≥ j, the location of the i-th south or east dimer(equivalently particle) on level j of the random element of 𝖣𝖢_m distributed according to ℙ^(m)_𝒲^(m),𝐚 in this coupling, then jointly for all N≥ 1, the discrete-time stochastic process (𝗑_1^(N)(t+N),𝗑_2^(N)(t+N),…,𝗑_N^(N)(t+N);t ≥ 0)evolves as a Markov process in 𝕎_N, starting from (0,1,…,N-1), with transition probabilities from time t_1 to time t_2 given by 𝔓^(N)_(1-z)^t_2-t_1. In particular, for any n≥ 1 and pairwise distinct time-space points (t_1,x_1),…,(t_n,x_n) in ℤ_+×ℤ_+,ℙ(∃ j_1,…,j_nsuch that 𝗑_j_i^(N)(t_i+N)=x_ifor1 ≤ i ≤ n)= (𝒦_N[(t_i,x_i);(t_j,x_j)])_i,j=1^nwhere f_s,t(z)=(1-z)^t-s in the definition of 𝒦_N from (<ref>).It is easy to check, by an analogous argument to the proof of Lemma <ref>, that for all n≥ 1, 𝒰ℛ_n^n+1(𝒲^(n+1),𝐚) and 𝒲^(n),𝐚 are gauge-equivalent. Hence, the sequence of weightings (𝒲^(k),𝐚)_k≥ 1 is consistent and consider the shuffling algorithm coupling from Proposition <ref>. Moreover, observe that ρ_𝒲^(t+n),𝐚(x,n)=a_x. Thus, the process (𝖸_i^(j)(t);t≥ 0)_1≤ i ≤ j, 1 ≤ j ≤ N follows the inhomogeneous in space (but homogeneous in time and levels) dynamics in 𝕀𝔸_N we studied earlier in the paper. Hence, by virtue of Proposition <ref> and Theorem <ref> we obtain the desired statement. § LINE ENSEMBLES WITH FIXED STARTING AND FINAL POSITIONS IN INHOMOGENEOUS SPACE In this section we consider inhomogeneous Toeplitz-like matrices with matrix-valued symbols 𝐟 and extend some of the results of <cit.>. In particular, this allows us to study random walks with fixed starting and final positions, see Section <ref> for motivation.Beyond the definition of an inhomogeneous Toeoplitz-like matrix and the basic composition property which naturally extends to the matrix symbol setting, see Lemma <ref>, this section is largely independent of the theory developed previously. What would be very interesting would be to extend some of our more probabilistic results, such as the intertwined semigroups and couplings, to the matrix symbol 𝐟 setting. 
We believe that analogues should exist (see also the discussion at the end of Section <ref>) but the naive guess of simply plugging in a matrix-valued 𝐟 in the relevant formulae, as far as we can tell, does not work. We leave this investigation for future work. We begin with the main definition. Let 𝔭≥ 1. Let 𝐟 be a 𝔭×𝔭 matrix-valued function such that all its entries belong to 𝖧𝗈𝗅(ℍ_-ϵ) for ϵ>0. Define the inhomogeneous Toeplitz-like matrix [𝖳_𝐟(x,y)]_x,y∈ℤ_+ associated to the matrix symbol 𝐟 by, with x,y∈ℤ_+:𝖳_𝐟(x,y) =-1/2πi1/a_m∮_𝖢_𝐚p_k(w)/p_m+1(w)𝐟(w)_ijdw, ifx=k𝔭+i, y=m𝔭+j, fori,j=0,…,𝔭-1andm,k∈ℤ_+. Observe that, for 𝔭=1 this boils down to the inhomogeneous Toeplitz-like matrices from Section <ref>, while for general 𝔭, but in the homogeneous case a_x≡ 1, this is the (block) Toeplitz matrix associated to the matrix symbol 𝐟(1-z). As in the scalar case, more general functions 𝐟 could be considered but we restrict to the above class for simplicity. Now, suppose that we are given L≥ 2, 𝔭×𝔭 matrix-valued functions 𝐟_0(z),…,𝐟_L-1(z). We assume that all their entries belong to 𝖧𝗈𝗅(ℍ_-R) for R>R(𝐚).As mentioned in the introductory part, for this section it will be more convenient to index matrix entries and co-ordinates of configurations in 𝕎_𝔭N starting from 0 instead of 1. Let M∈ℤ_+ be fixed. We consider a probability measure on (L-1) copies of 𝕎_𝔭N of the form1/Z(𝖳_𝐟_0(j,x_k^(1)))_j,k=0^𝔭N-1∏_r=1^L-2(𝖳_𝐟_r(x_j^(r),x_k^(r+1)))_j,k=0^𝔭N-1(𝖳_𝐟_L-1(x_j^(L-1),𝔭M+k))_j,k=0^𝔭N-1,where Z is a certain strictly positive normalization constant so that the above measure is a probability measure. We stress that positivity of (<ref>) is part of our assumption.We associate to the probability measure (<ref>) a stochastic process:(𝖷_0^N,𝔭,L,M(t), 𝖷_1^N,𝔭,L,M(t),…,𝖷_𝔭N-1^N,𝔭,L,M(t); t=1,…,L-1),which records the particle positions, such that (𝖷_0^N,𝔭,L,M(t),𝖷_1^N,𝔭,L,M(t),…,𝖷_𝔭N-1^N,𝔭,L,M(t))∈𝕎_𝔭N, for t=1,…,L-1.Moreover, we can extend the definition of this stochastic process to times t=0 and t=L by taking:(𝖷_0^N,𝔭,L,M(0),𝖷_1^N,𝔭,L,M(0),…,𝖷_𝔭N-1^N,𝔭,L,M(0)) =(0,1,…,𝔭N-1),(𝖷_0^N,𝔭,L,M(L),𝖷_1^N,𝔭,L,M(L),…,𝖷_𝔭N-1^N,𝔭,L,M(L)) =(𝔭M,𝔭M+1,…,𝔭(N+M)-1).Thus, by connecting the points 𝖷_i^N,𝔭,L,M(0), 𝖷_i^N,𝔭,L,M(1),…,𝖷_i^N,𝔭,L,M(L) by straight lines, for each i=0,…, 𝔭N-1, we can think of the measure (<ref>) as giving rise to a line ensemble with 𝔭N lines with fixed starting and final positions at times t=0 and t=L respectively, see Figure <ref> for an illustration. Define the function 𝐟_r,r' for r<r' by 𝐟_r,r'(z)=[𝐟_r𝐟_r+1⋯𝐟_r'-1](z) and the complex matrix-valued measure 𝐌(z)=𝐌(z;𝐚), which also depends on L and M+N but we suppress this in the notation, by 𝐌(z;𝐚)=[𝐟_0𝐟_1⋯𝐟_L-1](z)/p_M+N(z;𝐚). Let 𝐀^t denote the transpose of a matrix 𝐀 and let 0_𝔭 and 𝐈_𝔭 be the zero and identity 𝔭×𝔭 matrices respectively. We have the following theorem which generalises the results of Section 4 of <cit.> to a_x1. Let M≥ 0,𝔭≥ 1, L≥ 2. Suppose that the matrix-valued functions 𝐟_0,…,𝐟_L-1 have entries in 𝖧𝗈𝗅(ℍ_-R) for R>R(𝐚). Assume that the expression (<ref>) defines a probability measure on (L-1) copies of 𝕎_𝔭N. Denote by the corresponding stochastic processes of non-intersecting paths. 
Then, this is described by a determinantal point process, namely for any n≥ 1 and pairwise distinct time-space points (t_1,x_1),…,(t_n,x_n) in 1,L-1×ℤ_+, we haveℙ(∃ j_1,…,j_nsuch that 𝖷_j_i^N,𝔭,L,M(t_i)=x_ifori=1,…,n)=(𝖪_N[(t_i,x_i);(t_j,x_j)])_i,j=1^n,where the correlation kernel 𝖪_N given by[𝖪_N[(r,m𝔭+j);(r',k𝔭+i)]]_i,j=0^𝔭-1=1_r>r'1/2πi1/a_m∮_𝖢_𝐚𝐟_r',r(z)p_k(z)/p_m+1(z)dz-1/(2πi)^21/a_k∮_𝖢_𝐚∮_𝖢_𝐚𝐟_r',L(w)𝐑_N(w,z)𝐟_0,r(z)p_k(w)/p_M+N(w)p_m+1(z)dz dw.Here, the 𝔭×𝔭 matrix-valued function 𝐑_N(w,z) is given by 𝐑_N(z,w)=1/z-w[ 0_𝔭 𝐈_𝔭 ]𝐘^-1(w)𝐘(z)[ 𝐈_𝔭; 0_𝔭 ]with the 2𝔭× 2 𝔭 matrix-valued function 𝐘:ℂ∖𝖢_𝐚→ℂ^2𝔭×2𝔭 being the unique solution to the following Riemann-Hilbert problem (RHP) of size 2𝔭× 2𝔭: * 𝐘 is analytic,* 𝐘_+(z)=𝐘_-(z)[𝐈_𝔭 𝐌(z;𝐚);0_𝔭𝐈_𝔭 ] on 𝖢_𝐚, where 𝐘_+ is the limit of 𝐘(z) from inside 𝖢_𝐚 and 𝐘_- the limit of 𝐘(z) from outside 𝖢_𝐚 respectively,* 𝐘(z)=(𝐈_2𝔭+𝒪(z^-1))[ z^N 𝐈_𝔭 0_𝔭; 0_𝔭 z^-N𝐈_𝔭 ] as z→∞. We observe that for a_x≡ 1, and after the change of variables z↦ 1-z, w↦ 1-w Theorem <ref> specialises to Theorem 4.7 of <cit.> with the identifications, in the notations of <cit.>, 𝐟_r',r(1-z)=A_r',r(z), 1-𝖢_𝐚=γ and 𝐑_N(1-w,1-z)=-R_N(w,z).Positivity of (<ref>) is in some sense not strictly necessary as long as the normalisation constant Z≠ 0. One then has a signed measure of total mass 1 on point configurations (but there is no associated stochastic process of course). We can still define correlation functions for such signed measures, see <cit.>. The analysis that follows essentially goes through and the conclusions of Theorem <ref> on explicit determinantal correlations remain valid. Using the above theorem and some Riemann-Hilbert problem asymptotic analysis, under a factorisation assumption for 𝐟_0,L(z) and some assumptions on the sequence 𝐚, we obtain the following limit theorem for the bottom paths as N→∞. This generalises Theorem 3.1 of <cit.> to a_x≠ 1. Let us define 𝔄_η, with η∈ (0,1), to be the annulus 𝔄_η={z∈ℂ:η < |z|<η^-1} about 0.Assume that the parameter sequence 𝐚 satisfiesinf_x∈ℤ_+a_x≥1-𝔠 and sup_x∈ℤ_+a_x≤1+𝔠,for some 0≤𝔠 <1/3. Suppose that the matrix-valued functions 𝐟_0(z),…,𝐟_L-1(z) have entries in 𝖧𝗈𝗅(ℍ_-R) for R>R(𝐚) and moreover that the functions 𝐟_0(1-z)^± 1,…, 𝐟_L-1(1-z)^± 1 are analytic in an annulus 𝔄_η where η <1-2𝔠. Finally, assume we have the factorisations 𝐟_0,L(1-z)=𝐒_+(z)𝐒_-(z)=𝐒̃_-(z)𝐒̃_+(z), where 𝐒_±, 𝐒̃_± are 𝔭×𝔭 matrix-valued functions satisfying* 𝐒_+^± 1(z), 𝐒̃_+^± 1(z) are analytic for |z|<1 and continuous for |z|≤ 1,* 𝐒_-^± 1(z),𝐒̃_-^± 1(z) are analytic for |z|>1 and continuous for |z|≥ 1,* 𝐒_-(z)∼ z^M𝐈_𝔭 and 𝐒̃_-(z)∼ z^M𝐈_𝔭 as z→∞.Consider the line ensemble (𝖷_0^N,𝔭,L,M(t), 𝖷_1^N,𝔭,L,M(t),…,𝖷_𝔭N-1^N,𝔭,L,M(t); t=1,…,L-1)associated to (<ref>). Then, for any m≥ 1, we have the following convergence in distribution for the bottom m lines in the line ensemble as N →∞, with m≤𝔭N,(𝖷_0^N,𝔭,L,M(t), …,𝖷_m-1^N,𝔭,L,M(t);1≤ t ≤ L-1)d⟶(𝖷_0^∞,𝔭,L,M(t),…,𝖷_m-1^∞,𝔭,L,M(t); 1≤ t ≤ L-1),where the limiting line ensemble ((𝖷_i^∞,𝔭,L,M(t))_i=0^∞; t=1,…,L-1) is again determined through its determinantal correlation functions: for any n≥ 1 and pairwise distinct time-space points (t_1,x_1),…,(t_n,x_n) in 1, L-1 ×ℤ_+ we have ℙ(∃ j_1,…,j_nsuch that 𝖷_j_i^∞,𝔭,L,M(t_i)=x_ifori=1,…,n)=(𝖪_∞[(t_i,x_i);(t_j,x_j)])_i,j=1^n,where the kernel 𝖪_∞ is given by:[𝖪_∞[(r,m𝔭+j);(r',k𝔭+i)]]_i,j=0^𝔭-1=-1_r>r'1/2πi1/a_m∮_|z|=1𝐟_r',r(1-z)p_k(1-z)/p_m+1(1-z)dz-1/(2πi)^21/a_k∮_|z|=1^-∮_|w|=1^+𝐟_r',L(1-w)𝐒_-^-1(w)𝐒_+^-1(z)𝐟_0,r(1-z)p_k(1-w)/p_m+1(1-z)dz dw/z-w. 
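As a concrete illustration of the basic building block entering both the measure above and the kernels 𝖪_N and 𝖪_∞, the following is a minimal numerical sketch (purely illustrative, not part of the paper) of the scalar case 𝔭=1 of the inhomogeneous Toeplitz-like matrices 𝖳_𝐟. It assumes p_k(w)=∏_{j=0}^{k-1}(1-w/a_j), consistent with the ratio expansions used later in the text, and takes 𝖢_𝐚 to be a circle enclosing all the a_j; the symbols chosen below, e^{-tw} and (1+β w)^{-1}, are of the form appearing in the explicit example at the end of this section, but all numerical values are arbitrary choices made here. The script checks that 𝖳_1 is the identity and that the composition property 𝖳_𝐟𝖳_𝐠=𝖳_𝐟𝐠 (stated as a lemma below) holds on a finite truncation; the truncation is harmless here because 𝖳_𝐟(k,m) vanishes for k>m whenever 𝐟 is analytic inside 𝖢_𝐚, so the truncated product agrees entry by entry with the semi-infinite one.

```python
# Minimal numerical sketch of the scalar (p = 1) inhomogeneous Toeplitz-like matrices.
import numpy as np

rng = np.random.default_rng(0)
N = 8                                        # truncation size
a = 1.0 + 0.2 * (rng.random(N) - 0.5)        # inhomogeneity parameters a_x in (0.9, 1.1)

def p(k, w):
    """Characteristic-type polynomial p_k(w) = prod_{j=0}^{k-1} (1 - w / a_j)."""
    out = np.ones_like(w, dtype=complex)
    for j in range(k):
        out = out * (1.0 - w / a[j])
    return out

# contour C_a: circle of radius 0.6 around 1, enclosing every a_j but no pole of f or g
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
z = 1.0 + 0.6 * np.exp(1j * theta)
dz = 1j * 0.6 * np.exp(1j * theta) * (theta[1] - theta[0])

def T(f):
    """Truncated matrix [T_f(k, m)]_{k, m < N} via numerical contour integration."""
    M = np.zeros((N, N), dtype=complex)
    for k in range(N):
        for m in range(N):
            M[k, m] = -np.sum(p(k, z) * f(z) / p(m + 1, z) * dz) / (2j * np.pi * a[m])
    return M

f = lambda w: np.exp(-0.7 * w)               # entire symbol (pure-birth type factor)
g = lambda w: 1.0 / (1.0 + 0.4 * w)          # geometric-type factor, pole at w = -2.5 outside C_a

print(np.max(np.abs(T(lambda w: np.ones_like(w)) - np.eye(N))))    # T_1 = identity
print(np.max(np.abs(T(f) @ T(g) - T(lambda w: f(w) * g(w)))))      # T_f T_g = T_fg
```

Both printed numbers should be at the level of the quadrature error of the contour discretization.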
Although the functions 𝐒̃_-, 𝐒̃_+ (for 𝔭=1 these can simply be picked the same as 𝐒_-, 𝐒_+) do not appear in the limiting kernel 𝖪_∞, they are required for the proof. The proof unsurprisingly boils down to computing the asymptotics of 𝐑_N(w,z) as N→∞. Note that, along with the polynomial p_M+N(·) this is the only N-dependent quantity in 𝖪_N. In <cit.> the authors also consider a limit theorem for the top paths in the line ensemble (in this case 𝐒̃_-, 𝐒̃_+ appear in the limiting kernel instead). In our case, if we consider the same scaling, we would need to enforce the inhomogeneity parameters 𝐚 to be asymptotically constant to get a limit (which essentially puts us back into the setting of <cit.>). It is plausible that interesting limit theorems exist for the top paths for general 𝐚 but under a different scaling. We leave this for future work. We first prove Theorem <ref>. The following lemma is essential for computations.Let 𝐟,𝐠 be 𝔭×𝔭 matrix-valued functions such that all their entries belong to 𝖧𝗈𝗅(ℍ_-R), for R>R(𝐚). Then, we have 𝖳_𝐟𝖳_𝐠=𝖳_𝐟𝐠. The proof is a straightforward extension of the proof of Lemma <ref>.One may hope that Lemmas <ref> and <ref> have analogous straightforward extensions for matrix-valued functions 𝐟 but as far as we can tell this is not the case. By virtue of Lemma <ref> we obtain that convolutions of the 𝖳_𝐟_r operators satisfy[𝖳_𝐟_r𝖳_𝐟_r+1⋯𝖳_𝐟_r'-1(k𝔭+i,m𝔭+j)]_i,j=0^𝔭-1=[𝖳_𝐟_r,r'(k𝔭+i,m𝔭+j)]_i,j=0^𝔭-1.Let 𝐆=(𝐆_κμ)_κ, μ=0^𝔭N-1 be the so-called Gram matrix associated to this problem given by, by virtue of Lemma <ref>, 𝐆_κμ=𝖳_𝐟_0𝖳_𝐟_1…𝖳_𝐟_L-1(κ,𝔭M+μ)=𝖳_𝐟_0,L(κ,𝔭M+μ).Then, note that (𝐆)=Z>0, by our assumption that (<ref>) is a well-defined probability measure, and thus 𝐆 is invertible. Moreover, observe that we can write𝐆_κμ =-1/2πi1/a_M+m∮_𝖢_𝐚 p_k(z)𝐌(z)_ijp_M+N(z)/p_M+m+1(z)dzif κ=k𝔭+i, μ=m𝔭+j, fori,j=0,…,𝔭-1andk,m=0,…,N-1.Now, given invertible 𝔭N×𝔭N matrices 𝐏,𝐐 we define (slightly abusing notation) 𝔭×𝔭 matrix-valued orthogonal polynomials 𝐏_j,𝐐_j, for j=0,…,N-1, of degree at most N-1 (namely their entries have degrees at most N-1) as follows:[ 𝐏_0(z); 𝐏_1(z);⋮; 𝐏_N-1(z);]=𝐏[ 𝐈_𝔭; p_1(z)𝐈_𝔭; ⋮; p_N-1(z)𝐈_𝔭; ], [ 𝐐_0(z); 𝐐_1(z);⋮; 𝐐_N-1(z);]=𝐐[ -1/a_Mp_M+N(z)/p_M+1(z)𝐈_𝔭; -1/a_M+1p_M+N(z)/p_M+2(z)𝐈_𝔭;⋮;-1/a_M+N-1𝐈_𝔭;]. The following result relates inverting the matrix 𝐆 to finding biorthogonal matrix-valued polynomials. Suppose 𝐏,𝐐 are invertible 𝔭N×𝔭N matrices and define 𝐏_j,𝐐_j matrix-valued polynomials as above. Let 𝐆 be the Gram matrix defined in (<ref>). Then, the following are equivalent* 𝐆^-1=𝐐^t𝐏* For j,k=0,…,N-1, 1/2πi∮_𝖢_𝐚𝐏_j(z)𝐌(z)𝐐_k^t(z)dz=1_j=k𝐈_𝔭. We adapt the proof of Proposition 4.5 in <cit.>. The key is to consider the matrix𝐗=1/2πi∮_𝖢_𝐚[ 𝐏_0(z); 𝐏_1(z);⋮; 𝐏_N-1(z);]𝐌(z) [ 𝐐^t_0(z)𝐐^t_1(z) ⋯𝐐^t_N-1(z) ]dzand observe that 𝐏^-1𝐗𝐐^-t=𝐆 from which the conclusion follows. For any factorization𝐆^-1=𝐐^t𝐏 we define the reproducing kernel 𝐑_N(w,z), which we will show in the sequel, in the proof of Theorem <ref>, that it can be written as (<ref>), by𝐑_N(w,z)=∑_j=0^N-1𝐐_j^t(w)𝐏_j(z).Observe that, 𝐑_N(w,z) is independent of the choice of factorization 𝐐^t𝐏 since 𝐑_N(w,z)=[ -1/a_Mp_M+N(w)/p_M+1(w)𝐈_𝔭-1/a_M+1p_M+N(w)/p_M+2(w)𝐈_𝔭⋯ -1/a_M+N-1𝐈_𝔭; ]𝐆^-1[ 𝐈_𝔭; p_1(z)𝐈_𝔭; ⋮; p_N-1(z)𝐈_𝔭; ].The following proposition gives thereproducing property of 𝐑_N(w,z) which also characterises it. 
For any matrix-valued polynomial 𝐒(z) of degree at most N-1 we have1/2πi∮_𝖢_𝐚𝐒(w)𝐌(w)𝐑_N(w,z)dw =𝐒(z), 1/2πi∮_𝖢_𝐚𝐑_N(w,z)𝐌(z)𝐒^t(z)dz =𝐒^t(w).Moreover, any bivariate polynomial of degree at most N-1 satisfying either equality above for all matrix-valued polynomials 𝐒(z) of degree at most N-1 must be equal to 𝐑_N(w,z).The proof is a word for word adaptation of the proof of Lemma 4.6 in <cit.>. We finally need the following.There is a unique monic matrix-valued polynomial 𝐏_N:𝐏_N(z)=z^N𝐈_𝔭+⋯of degree N such that 1/2πi∮_𝖢_𝐚𝐏_N(z)𝐌(z)z^kdz=0_𝔭, k=0,…,N-1. We expand (note that we can write z^j as a linear combination of p_0(z),…,p_j(z), see also the proof of Proposition <ref>)𝐏_N(z)=𝐈_𝔭z^N+∑_j=0^N-1𝐂_j p_j(z)for some unknown 𝔭×𝔭 matrices 𝐂_j that we would like to solve for. Showing that this is possible proves the proposition. Plugging in the above expansion in the orthogonality relation we obtain the following equations for 𝐂_j,∑_j=0^N-11/2πi𝐂_j ∮_𝖢_𝐚p_j(z)𝐌(z)z^k dz=-1/2πi∮_𝖢_𝐚z^N𝐌(z)z^k dz, k=0,…,N-1.Let us write 𝐔 for the 𝔭N ×𝔭 N matrix having j-th, k-th block, for j,k=0,…,N-1, given by the 𝔭×𝔭 matrix1/2πi∮_𝖢_𝐚p_j(z)𝐌(z)z^k dz.If we show that 𝐔 is invertible then we are done since by inverting it we can solve for 𝐂_j. Recall that 𝐆 is a 𝔭N ×𝔭N matrix having j-th, k-th block, for j,k=0,…,N-1, given by the 𝔭×𝔭 matrix1/2πi∮_𝖢_𝐚p_j(z)𝐌(z) (-1/a_M+k+1)p_M+N(z)/p_M+k+1(z)dzand it is invertible, by our assumption that (<ref>) is a well-defined probability measure. Let us write 𝐕 for the 𝔭N ×𝔭N change of basis matrix given by[ 𝐈_𝔭z𝐈_𝔭⋯ z^N-1𝐈_𝔭; ]=[ -1/a_Mp_M+N(z)/p_M+1(z)𝐈_𝔭-1/a_M+1p_M+N(z)/p_M+2(z)𝐈_𝔭⋯ -1/a_M+N-1𝐈_𝔭; ]𝐕.Clearly 𝐕 is invertible. Thus, since we can write 𝐔=𝐆𝐕 the conclusion follows. By the Eynard-Mehta theorem, for example in the form presented in <cit.>, we have that the induced point process from (<ref>) is determinantal and the correlation kernel is given by𝖪_N[(r,x);(r',y)]=-1_r>r'𝖳_𝐟_r',r(y,x)+∑_κ,μ=0^𝔭N-1𝖳_𝐟_0,r(κ,x)𝐆^-t_κμ𝖳_𝐟_r',L(y,𝔭M+μ). Then, the first term (more precisely the i,j-th coordinate) is easily seen to be given by the first term in the expression in the statement of the theorem. So let us focus on the second term involving the sum over κ,μ which we denote by 𝖪̃_N. Let us write κ=𝔭ν_1+δ_1 and μ=𝔭ν_2+δ_2. Then, this term becomes𝖪̃_N[(r,m𝔭+j);(r',k𝔭+i)]=∑_ν_1,ν_2=0^N-1∑_δ_1,δ_2=0^𝔭-1𝖳_𝐟_r',L(k𝔭+i,𝔭M+𝔭ν_2+δ_2)𝐆^-1_𝔭ν_2+δ_2,𝔭ν_1+δ_1𝖳_𝐟_0,r(𝔭ν_1+δ_1,m𝔭+j).We can write this in block matrix form as follows [𝖪̃_N[(r,m𝔭+j);(r',k𝔭+i)]]_i,j=0^𝔭-1=[-1/2πi1/a_M+ν_2∮_𝖢_𝐚𝐟_r',L(w)p_k(w)/p_M+ν_2+1(w)dw]_ν_2=0^N-1𝐆^-1[-1/2πi1/a_m∮_𝖢_𝐚𝐟_0,r(z)p_ν_1(z)/p_m+1(z)dz]_ν_1=0^N-1,where the first factor above is a block row vector having length N with 𝔭×𝔭 blocks and the second factor a block column vector of the corresponding size. Combining the integrals and recalling the expression for 𝐑_N(w,z) from (<ref>) gives the expression of the correlation kernel in the statement of the theorem. It remains to show that 𝐑_N(w,z) can also be written as (<ref>). 
Towards this end, following <cit.>, we note that the size 2𝔭× 2𝔭 Riemann-Hilbert problem in the statement of the theorem for 𝐘(z) has a unique solution given by: 𝐘(z)= [ 𝐏_N(z) 1/2πi∮_𝖢_𝐚𝐏_N(s)𝐌(s)/s-zds; 𝐐_N-1(z) 1/2πi∮_𝖢_𝐚𝐐_N-1(s)𝐌(s)/s-zds ], z ∈ℂ∖𝖢_𝐚,where 𝐏_N(z) is the monic matrix-valued polynomial from Proposition <ref> and 𝐐_N-1 is the matrix-valued polynomial of degree at most N-1 satisfying1/2πi∮_𝖢_𝐚𝐐_N-1(z)𝐌(z)z^k dz= 0_𝔭,k=0,…,N-2,-𝐈_𝔭,k=N-1.The fact that 𝐐_N-1 exists and is unique is proven by following the same scheme of proof as for Proposition <ref>, by solving for the corresponding coefficient matrices. Finally, the fact that 𝐑_N(w,z) has a representation as in (<ref>) is proven in exactly the same way as in Proposition 4.9 of <cit.>: by checking that this expression satisfies the characterising reproducing property from Proposition <ref>. This completes the proof. Moving on to the proof of Theorem <ref> the following proposition is the key technical ingredient. Analogous results should hold for a more general class of parameters 𝐚.Assume that the parameter sequence 𝐚 satisfiesinf_x∈ℤ_+a_x≥ 1-𝔠 and sup_x∈ℤ_+a_x≤1+𝔠,for some 0≤𝔠 <1/3. For r∈ (𝔠,1-2𝔠) define the constant 𝔠_r,𝔠_r=𝔠+r/1-𝔠<1.Moreover, assume that the matrix-valued functions 𝐟_0(1-z)^± 1,…, 𝐟_L-1(1-z)^± 1 are analytic in an annulus 𝔄_η where η <1-2𝔠. Finally, assume we have the factorisations 𝐟_0,L(1-z)=𝐒_+(z)𝐒_-(z)=𝐒̃_-(z)𝐒̃_+(z), where 𝐒_±, 𝐒̃_± are 𝔭×𝔭 matrix-valued functions satisfying* 𝐒_+^± 1(z), 𝐒̃_+^± 1(z) are analytic for |z|<1 and continuous for |z|≤ 1,* 𝐒_-^± 1(z),𝐒̃_-^± 1(z) are analytic for |z|>1 and continuous for |z|≥ 1,* 𝐒_-(z)∼ z^M𝐈_𝔭 and 𝐒̃_-(z)∼ z^M𝐈_𝔭 as z→∞.Consider the 2𝔭× 2 𝔭 matrix-valued function 𝐘̃:ℂ∖{|z|=1}→ℂ^2𝔭×2𝔭 given as the unique solution to the 2𝔭× 2 𝔭 Riemann-Hilbert problem: * 𝐘̃ is analytic,* 𝐘̃_+(z)=𝐘̃_-(z)[ 𝐈_𝔭 𝐟_0,L(1-z)p_M+N(1-z)^-1; 0_𝔭 𝐈_𝔭 ] on |z|=1,* 𝐘̃(z)=(𝐈_2𝔭+𝒪(z^-1))[ z^N 𝐈_𝔭 0_𝔭; 0_𝔭 z^-N𝐈_𝔭 ] as z→∞.Then, as N→∞ and for |z|<1-2𝔠, we have 𝐘̃(z)=(𝐈_2𝔭+𝒪(𝔠_r^N))[ 0_𝔭 (∏_k=0^M+N-1a_k)𝐒̃_+(z); -(∏_k=0^M+N-1a_k^-1)𝐒_+^-1(z) 0_𝔭 ],for any r∈ (𝔠,1-2𝔠) such that max{|z|,η}<r. Similarly, as N→∞ and for |z|>(1-2𝔠)^-1, 𝐘̃(z)=(𝐈_2𝔭+𝒪(𝔠_r^N))[ (∏_k=0^M+N-1a_k)𝐒̃_-^-1(z)p_M+N(1-z)0_𝔭;0_𝔭 (∏_k=0^M+N-1a_k^-1)𝐒_-(z)p_M+N(1-z)^-1 ],for any r∈(𝔠,1-2𝔠) such that r^-1<min{|z|,η^-1}.We transform, in several steps, the Riemann-Hilbert problem for 𝐘̃(z) to a problem which can be solved explicitly, up to asymptotically negligible terms, by virtue of the factorisation for 𝐟_0,L(1-z). The strategy itself is standard and was followed in <cit.> but there are a couple of interesting complications due to the non-constant inhomogeneity sequence 𝐚.Step 1 We make the following transformation to normalise (in a slightly less standard way) the Riemann-Hilbert problem for 𝐘̃(z) at infinity. Namely, define 𝐗(z) by𝐗(z)=𝐘̃(z) [ p_M(1-z)p_M+N(1-z)^-1𝐈_𝔭0_𝔭;0_𝔭 p_M+N(1-z)p_M(1-z)^-1𝐈_𝔭 ],|z|>1, 𝐘̃(z),|z|<1.Then, a simple computation shows that 𝐗(z) solves the Riemann-Hilbert problem with conditions, sincep_m(1-z)∼(∏_k=0^m-1a_k^-1) z^m as z →∞, 𝐗_+(z) =𝐗_-(z) [ p_M+N(1-z)p_M(1-z)^-1𝐈_𝔭𝐟_0,L(1-z)p_M(1-z)^-1;0_𝔭 p_M(1-z)p_M+N(1-z)^-1𝐈_𝔭 ],|z|=1, 𝐗(z) =(𝐈_2𝔭+𝒪(z^-1)) [∏_k=M^M+N-1a_k𝐈_𝔭0_𝔭;0_𝔭 ∏_k=M^M+N-1a_k^-1𝐈_𝔭 ], asz→∞. Step 2 We make another transformation. 
Consider 𝐕(z) given by, where r∈ (𝔠,1-2𝔠) is picked such that the circles |z|=r and |z|=r^-1 are within the annulus of analyticity𝔄_η from the statement, see Figure <ref> for an illustration,𝐕(z)=𝐗(z) [𝐈_𝔭0_𝔭; p_M^2(1-z)p_M+N(1-z)^-1𝐟_0,L(1-z)^-1𝐈_𝔭 ],1<|z|<r^-1, 𝐗(z) [𝐈_𝔭0_𝔭; -p_M+N(1-z)𝐟_0,L(1-z)^-1𝐈_𝔭 ]r<|z|<1, 𝐗(z),|z|<ror |z|>r^-1.Then, a computation shows that 𝐕(z) solves the following RHP𝐕_+(z) =𝐕_-(z) [𝐈_𝔭0_𝔭; -𝐟_0,L(1-z)^-1p_M+N(1-z)𝐈_𝔭 ],|z|=r, 𝐕_+(z) =𝐕_-(z) [0_𝔭𝐟_0,L(1-z)p_M(1-z)^-1; -𝐟_0,L(1-z)^-1p_M(1-z)0_𝔭 ],|z|=1, 𝐕_+(z) =𝐕_-(z) [𝐈_𝔭0_𝔭; p_M^2(1-z)p_M+N(1-z)^-1𝐟_0,L(1-z)^-1𝐈_𝔭 ],|z|=r^-1, 𝐕(z) =(𝐈_2𝔭+𝒪(z^-1)) [∏_k=M^M+N-1a_k𝐈_𝔭0_𝔭;0_𝔭 ∏_k=M^M+N-1a_k^-1𝐈_𝔭 ], asz→∞. Step 3 Now, consider the following RHP problem for 𝐕̃(z), which disregards the jumps on |z|=r and |z|=r^-1 for 𝐕(z), jumps that we will show next are exponentially small as N →∞,𝐕̃_+(z) =𝐕̃_-(z) [0_𝔭𝐟_0,L(1-z)p_M(1-z)^-1; -𝐟_0,L(1-z)^-1p_M(1-z)0_𝔭 ],|z|=1, 𝐕̃(z) =(𝐈_2𝔭+𝒪(z^-1)) [∏_k=M^M+N-1a_k𝐈_𝔭0_𝔭;0_𝔭 ∏_k=M^M+N-1a_k^-1𝐈_𝔭 ], asz→∞.Then, by our factorisation assumption on 𝐟_0,L(1-z), we can construct the explicit solution to this RHP for 𝐕̃(z) by, since recall p_M(1-z)∼(∏_k=0^M-1a_k^-1) z^M as z →∞,𝐕̃(z)=[ (∏_k=0^M+N-1a_k)𝐒̃_-^-1(z)p_M(1-z)0_𝔭;0_𝔭 (∏_k=0^M+N-1a_k^-1)𝐒_-(z)p_M(1-z)^-1 ] , |z|>1, [ 0_𝔭 (∏_k=0^M+N-1a_k)𝐒̃_+(z); -(∏_k=0^M+N-1a_k^-1)𝐒_+^-1(z) 0_𝔭 ], |z|<1.Step 4 We now show that the jumps on the circles |z|=r and |z|=r^-1 for 𝐕(z) are exponentially small. Let || · || be any norm on 𝔭×𝔭 matrices. Then,for r∈ (𝔠,1-2𝔠) picked such that the circles |z|=r and |z|=r^-1 are within the annulus of analyticity𝔄_η of 𝐟_0,L(1-z)^-1, we have, with the implicit constant being independent of N and r,sup_|z|=r||𝐟_0,L(1-z)^-1p_M+N(1-z)|| =𝒪(1/(inf_k∈ℤ_+a_k)^N(sup_|z|=rsup_k∈ℤ_+ |z+a_k-1|)^N)=𝒪(1/(1-𝔠)^N(sup_|z|=rsup_x∈ [-𝔠,𝔠] |z+x|)^N)=𝒪((𝔠+r/1-𝔠)^N),and similarly,sup_|z|=r^-1||𝐟_0,L(1-z)^-1p_M^2(1-z)p_M+N(1-z)^-1|| =𝒪((sup_k∈ℤ_+a_k)^N(sup_|z|=r^-1sup_k∈ℤ_+1/|z+a_k-1|)^N)=𝒪((1+𝔠)^N(1/inf_|z|=r^-1inf_x∈ [-𝔠,𝔠] |z+x|)^N)=𝒪((1+𝔠/r^-1-𝔠)^N).Observe that, for r∈ (𝔠,1-2𝔠) with 0≤𝔠<1/3, we have𝔠_r=𝔠+r/1-𝔠=max{𝔠+r/1-𝔠,1+𝔠/r^-1-𝔠}<1.Now, consider the function 𝐉(z) given by𝐉(z)=𝐕(z)𝐕̃(z)^-1.Then, observe that 𝐉(z), which only has jumps on the circles |z|=r and |z|=r^-1 which are exponentially small, can be solved in terms of a Neumann series, see for example <cit.>, to give𝐉(z)=𝐈_2𝔭+𝒪(𝔠_r^N), asN →∞,uniformly on compact sets of ℂ∖({|z|=r}∪{|z|=r^-1}). This completes this step.Step 5 We finally undo the transformations to obtain the statement of the proposition. For |z|<1-2𝔠 we can choose r∈ (𝔠,1-2𝔠) such that max{|z|,η}<r, to obtain𝐘̃(z)=𝐗(z)=𝐕(z) =(𝐈_2𝔭+𝒪(𝔠_r^N))𝐕̃(z)=(𝐈_2𝔭+𝒪(𝔠_r^N))[ 0_𝔭 (∏_k=0^M+N-1a_k)𝐒̃_+(z); -(∏_k=0^M+N-1a_k^-1)𝐒_+^-1(z) 0_𝔭 ].For |z|>(1-2𝔠)^-1 we can choose r∈ (𝔠,1-2𝔠) such that r^-1<min{|z|,η^-1}, to obtain𝐘̃(z) =𝐗(z) [ p_M(1-z)p_M+N(1-z)^-1𝐈_𝔭0_𝔭;0_𝔭 p_M+N(1-z)p_M(1-z)^-1𝐈_𝔭 ]=𝐕(z)[ p_M(1-z)p_M+N(1-z)^-1𝐈_𝔭0_𝔭;0_𝔭 p_M+N(1-z)p_M(1-z)^-1𝐈_𝔭 ]=(𝐈_2𝔭+𝒪(𝔠_r^N))𝐕̃(z)[ p_M(1-z)p_M+N(1-z)^-1𝐈_𝔭0_𝔭;0_𝔭 p_M+N(1-z)p_M(1-z)^-1𝐈_𝔭 ]=(𝐈_2𝔭+𝒪(𝔠_r^N))[ (∏_k=0^M+N-1a_k)𝐒̃_-^-1(z)p_M+N(1-z)0_𝔭;0_𝔭 (∏_k=0^M+N-1a_k^-1)𝐒_-(z)p_M+N(1-z)^-1 ].This completes the proof. We can finally proveTheorem <ref>. Note that, by our assumption on 𝐚 we can pick the 𝖢_𝐚 contour to be the circle |z-1|=1. 
After the change of variables z↦ 1-z, w↦ 1-w, we can then write the kernel 𝖪_N presented in block matrix form in (<ref>) as follows:[𝖪_N[(r,m𝔭+j);(r',k𝔭+i)]]_i,j=0^𝔭-1=-1_r>r'1/2πi1/a_m∮_|z|=1𝐟_r',r(1-z)p_k(1-z)/p_m+1(1-z)dz+1/(2πi)^21/a_k∮_|z|=1∮_|w|=1𝐟_r',L(1-w)1/z-w[ 0_𝔭 𝐈_𝔭 ]𝐘̃^-1(w)𝐘̃(z)[ 𝐈_𝔭; 0_𝔭 ]𝐟_0,r(1-z)××p_k(1-w)/p_M+N(1-w)p_m+1(1-z)dz dw,where 𝐘̃ is the Riemann-Hilbert problem given in Proposition <ref>. We then deform the contour for w to a circle of radius at least (1-2𝔠)^-1 but still within the annulus of analyticity 𝔄_η and similarly deform the contour for z to a circle of radius at most (1-2𝔠) but also contained in 𝔄_η. Note that, by our assumption this is possible. Hence, from Proposition <ref> wehave for any δ∈(𝔠,1-2𝔠) such that δ^-1<min{|w|,η^-1}1/p_M+N(1-w)[ 0_𝔭 𝐈_𝔭 ]𝐘̃^-1(w)=1/p_M+N(1-w)[ 0_𝔭 𝐈_𝔭 ](𝐈_2𝔭+𝒪(𝔠_δ^N))××[ (∏_k=0^M+N-1a_k^-1)𝐒̃_-(w)p_M+N(1-w)^-1 0_𝔭; 0_𝔭 (∏_k=0^M+N-1a_k)𝐒_-^-1(w)p_M+N(1-w) ]=(∏_k=0^M+N-1a_k)[ 0_𝔭 𝐒_-^-1(w) ](𝐈_2𝔭+𝒪(𝔠_δ^N)),and similarly for any δ∈ (𝔠,1-2𝔠) such that max{|z|,η}<δ,𝐘̃(z)[ 𝐈_𝔭; 0_𝔭 ] =(𝐈_2𝔭+𝒪(𝔠_δ^N)) [ 0_𝔭 (∏_k=0^M+N-1a_k)𝐒̃_+(z); -(∏_k=0^M+N-1a_k^-1)𝐒_+^-1(z) 0_𝔭 ][ 𝐈_𝔭; 0_𝔭 ]=-(∏_k=0^M+N-1a_k^-1)(𝐈_2𝔭+𝒪(𝔠_δ^N))[ 0_𝔭; 𝐒_+^-1(z) ].Plugging these expressions in the correlation kernel 𝖪_N, noting that ∏_k=0^M+N-1a_k^-1 and ∏_k=0^M+N-1a_kcancel out exactly, and taking the N →∞ limit first (recall 𝔠_δ<1), and then deforming the contours to the circles |z|=1^- and |w|=1^+ we getlim_N→∞[𝖪_N[(r,m𝔭+j);(r',k𝔭+i)]]_i,j=0^𝔭-1=[𝖪_∞[(r,m𝔭+j);(r',k𝔭+i)]]_i,j=0^𝔭-1.This implies the convergence of all the correlation functions and since the state space is discrete it implies convergence in distribution of the corresponding point processes, and thus of the minimal particles, and gives the statement of the theorem.We finally give an explicit example where all the conditions of Theorem <ref> are satisfied. We focus on the scalar case 𝔭=1 with functions corresponding to the three types of non-intersecting processes (pure-birth, Bernoulli, geometric) we have been studying in this paper. Recall that for 𝔭=1 we can just take 𝐒̃_±(z)=𝐒_±(z). For a detailed study of the factorisation in the matrix case 𝔭≥ 2, see <cit.>, also <cit.>.Let 𝔭=1. Assume that the parameter sequence 𝐚 satisfiesinf_x∈ℤ_+a_x≥ 1-𝔠 and sup_x∈ℤ_+a_x≤ 1+𝔠,for some 0≤𝔠 <1/3. Suppose the functions 𝐟_0(z),…,𝐟_L-1(z) are a product of factors of the form (1-α z),(1+β z)^-1,e^-tz so that in particular the function 𝐟_0,L(1-z) is given by𝐟_0,L(1-z)=∏_r=1^L_1(1-α_r+α_r z)∏_r=1^L_21/1+β_r-β_r z∏_r=1^L_3e^t_r z-t_r,for some L_1,L_2,L_3∈ℤ_+ and constants α_r,β_r, t_r ∈ℝ_+. Moreover, suppose there exist exactly M indices i_1,…,i_M such that(2-2𝔠)^-1<α_i_j<(1+𝔠)^-1, j=1,…,M,and for r≠ i_j we have α_r<1-2𝔠/2-2𝔠. Finally, assume that β_r<1/2𝔠-1. Then, with the factorisation 𝐟_0,L(1-z)=𝐒_+(z)𝐒_-(z) chosen as follows𝐒_-(z) =∏_j=1^M(z+1-α_i_j/α_i_j), 𝐒_+(z) =∏_j=1^Mα_i_j∏_r=1, r≠ i_j^L_1 (1-α_r+α_r z) ∏_r=1^L_21/1+β_r-β_r z∏_r=1^L_3e^t_r z-t_r,all the conditions of Theorem <ref> are satisfied.Observe that, 𝐟_i(1-z)^± 1 can have poles only at points (1-α_r^-1) or (1+β_r^-1) for some r. Then, elementary computations show that the conditions above on the α_r,β_r ensure that 𝐟_0,L(z)∈𝖧𝗈𝗅(ℍ_-R) for some R>R(𝐚)=sup_x∈ℤ_+a_x-inf_x∈ℤ_+a_x and that we can find η<1-2𝔠 such that none of the points (1-α_r^-1) or (1+β_r^-1) are in the intervals (-η^-1,-η)∪ (η,η^-1). 
Finally, it is easy to see by the assumptions on the α_r,β_r's the functions 𝐒^± 1_-(z), 𝐒^± 1_+(z) are analytic for |z|>1 and |z|<1 respectively (and continuous up to the boundary) and we have the correct growth condition as z →∞ for 𝐒_-(z).Note that, the induced measure on 𝕎_N^L-1 by the non-intersecting random walks is given by (<ref>) with 𝔭=1 and 𝐟_r=f_r,r+1 as in the statement of Theorem <ref>. Then, the result follows by combining Theorem <ref> and Proposition <ref> with 𝖪_∞^L,M=𝖪_∞. § CONVERGENCE TO THE DISCRETE BESSEL POINT PROCESS Let us denote by K̃_t^(N)(y_1,y_2) the correlation kernel of the point process with the origin shifted to N, namely {𝖷_i^(N)(t)-N}_i, which is then given by from Theorem <ref>, where we make a couple of changes of variables in the contour integrals,K̃_t^(N)(y_1,y_2) =-1/a_y_1+N1/(2 πi)^2∮_𝖢_𝐚,0 dw∮_𝖢_0 du p_y_2+N(u)e^-tw/p_y_1+N+1(w)e^-tuw^N/u^N1/w-u=1/a_y_1+N1/(2 πi)^2∮_1-𝖢_𝐚,0 dw∮_1-𝖢_0 du p_y_2+N(1-u)e^tw/p_y_1+N+1(1-w)e^tu(1-w)^N/(1-u)^N1/w-u=1/a_y_1+N1/(2 πi)^2∮_|w|=ϵ^-1 dw∮_|u|=(ϵ')^-1 du p_y_2+N(1-u)e^tw/p_y_1+N+1(1-w)e^tu(1-w)^N/(1-u)^N1/w-u=1/a_y_1+N1/(2πi)^2∮_|w|=ϵdw∮_|u|=ϵ'du p_y_2+N(1-1/u)/p_y_1+N+1(1-1/w)e^tw^-1/e^tu^-1u^N-1/w^N+1(1-w)^N/(1-u)^N1/u-w ,with the radii 0<ϵ<ϵ' chosen appropriately small so that the circles |w|=ϵ^-1 and |z|=(ϵ')^-1 contain all the relevant poles. Let us pick, for concreteness, for N large enough, ϵ=ζ/N and ϵ'=2ζ/N and make the change of variables v=N/ζu and z=N/ζw. We then obtain that the kernel is given by, for N large enough,K̃^(N)_t(y_1,y_2)=1/a_y_1+N1/(2πi)^2∮_|z|=1dz∮_|v|=2dv p_y_2+N(1-N/ζ v)/p_y_1+N+1(1-N/ζ z)e^tN/ζz^-1/e^tN/ζv^-1v^N-1/z^N+1(1-ζ z/N)^N/(1-ζ v/N)^N1/v-zN/ζ.Consider the ratio of characteristic polynomialsp_y_2+N(1-N/ζ v)/p_y_1+N+1(1-N/ζ z)=∏_k=0^N-1(1-(1-N/ζ v)/a_k)/∏_k=0^N-1(1-(1-N/ζ z)/a_k)×∏_k=N^y_2+N-1(1-(1-N/ζ v)/a_k)/∏_k=N^y_1+N(1-(1-N/ζ z)/a_k).The second factor above satisfies, where as usual 𝔣_N ∼𝔤_N means lim_N→∞𝔣_N/𝔤_N=1, ∏_k=N^y_2+N-1(1-(1-N/ζ v)/a_k)/∏_k=N^y_1+N(1-(1-N/ζ z)/a_k)∼(N/ζ v)^y_2/(N/ζ z)^y_1+1∏_k=N^y_1+Na_k/∏_k=N^y_2+N-1a_k=(N/ζ v)^y_2/(N/ζ z)^y_1+1∏_k=0^y_1+Na_k/∏_k=0^y_2+N-1a_k.Recall that {a_x}_x=0^∞ is uniformly bounded. If we write a_i^(N)=N^-1(ζ a_i-N), for 1≤ i ≤ N, then we have a_i^(N)→ 0 for all i, ∑_i=1^N a_i^(N)→ζ (a̅-1) and ∑_i=1^N (a_i^(N))^2 → 0. We then note that, uniformly on compact sets in ℂ, we have as N →∞,∏_k=0^N-1(1+v(ζ a_k-ζ/N))=∏_k=0^N-1(1+va_i^(N))→ e^ζ(a̅-1)v.Hence, we obtain that the first factor in (<ref>) satisfies, as N →∞,∏_k=0^N-1(1-(1-N/ζ v)/a_k)/∏_k=0^N-1(1-(1-N/ζ z)/a_k)=z^N/v^N∏_k=0^N-1(1+v(ζ a_k-ζ/N))/∏_k=0^N-1(1+z(ζ a_k-ζ/N))∼z^N/v^Ne^ζ(a̅-1)v/e^ζ(a̅-1)z.Thus, we get the asymptotics as N →∞,v^N-1/z^N+1p_y_2+N(1-N/ζ v)/p_y_1+N+1(1-N/ζ z)N/ζ∼z^y_1/v^y_2+1e^ζ(a̅-1)v/e^ζ(a̅-̅1̅)z(N/ζ)^y_2-y_1∏_k=0^y_1+Na_k/∏_k=0^y_2+N-1a_k.Hence, putting everything together we obtain, as N →∞,K̃_ζ/N^(N)(y_1,y_2) ∼(N/ζ)^y_2-y_1∏_k=0^y_1+N-1a_k/∏_k=0^y_2+N-1a_k1/(2πi)^2∮_|z|=1dz∮_|v|=2dv z^y_1/v^y_2+1e^z^-1-v^-1+ζa̅v-ζa̅z1/v-z.Observe that the ratio of factors in front of the contour integral cancel out when we take the determinant. Moreover, we can deform the z and v contours as long as the z-contour is contained in the v-contour and contains 0. 
Hence, for any m≥ 1, (K̃_ζ/N^(N)(y_i,y_j))_i,j=1^m N →∞⟶ (𝐉_ζa̅(y_i,y_j))_i,j=1^m. This implies the convergence of all the correlation functions and, since the state space is discrete, it implies convergence in distribution of the corresponding point processes, and thus of the maximal particles, and gives the statement of the theorem. School of Mathematics, University of Edinburgh, James Clerk Maxwell Building, Peter Guthrie Tait Rd, Edinburgh EH9 3FD, U.K. [email protected]
http://arxiv.org/abs/2310.18055v2
{ "authors": [ "Theodoros Assiotis" ], "categories": [ "math.PR", "math-ph", "math.CO", "math.MP" ], "primary_category": "math.PR", "published": "20231027110933", "title": "On some integrable models in inhomogeneous space" }
^1Department of Physics, University of Arizona Tucson, AZ 85721, USA ^2Institute for Theoretical Solid State Physics, Leibniz IFW Dresden, Helmholtzstraße 20, 01069 Dresden, Germany ^3Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India We show from many-body quantum mechanical calculations that there occur structurally distinct triplet-pair eigenstates in the intramolecular singlet fission (iSF) compound pentacene-tetracene-pentacene. Triplet excitons occupy neighboring pentacene and tetracene monomers in the higher energy doubly degenerate triplet-triplet multiexcitons, and terminal pentacene chromophores in the lower energy multiexciton. The lowest energy multiexciton is reached by ultrafast triplet migration within the triplet-triplet manifold, a result with profound implications for the design of superior iSF compounds. Distinct contiguous versus separated triplet-pair multiexcitons in an intramolecular singlet fission chromophore R. Chesler^1, P. Bhattacharyya^2, A. Shukla^3 and S. Mazumdar^1 January 14, 2024 ================================================================================================================ Electron correlation effects on the optoelectronic properties of carbon-based π-conjugated systems have been of continuous and intense interest <cit.>. One consequence of strong π-electron correlation is the occurrence of the two electron-two hole (2e-2h) bound spin triplet-pair state ^m(T_1T_1) energetically below the one electron-one hole (1e-1h) optical spin-singlet state in long linear conjugated polyenes <cit.> and polyacenes <cit.> (here and in what follows T_1 is the lowest triplet exciton, and m the overall spin multiplicity). This energy ordering is of direct relevance to singlet fission (SF), a spin-allowed photophysical process involving the internal conversion of the optically accessible spin-singlet exciton to the optically dark ^1(T_1T_1) state <cit.>. The possibility of circumventing the Shockley-Queisser limit to photoconductivity <cit.> in organic solar cells has been the driver of the field. In systems with small triplet-triplet binding energy E_b, defined as 2×E(T_1) - E(^1(T_1T_1)), where E(T_1) and E(^1(T_1T_1)) are the energies of the lowest free triplet and the triplet-triplet multiexciton, ^m(T_1T_1) will undergo dissociation to two free triplets T_1. In principle, each triplet can donate an electron to an acceptor molecule in a donor-acceptor heterostructure, thereby doubling the photoconductivity <cit.>.
Following theoretical research that demonstrated that the bound ^1(T_1T_1) had additional ESA in the IR over and above ESA in the visible <cit.>, and experimental observations of this additional ESA <cit.>, it is now accepted that while iSF may lead to rapid ^1(T_1T_1) generation, quantitative dissociation to free triplets is rare. Successful implementation of iSF in organic photovoltaics will require fast ^1(T_1T_1) generation as well as its fast dissociation to free triplets, overcoming competing photophysical processes that include triplet recombination. This requirement poses serious theoretical and experimental challenges, as fast ^1(T_1T_1) generation necessarily requires strong effective electronic coupling between the chromophore components occupied by the individual triplet excitations of the ^1(T_1T_1) wavefunction <cit.>. The latter in turn leads to strong E_b, which slows down and even prevents triplet dissociation. Recent iSF research has therefore focused on the search for appropriate chromophore-bridge molecule combinations that can solve the riddle of fast ^1(T_1T_1) generation yet small E_b. From a theoretical perspective this requires (i) precise understanding of the ^1(T_1T_1) wavefunction, (ii) the dependence of the triplet-triplet entanglement on the structure of the bridge molecule, (iii) the mechanism of ^1(T_1T_1) formation, and (iv) the mechanism of spin dephasing. Simply increasing the length of the bridge molecule is not an optimal solution; even as that reduces E_b it also slows ^1(T_1T_1) generation <cit.>. The ^1(T_1T_1) has therefore been the focus of several recent reviews <cit.>. An ingenious advance in the design of iSF compounds with ultrafast ^1(T_1T_1) generation (in a few ps) yet long lifetime (hundreds of ns) was achieved recently by Pun et al. <cit.>. The authors synthesized and tested a series of iSF compounds P-Tn-P, where P, T and n refer to terminal pentacene chromophores, linker tetracene molecules and the number of linker molecules, respectively (see Fig. 1). According to the authors, initial photoexcitation at the pentacene singlet exciton energy creates a localized excitation on a terminal P monomer, which undergoes rapid conversion to a contiguous triplet-triplet pair ^1(T_1[P]T_1[T]), where T_1[P] and T_1[T] refer to the lowest triplets on P and T, respectively. The difference in the triplet energies of T and P monomers (∼ 0.3 eV) then drives triplet migration and the transition from contiguous triplets to the lowest energy triplet-pair ^1(T_1[P]T_1[P]), in which the triplets occupy only the terminal pentacenes. This conclusion was reached from the observation of transient ESA in the IR due to ^1(T_1[P]T_1[T]) almost immediately after photoexcitation, and the rapid disappearance of this ESA (in 3.0 ps and 5.3 ps in n=2 and n=3, respectively) followed by ^1(T_1[P]T_1[P]) generation <cit.>.
The physical separation between the triplets and uphill recombination are behind the long ^1(T_1[P]T_1[P]) lifetime <cit.>. The experimental results are exciting, as they for the first time suggest that triplet separation can occur by downhill triplet migration. Theoretical evidence for distinct phase-coherent high energy ^1(T_1[P]T_1[T]) and low energy ^1(T_1[P]T_1[P]) states that would be required for the proposed triplet migration scheme <cit.>, as opposed to quantum mechanical superpositions, currently does not exist. While a weakly phase-coherent physically separated triplet-triplet ^1(T_1 ⋯ T_1) has also been suggested in the context of xSF <cit.>, there the weakly bound multiexciton is reached from the strongly bound ^1(T_1T_1) in an uphill process, overcoming the binding energy of the nearest neighbor triplet-pair. Convincing theoretical evidence for such a ^1(T_1 ⋯ T_1) state in xSF is lacking. Furthermore, in the structurally related compounds P-β-P, β = benzene (B), naphthalene (N) and anthracene (A), triplets in the lowest triplet-triplet ^1(T_1T_1) do occupy the terminal P chromophores only, but the ^1(T_1T_1) in these is reached via virtual charge-transfer (CT) excitation between the terminal pentacene chromophores themselves, effectively bypassing the bridge molecules <cit.>. Taken together, these observations suggest that the mechanism of iSF in these closely related compounds changes with the length of the bridge molecule. A clear understanding of the roles of interchromophore CT versus triplet migration, as functions of electron correlation and bridge molecule length, will be essential to establish triplet migration as the dominant pathway to the lowest triplet-triplet in P-Tn-P. In what follows we present the results of detailed quantum many-body calculations of the excited state electronic structures of the n=1 compound P-T-P, focusing not only on the triplet-triplet states, but also on the mechanism of their generation. Our calculations are based on the π-electron only Pariser-Parr-Pople (PPP) Hamiltonian <cit.>, which has been widely used to describe π-conjugated carbon (C)-based systems <cit.>, H=∑_⟨ ij ⟩,σ t_ij(c_iσ^† c_jσ+c_jσ^† c_iσ) + U∑_i n_i↑n_i↓+∑_i<j V_ij (n_i-1)(n_j-1) Here c^†_iσ creates an electron with spin σ on the p_z orbital of C-atom i, n_iσ = c^†_iσ c_iσ is the number of electrons with spin σ on atom i, and n_i=∑_σ n_iσ is the total number of electrons on the atom. t_ij are nearest neighbor electron hopping integrals, U is the Coulomb repulsion between two electrons occupying the p_z orbital of the same C-atom, and V_ij is the long range Coulomb interaction. The parameters are taken from our previous applications of the model to acene monomers and dimers <cit.>. We have chosen the peripheral and internal bond lengths for the acene monomers to be 1.40 Å and 1.46 Å, respectively, and the corresponding t_ij as -2.4 and -2.2 eV, respectively <cit.>. We have assumed the molecules to be planar for simplicity, with the interunit bond length 1.46 Å and the corresponding hopping integral -2.2 eV. Monomer rotation effects can be taken into consideration by reducing the interunit t_ij by a multiplicative factor of cos θ, where θ is the dihedral angle <cit.>. Explicit calculations have confirmed that the physical conclusions are not altered substantively by ignoring rotation effects <cit.>. We use the screened Ohno parameterization for the long range Coulomb repulsion, V_ij=U/κ√(1+0.6117 R_ij^2), where R_ij is the distance in Å between C-atoms i and j and κ is an effective dielectric constant <cit.>.
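To make the model concrete, the following is a minimal illustrative sketch (not the diagrammatic exciton-basis MRSDCI machinery used in this work) that builds the PPP Hamiltonian above for the smallest possible fragment, a single C=C unit with two p_z orbitals at the peripheral bond length, and diagonalizes it exactly in the half-filled two-electron sector. The two-site fragment and the Jordan-Wigner encoding are choices made here purely for illustration; the hopping, U, κ and screened Ohno parameters are those quoted in the text (U = 7.7 eV and κ = 1.3 are given in the next sentence).

```python
# Illustrative exact diagonalization of the two-site (C=C) PPP Hamiltonian.
import numpy as np

t, U, kappa, R = -2.4, 7.7, 1.3, 1.40                 # eV, eV, -, Angstrom
V = U / (kappa * np.sqrt(1.0 + 0.6117 * R**2))        # screened Ohno V_12

# spin-orbitals: 0 = site 1 up, 1 = site 1 down, 2 = site 2 up, 3 = site 2 down
L = 4
I2, Z = np.eye(2), np.diag([1.0, -1.0])
low = np.array([[0.0, 1.0], [0.0, 0.0]])              # single-orbital annihilator

def ann(j):
    """Jordan-Wigner annihilation operator c_j on the 2^L-dimensional Fock space."""
    mats = [Z] * j + [low] + [I2] * (L - j - 1)
    out = np.eye(1)
    for m in mats:
        out = np.kron(out, m)
    return out

c = [ann(j) for j in range(L)]
n = [cj.T @ cj for cj in c]                           # occupation-number operators
n1, n2 = n[0] + n[1], n[2] + n[3]                     # site occupations
one = np.eye(2 ** L)

H = sum(t * (c[s].T @ c[s + 2] + c[s + 2].T @ c[s]) for s in (0, 1))   # hopping
H += U * (n[0] @ n[1] + n[2] @ n[3])                                   # on-site U
H += V * (n1 - one) @ (n2 - one)                                       # Ohno V_12

# restrict to the half-filled (two-electron) sector and diagonalize
mask = np.isclose(np.diag(n1 + n2), 2.0)
E = np.linalg.eigvalsh(H[np.ix_(mask, mask)])
print("lowest levels (eV):", np.round(E[:4], 3))      # singlet ground state, then the triplet
```

For this repulsive half-filled fragment the lowest level is the covalent singlet ground state and the next three degenerate levels are the components of the triplet, the T_1-type excitation out of which the multiexcitons discussed below are built.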
The Coulomb parameter for C-atoms U (7.7 eV) and the dielectric constant κ (1.3) are chosen based on fitting monomer energetics <cit.>. Our calculations are done within the diagrammatic molecular exciton basis to obtain physical pictorial descriptions of eigenstates <cit.>. Accurate determinations of the energy orderings and correlated wavefunctions of the ^1(T_1T_1) excited states demand inclusion of high order correlation effects. We use the multiple reference singles and doubles configuration interaction (MRSDCI) procedure that incorporates CI with dominant me-mh excited configurations (m=1-4) <cit.>. The MRSDCI calculations are done over exciton basis active spaces of 22-26 localized molecular orbitals (8-10 MOs for each P monomer and 6-8 per T monomer, see Supplemental Material (SM), S.1 <cit.>). Energy convergence required MRSDCI matrices of dimensions several times 10^6. In Fig. 2(a) we have shown the calculated ground state optical absorption spectra for anti- and syn-P-T-P. The absorption spectra of Fig. 2(a) are different from those for P-B-P, P-N-P and P-A-P in two important ways. First, the CT absorption in P-T-P occurs at an energy higher than the localized monomeric absorptions of both the terminal pentacene and the linker tetracene, in contrast to P-B-P, P-N-P and P-A-P, where CT absorption is either to a state that occurs below the localized bridge molecule exciton (P-B-P and P-N-P) or to a state that exhibits significant configuration mixing with the bridge exciton (P-A-P). The latter is the criterion for bridge resonance <cit.> within correlated-electron theory. Second, the nearly equal strengths of the CT absorptions for anti- and syn-P-T-P are also in contrast to those in β = B, N and A, where the CT absorption occurs only for anti connectivity in P-B-P and P-N-P, and is stronger for anti-P-A-P than in syn-P-A-P <cit.>. The difference between anti versus syn connectivities there is ascribed to the tendency to electron-correlation driven antiferromagnetic spin-coupling between electrons on nearest neighbor C-atoms, which promotes selectively stronger direct CT between the terminal pentacenes for anti connectivity. The exciton basis allows precise characterizations of the final states of the absorption bands as excitations localized on the monomer molecules versus CT. Detailed examinations of the exciton basis CT eigenstates indicated a one-to-one correspondence between the iSF rate and (a) the strength of the dipole coupling between the ground state and the CT eigenstate, and (b) the strong contribution to the CT eigenstate by the configuration with direct CT coupling between the terminal P chromophores. The constructive (destructive) quantum interference leading to strong (vanishing to weaker) CT absorption in anti (syn) β = B, N, A compounds is direct evidence for CT-mediated iSF <cit.>. We show in Figs. 2(b)(i) and (ii) the most dominant exciton basis contributions to the CT eigenstates with the largest dipole couplings to the ground states in anti- and syn-P-T-P. In strong contrast to P-B-P, P-N-P and P-A-P, the relative weight of the configuration with direct CT coupling between the terminal P chromophores is noticeably smaller than that of configurations with CT between P and T (see (i)), or is vanishingly small (see (ii)). Additional CT states also contribute to the very strong CT band in Fig. 2(a).
The energies, dipole couplings to the ground states and the dominant exciton basis contributions to all these states are shown in supplementary Table I <cit.>. The relative weights of configurations with direct CT between the P monomers are smaller than those of configurations with CT between P and T, or are vanishingly small, in every case. Dominant CT in all these wavefunctions is between nearest neighbor P and T monomers, as in the PT dimer <cit.>, suggesting already that CT-mediation here can generate the contiguous triplet-triplet ^1(T_1[P]T_1[T]) but not ^1(T_1[P]T_1[P]) with distant triplets. CT wavefunctions of symmetric iSF molecules occur as nearly degenerate pairs with even and odd parity symmetries. Ground state absorption is to the odd parity CT eigenstate and provides only indirect information about CT-mediated SF. The CT process directly relevant for SF is the virtual excitation between the optical singlet and the even parity CT excitation <cit.>. Dominant contributions to the final states of the strongest dipole-allowed CT excitations from the optical exciton localized on pentacene are shown in Figs. 3(a) and (b). Supplementary Table II gives the normalized contributions to the final states of all wavefunctions to which ESAs from the optical singlets are expected, and the dipole couplings between the initial and final eigenstates <cit.>. Once again, CT between the terminal P molecules is significantly smaller than that between P and T. Taken together, Figs. 2 and 3 give clear evidence that ^1(T_1[P]T_1[P]) is not generated in a single-step CT-mediated process. We have calculated near-exact energies and wavefunctions of the lowest triplet-triplet eigenstates of P-T-P. The lowest of these, ^1(T_1[P]T_1[P]), occurs energetically below the optical exciton, while the doubly degenerate higher energy ^1(T_1[P]T_1[T]) and ^1(T_1[T]T_1[P]) states occur immediately above the optical exciton (2.4 eV versus 2.3 eV). The wavefunctions are shown in Figs. 4(a)-(c). Triplet excitations in ^1(T_1[P]T_1[P]) are localized entirely on the terminal pentacenes. Higher energy triplet-triplet eigenstates consist entirely of even and odd linear combinations of ^1(T_1[P]T_1[T]) and ^1(T_1[T]T_1[P]). Fig. 4 shows only the largest contributions to the multiexciton eigenstates. We have examined up to 15 higher order quadruple excitations in each case, with normalized coefficients down to 0.03. Even these higher order excitations are strictly localized either on nearest neighbor monomers or on distant monomers, but not both in the same eigenstate. It is now interesting to go back to Fig. 3: the admixing between the first and last terms in the two wavefunctions here gives indirect proof of the hypothesis that initial triplet-triplet multiexciton generation occurs via virtual excitation of the CT eigenstate dipole-coupled to the optical singlet <cit.>. Free triplet generation from the bound triplet-triplet is preceded by spin mixing between overall spin singlet (m=1), triplet (m=3) and quintet (m=5) multiexcitons <cit.>. In Table I we have given the calculated m=1 and m=5 energies of ^m(T_1[P]T_1[P]) and ^m(T_1[P]T_1[T]). Δ_s between the quintet and singlet spin states in the contiguous triplet-pair ^m(T_1[P]T_1[T]) is large compared to room temperature, explaining the preference for downhill triplet migration from this state over spin dephasing <cit.>.
Fast thermally-induced mixing between the spin states is however expected in ^m(T_1[P]T_1[P]). The larger distance between the triplet excitons occupying the terminal pentacenes in n > 1 P-Tn-P would lead to even smaller Δ_s, explaining the very long lifetimes of the triplet-triplet in these compounds <cit.>. In summary, there indeed occur distinct triplet-triplet multiexcitons in P-T-P (and by implication, in P-Tn-P) with triplets occurring on neighboring versus distant acene monomers. This result is reminiscent of the classic work by Tavan and Schulten, who showed that in linear π-conjugated polymers there occurs a "band" of covalent two-photon states <cit.>. The use of the exciton basis <cit.> allows us to physically locate the individual triplets in the multiexciton states. The mechanism of the lowest energy ^1(T_1T_1) generation in iSF chromophores is indeed dependent on the length of the bridge acene. Direct generation of the ^1(T_1T_1) from the optical excitation has been suggested in the absence of a linker <cit.>, but this is contentious <cit.>. With short acenes (benzene, naphthalene and anthracene) as linkers, bridge monomer excitons have relatively high energies. Consequently, contiguous triplet-triplets, when they exist (see <cit.>, Fig. S.5), occur significantly above the lowest optical exciton on the terminal chromophore and are irrelevant to iSF. CT states dominated by CT between the terminal chromophore and the bridge molecule are also high in energy. Direct CT between the terminal chromophores is both energetically and optically accessible in these cases and iSF is mediated by end-to-end CT, with quantum interference due to electron correlation playing a strong role <cit.>. With increasing length of the bridge molecule there occurs a reversal in the energy orderings of the contiguous triplet-triplet and the lowest CT state, a correlation effect also reminiscent of the length dependence of the energy orderings of excited states in the shortest linear polyenes <cit.>. The iSF process now occurs in two steps. First there occurs nearest neighbor CT-mediated generation of the contiguous triplet-triplet (see Fig. 3), followed by triplet migration to the lowest energy triplet-triplet. Large (small) Δ_s in T_1[P]T_1[T] (T_1[P]T_1[P]) implies rapid dephasing only following the generation of T_1[P]T_1[P]. Most importantly, which particular mechanism dominates can be anticipated already from the ground state absorption spectra: CT absorption below or at the molecular absorption energy due to the bridge molecule indicates CT-mediated SF that will exhibit strong quantum interference effects. CT absorption higher in energy than the bridge exciton is due to CT among nearest monomers; there is no dependence of absorption strength on connectivity here, and the final state of iSF is reached by ultrafast triplet migration. While the present theoretical work is based on acene compounds, the fundamental conclusions as well as the computational approach can both be extended to other systems, thereby contributing to the design of new iSF chromophores. Work at Arizona was partially supported by National Science Foundation (NSF) grant NSF-CHE-1764152. Some of the calculations were performed using high performance computing resources maintained by the University of Arizona Research Technologies department and supported by the University of Arizona Technology and Research Initiative Fund (TRIF), University Information Technology Services (UITS), and Research, Innovation, and Impact (RII). P. B.
acknowledges financial support from the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), Project ID 441216021,and U. Nitzsche for technical support.10Baeriswyl92a D. Baeriswyl, D. K. Campbell, and S. Mazumdar. An overview of the theory of π-Conjugated polymers. In H. Kiess, editor, Conjugated Conducting Polymers. Springer Verlag, Berlin, 1992.Soos94a Z. G. Soos, D. S. Galvao, and S. Etemad. Fluorescence and excited-state structure of conjugated polymers. Adv. Mater., 6:280–287, 1994.Ramasesha00a S. Ramasesha, Swapan K. Pati, Z. Shuai, and J.L. Brédas. The density matrix renormalization group method: Application to the low-lying electronic states in conjugated polymers. Adv. Quant. Chem., pages 121 – 215, 2000.Barford05a W. Barford. Electronic and Optical Properties of Conjugated Polymers. Oxford Science Publications, 2005.Hudson82a B. S. Hudson, B. E. Kohler, and K. Schulten. Linear polyene electronic-structure and potential surfaces. Excited States, 6:1–95, 1982.Raghu02a C. Raghu, Y. Anusooya Pati, and S. Ramasesha. Density-matrix renormalization-group study of low-lying excitations of polyacene within a Pariser-Parr-Pople model. Phys. Rev. B, 66:035116, 2002.Sanders20a S. N. Sanders, E. Kumarasamy, K. J. Fallon, M. Y. Sfeir, and L. M. Campos. Singlet fission in a hexacene dimer: energetics dictate dynamics. Chem. Sci, 11:1079–1084, 2020.Smith13a M. B. Smith and J. Michl. Recent advances in singlet fission. Annu. Rev. Phys. Chem., 64:361–386, 2013.Lee13a J. Lee, P. Jadhav, P. D. Reusswig, S. R. Yost, N. J. Thompson, D. N. Congreve, E. Hontz, T. Van Voorhis, and M. A. Baldo. Singlet exciton fission photovoltaics. Acc. Chem. Res., 46:1300–1311, 2013.Rao17a A. Rao and R. H. Friend. Harnessing singlet exciton fission to break the Shockley-Queisser limit. Nature Reviews, 2:17063, 2017.Xia17a J. Xia, S. N. Sanders, W. Cheng, J. Z. Low, J. Liu, L. M. Campos, and T. Sun. Singlet fission: Progress and prospects in solar cells. Adv. Mater., 29:1601652, 2017.Casanova18a D. Casanova. Theoretical modeling of singlet fission. Chem. Rev., 118:7164–7207, 2018.Felter19a K. M. Felter and F. C. Grozema. Singlet fission in crystalline organic materials: Recent insights and future directions. J. Phys. Chem. Lett., 10:7208–7214, 2019.Shockley61a W. Shockley and H. J. Queisser. Detailed balance limit of efficiency of p-n junction solar cells. J. Appl. Phys., 32:510–519, 1961.Sanders15a S. N. Sanders, E. Kumarasamy, A. B. Pun, M. T. Trinh, B. Choi, J. Xia, E. J. Taffet, J. Z. Low, J. R. Miller, X. Roy, X.-Y. Zhu, M. L. Steigerwald, M. Y. Sfeir, and L. M. Campos. Quantitative intramolecular singlet fission in bipentacenes. J. Am. Chem. Soc., 137(28):8965–8972, 2015.Zirzlmeier15a J. Zirzlmeier, D. Lehnherr, P. B. Coto, E. T. Chernick, R. Casillas B. S. Basel, M. Thoss, R. R. Tykwinski, and D. M. Guldi. Singlet fission in pentacene dimers. Proc. Natl. Acad. Sci., 112(17):5325–5330, 2015.Lukman15a S. Lukman, A. J. Musser, K. Chen, A. Stavros, C. K. Yang, Z. Zeng, Q. Ye, C. Chi, J. M. Hodgkiss, J. Wu J., R. H. Friend, and N. C. Greenham. Tuneable singlet exciton fission and triplet-triplet annihilation in an orthogonal pentacene dimer. Adv. Funct. Mater., 25:5452–5461, 2015.Sakuma16a T. Sakuma, H. Sakai, Y. Araki, T. Mori, T. Wada, N. Tkachenko, and T. Hasobe. Long-lived triplet excited states of bent-shaped pentacene dimers by intramolecular singlet fission. J. Phys. Chem. A, 120:1867–1875, 2016.Sanders16a S. N. Sanders, E. Kumarasamy, A. B. Pun, M. L. Steigerwald, M. Y. Sfeir, and L. M. Campos. 
Intramolecular singlet fission in oligoacene heterodimers. Angew. Chem. Int. Ed., 55:3373–3377, 2016.Korovina16a N. V. Korovina, S. Das, Z. Nett, X. Feng, J. Joy, R. Haiges, A. I. Krylov, S. E. Bradforth, and M. E. Thompson. Singlet fission in a covalently linked cofacial alkynyltetracene dimer. J. Am. Chem. Soc., 138:617–627, 2016.Korovina18a N. V. Korovina, J. Joy, X. T. Feng, C. Feltenberger, A. I. Krylov, S. E. Bradforth, and M. E. Thompson. Linker-dependent singlet fission in tetracene dimers. J. Am. Chem. Soc., 140:10179–10190, 2018.Zirzlmeier16a J. Zirzlmeier, R. Casillas, S. R. Reddy, P. B. Coto, D. Lehnherr, E. T. Chernick, I. Papadopoulos, M. Thoss, R. R. Tykwinski, and D. M. Guldi. Solution-based intramolecular singlet fission in cross-conjugated pentacene dimers. Nanoscale, 8:101133, 2016.Sun16a T. Sun, L. Shen, H. Liu, X. Sun, and X. Li. Synthesis and photophysical properties of a single bond linked tetracene dimer. J. Mol. Struc., 1116:200–206, 2016.Basel17a Bettina S. Basel, Johannes Zirzlmeier, Constantin Hetzer, Brian T. Phelan, Matthew D. Krzyaniak, S. Rajagopala Reddy, Pedro B. Coto, Noah E. Horwitz, Ryan M. Young, Fraser J. White, Frank Hampel, Timothy Clark, Michael Thoss, Rik R. Tykwinski, Michael R. Wasielewski, and Dirk M. Guldi. Unified model for singlet fission within a non-conjugated covalent pentacene dimer. Nat. Commun., 8:15171, 2017.Margulies16a E. A. Margulies, C. E. Miller, Y. Wu, L. Ma, G. C. Schatz, R. M. Young, and M. R. Wasielewski. Enabling singlet fission by controlling intramolecular π stacked covalent terrylenediimide dimers. Nat. Chem., 8:1120–1125, 2016.Pun19a A. B. Pun, A. Asadpoordarvish, E. Kumarasamy, M. J. Y. Tayebjee, D. Niesner, D. R. McCamey, S. N. Sanders, L. M. Campos, and M. Y. Sfeir. Ultra-fast intramolecular singlet fission to persistent multiexcitons by molecular design. Nat. Chem., 11:821–828, 2019.Krishnapriya19a K.C. Krishnapriya, Palas Roy, Boregowda Puttaraju, Ulrike Salzner, Andrew J. Musser, Manish Jain, Jyotishman Dasgupta, and Satish Patil. Spin density encodes intramolecular singlet exciton fission in pentacene dimers. Nat. Commun., 10:33, 2019.Korovina18c N. V. Korovina, Pompetti N. F., and Johnson J. C. Lessons from intramolecular singlet fission with covalently bound chromophores. J. Chem. Phys., 152:040904, 2020.Hetzer18a C. Hetzer, D. M. Guldi, and R. R. Tykwinski. Pentacene dimers as a critical tool for the investigation of intramolecular singlet fission. Chem. Eur. J., 24:8245–8257, 2018.Chen18a M. Chen, Y. J. Bae, C. M. Mauck, A. Mandal, R. M. Young, and M. R. Wasielewski. Singlet fission in covalent terrylenediimide dimers: Probing the nature of the multiexciton state using femtosecond mid-infrared spectroscopy. J. Am. Chem. Soc., 140:9184–9192, 2018.Parenti20a K. R. Parenti, G. He, S. N. Sanders, A. B. Pun, E. Kumarasamy, M. Y. Sfeir, and L. M. Campos. Bridge resonance effects in singlet fission. J. Phys. Chem. A, 124:9392–9399, 2020.Wang22a Kangwei Wang, Guangwei Shao, Shaoqian Peng, Xiaoxiao You, Xingyu Chen, Jingwen Xu, Huaxi Huang, Huan wang, Di Wu, and Jianlong Xia. Achieving symmetry-braking charge separation in perylenediimide trimers: The effect of bridge resonance. J. Phys. Chem. B, 126:3758–3767, 2022.Purdy23a M. Purdy, P. Budden, K. Fallon, Cara N. Gannett, H. D. Abruña, W. Zeng, R. Friend, A. J. Musser, and H. Bronstein. Re-thinking dimer design principles with indolonaphthyridine intramolecular singlet fission. Chem. Eur. J., page e202301547, 2023.Majumder23a K. Mazjumder, S. Mukherjee, N. A. 
Panjwani, J. Lee, R. Bittl, W. Kim, S. Patil, and A. J. Musser. Controlling intramolecular singlet fission dynamics via torsional modulation of through-bond versus through-space couplings. J. Am. Chem. Soc., 145:20883–208896, 2023.Khan17b S. Khan and S. Mazumdar. Diagrammatic exciton basis theory of the photophysics of pentacene dimers. J. Phys. Chem. Lett., 8:4468–4478, 2017.Khan18a S. Khan and S. Mazumdar. Optical probes of the quantum-entangled triplet-triplet state in a heteroacene dimer. Phys. Rev. B, 98:165202, 2018.Trinh17a M. T. Trinh, A. Pinkard, A. B. Pun, S. N. Sanders, E. Kumarasamy, M. Y. Sfeir, L. M. Campos, X. Roy, and X.-Y. Zhu. Distinct properties of the triplet pair state from singlet fission. Science Advances, 3(7):e1700241, 2017.Miyata19a K. Miyata, F. S. Conrad-Burton, F. L. Geyer, and X.-Y. Zhu. Triplet pair states in singlet fission. Chem. Rev., 119:4261–4292, 2019.Pensack18a Ryan D. Pensack, Andrew J. Tilley, Christopher Grieco, Geoffrey E. Purdum, Evgeny E. Ostroumov, Devin B. Granger, Daniel G. Oblinsky, Jacob C. Dean, Grayson S. Doucette, John B. Asbury, Yueh-Lin Loo, Dwight S. Seferos, John E. Anthony, and Gregory D. Scholes. Striking the right balance of intermolecular coupling for high-efficiency singlet fission. Chem. Sci., 9:6240–6259, 2018.Masoomi-Godarzi20a S. Masoomi-Godarzi, C. R. Hall, B. Zhang, M. A. Gregory, J. M. White, W. W. H. Wong, K. P. Ghiggino, T. A. Smith, and D. A. Jones. Competitive triplet formation and recombination in crystalline films of perylenediimide derivatives: Implications for singlet fission. J. Phys. Chem. C, 124:11574, 2020.Kim18a H. Kim and P. M. Zimmerman. Coupled double triplet state in singlet fission. Physical Chemistry Chemical Physics, 20:30083–30094, 2018.Musser19a A. J. Musser and J. Clark. Triplet-pair states in organic semiconductors. Annu. Rev. Phys. Chem., 70:323–351, 2019.Scholes15a G. D. Scholes. Correlated pair states formed by singlet fission and exciton-exciton annihilation. J. Phys. Chem. A, 119:12699, 2015.Pensack16a R. D. Pensack, E. E. Ostroumov, A. J. Tilley, S. Mazza, C. Grieco, K. J. Thorley, J. B. Asbury, D. S. Seferos, J. E. Anthony, and G. D. Scholes. Observation of two triplet-pair intermediates in singlet exciton fission. J. Phys. Chem. Lett., 7:2370–2375, 2016.Lee18a T. S. Lee, Y. L. Lin, H. Kim, R. D. Pensack, B. P. Rand, and G. D. Scholes. Triplet energy transfer governs the dissociation of the correlated triplet pair in exothermic singlet fission. J. Phys. Chem. Lett., 9:4087–4095, 2018.Taffet20a E. J. Taffet, D. Beljonne, and G.D. Scholes. Overlap-driven splitting of triplet pairs in singlet fission. J. Am. Chem. Soc., 142:20040–20047, 2020.Hudson22a R. J. Hudson, A. N. Stuart, D. M. Huang, and T. W. Kee. What next for singlet fission in photovoltaics? J. Phys. Chem. C, 126:5369–5377, 2022.Parenti22a K. Parenti, R. Chesler, G. He, P. Bhattacharyya, B. Xiao, D. Malinowski, J. Zhang, X. Yin, A. Shukla, S. Mazumdar, M. Sfeir, and L. Campos. The role of quantum interference in intramolecular singlet fission. Nat. Chem., 15:339–346, 2022.Pariser53a R. Pariser and R.G. Parr. A semi-empirical theory of the electronic spectra and electronic structure of complex unsaturated molecules ii. J. Chem. Phys., 21:767–776, 1953.Pople53a J. A. Pople. Electron interaction in unsaturated hydrocarbons. Trans. Faraday Soc., 49:1375–1385, 1953.Ramasesha90a S. Ramasesha and I.D.L. Albert. Sudden polarization in interacting model π-systems: An exact study. Chem. Phys., 142(3):395 – 402, 1990.Chandross97a M. 
Chandross and S. Mazumdar. Coulomb interactions and linear, nonlinear, and triplet absorption in poly(para-phenylenevinylene). Phys. Rev. B, 55:1497–1504, 1997.Tavan87a P. Tavan and K. Schulten. Electronic excitations in finite and infinite polyenes. Phys. Rev. B, 36:4337–4358, 1987.Aryanpour15a Karan Aryanpour, Alok Shukla, and Sumit Mazumdar. Theory of singlet fission in polyenes, acene crystals, and covalently linked acene dimers. J. Phys. Chem. C, 119(13):6966–6979, 2015.SM See Supplemental Material at http://link.aps.org/ supplemental/xx.xxxx/ PhysRevLett.xxx.xxxxxx for further details of calculations and discussion of relevance to experiments.Berkelbach13b Timothy C. Berkelbach, Mark S. Hybertsen, and David R. Reichman. Microscopic theory of singlet exciton fission. ii. application to pentacene dimers and the role of superexchange. J. Chem. Phys., 138(11):114103, 2013.Yost14a S. R. Yost, J. Lee, M. W. B. Wilson, T. Wu, D. P. McMahon, R. R. Parkhurst, Nicholas J. Thompson, Daniel N. Congreve, A. Rao, K. Johnson, M. Y. Sfeir, M. G. Bawendi, T. M. Swager, R. H. Friend, M. A. Baldo, and T. Van Voorhis. A transferable model for singlet-fission kinetics. Nat. Chem., 6:492–497, 2014.Tayebjee17a Murad J. Y. Tayebjee, S. N. Sanders, E. Kumaraswamy, L. M. Campos, M. Y. Sfeir, and D. R. McCamey. Quintet multiexciton dynamics in singlet fission. Nat. Phys., 13:182–188, 2017.Chen19a M. Chen, M. D. Krzyaniak, J. N. Nelson, Y. J. Bae, S. M. Harveya, R. D. Schaller, R. M. Young, and M. R. Wasielewski. Quintet-triplet mixing determines the fate of the multiexciton state produced by singlet fission in a terrylenediimide dimer at room temperature. Proc. Natl. Acad. Sci. USA, 116:8178–8183, 2019.Sanders16b E. G. Fuemmeler, S. N. Sanders, A. B. Pun, E. Kumarasamy, T. Zeng, K. Miyata, M. L. Steigerwald, X. Y. Zhu, M. Y. Sfeir, L. M. Campos, and N. Ananth. A direct mechanism of ultrafast intramolecular singlet fission in pentacene dimers. ACS Cent. Sci., 2(5):316–324, 2016.
http://arxiv.org/abs/2310.17818v1
{ "authors": [ "R. Chesler", "P. Bhattacharyya", "A. Shukla", "S. Mazumdar" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20231026234411", "title": "Distinct contiguous versus separated triplet-pair multiexcitons in an intramolecular singlet fission chromophore" }
Utilizing Language Models for Energy Load Forecasting Flora D. Salim January 14, 2024 =====================================================empty emptyMany practically relevant robot grasping problems feature a target object for which all grasps are occluded, e.g., by the environment. Single-shot grasp planning invariably fails in such scenarios. Instead, it is necessary to first manipulate the object into a configuration that affords a grasp.We solve this problem by learning a sequence of actions that utilize the environment to change the object's pose. Concretely, we employ hierarchical reinforcement learning to combine a sequence of learned parameterized manipulation primitives.By learning the low-level manipulation policies, our approach can control the object's state through exploiting interactions between the object, the gripper, and the environment.Designing such a complex behavior analytically would be infeasible under uncontrolled conditions, as an analytic approach requires accurate physical modeling of the interaction and contact dynamics. In contrast, we learn a hierarchical policy model that operates directly on depth perception data, without the need for object detection, pose estimation, or manual design of controllers.We evaluate our approach on picking box-shaped objects of various weight, shape, and friction properties from a constrained table-top workspace. Our method transfers to a real robot and is able to successfully complete the object picking task in 98% of experimental trials. § INTRODUCTION State-of-the-art robotic grasping systems <cit.> function well in moderately cluttered scenes, but are fundamentally limited in assuming that objects are directly graspable — that is, that there always exists a collision-free grasp configuration within the reachability space of the robot arm. In practice, this assumption is often violated: for example, in cases when objects are tightly packed together, or placed in configurations that obstruct all feasible grasps (e.g., think of a book lying flat on a table). To address such practically relevant scenarios the robot arm needs to re-arrange objects in a non-prehensile manner, which poses unique challenges to perception, planning, and control.Current non-prehensile object re-arrangement approaches aim to overcome the stochastic and unpredictable nature of physical interaction through trial-and-error learning <cit.>.As reinforcement learning (RL) involving contact dynamics has prohibitively high interaction sample complexity, a common solution is to employ manually designed parametric controllers dubbed manipulation primitives <cit.>. However, this has two disadvantages compared to the more general end-to-end approaches: first, it limits applicability to only tasks that can be solved by combining the available primitives; and second, it necessitates expert input in designing, implementing, and tuning the primitive controllers.While some progress has recently been made in alleviating the first shortcoming through e.g., the use of atomic actions to “stitch” together primitives <cit.>, the need for expert input in primitive design still poses a major challenge. Instead of relying solely on manually-defined primitives or resorting to costly end-to-end RL, we take a middle ground and propose to learn hierarchical control policies whose actions are a series of parametrized learned manipulation primitives. 
This allows us to maintain the generality of full-scale RL while improving learning efficiency through the decomposition of tasks into several primitives, each associated with lower-dimensional state-action spaces.We apply our approach to solve a variation of the occluded grasping task <cit.> in which a robot arm equipped with a simple parallel jaw gripper needs to pick a flat object placed on a table-top (see Fig. <ref>).As the object is only graspable along approach directions that collide with the table, the task can only be solved via non-prehensile manipulation and interaction with the environment:the robot needs to push the object against one of the four boundaries, pivot to flip and finally grasp it from the top. However, successful execution of these actions is challenging as even minor errors can have significant negative consequences. For instance, a slight misalignment or inappropriate force can cause the object to tumble unpredictably during flipping.Manually designing primitives for these intricate actions would be a difficult and error-prone task, further highlighting the need for a more adaptive approach to addressing these challenges.We train our approach in simulation through curriculum learning <cit.> and apply Automatic Domain Randomization <cit.> to enable zero-shot transfer to the real world, achieving a picking success rate of 98% even when the object is placed in a random initial location. Our main contributions are thus: First, we propose a novel method for solving the occluded grasping task using hierarchical reinforcement learning.Second, we devise a curriculum learning strategy that allows us to train the low-level agents before progressing to high-level decision-making.Lastly, we demonstrate our method trained in simulation and achieve zero-shot transfer to a real-world robot experiment. § RELATED WORK§.§ Primitive-based Robotic ManipulationReinforcement learning for robot manipulation poses a significant challenge due to the difficulty of effectively exploring the high-dimensional continuous action space and the complexity of contact-based dynamics.To address these problems, early works <cit.> have explored reinforcement learning for manipulation using pre-defined primitives. Instead of exploring the high-dimensional continuous action space, their policies learn to estimate the appropriate primitive and its parameters, such as starting pose, moving distance, or rotation angle.Recent work <cit.> applies hierarchical reinforcement learning <cit.> for separating the primitive and the estimation of its parameters to improve performance. They use an atomic primitive that directly controls the end-effector pose to fill the missing gaps that cannot be fulfilled by the available primitives.Although these works demonstrate significant results in using primitives for manipulation tasks, they all rely heavily on manually designed primitives, which often requires human expertise and takes a significant amount of time and effort. In contrast, we learn the behavior of an extrinsic dexterity primitive for flipping flat objects by hierarchical reinforcement learning without designing it manually.§.§ Extrinsic Dexterity for ManipulationIn many practical applications, robots are equipped with parallel jaw grippers that are simple, but limited in dexterity, and thus often insufficient for accomplishing more complex tasks. 
Extrinsic dexterity <cit.> is one strategy to mitigate this issue by exploiting external resources such as gravity, external contacts, or dynamic motions for assisting manipulation. Early works <cit.> have proposed exploiting constraints imposed by the environment and manually designed controllers to grasp objects by sliding and pushing against a wall or sliding to the edge of a table. These approaches are, however, limited to controlled conditions and known environments.Current non-prehensile object re-arrangement approaches <cit.> utilize reinforcement learning to overcome the inherent unpredictability of physical interactions.However, they often suffer from prohibitively high interaction sample complexity and are constrained by their reliance on precise knowledge of the object pose, or the necessity for specialized policies for diverse objects. Recent works <cit.> learn a policy to grasp flat objects based on visual information. However, they rely on simple visual servoing to initiate grasping, assume the object position is given, or need a specific gripper design. In contrast, we use a standard parallel jaw gripper, based on visual information and without a given object position or grasp pose. Zhou and Held <cit.> are closely related to our work and also address grasping objects placed in unfavorable configurations through reinforcement learning.Their method focuses on learning a controller that is able to flip an object and acquire an initially occluded grasping configuration.However, the approach presented has several limitations: the target object needs to be placed very close to a wall, a target grasp configuration needs to be available, and the object pose has to be tracked through the interaction.In contrast, we combine different primitives to overcome the constraint on object and wall proximity and demonstrate our approach is able to efficiently solve the problem without access to a target grasp or object pose estimate. § METHOD We address the occluded grasping task by employing a variation of hierarchical Deep Q Networks (DQN) <cit.>.In our approach (see Fig. <ref>), a high-level agent is responsible for selecting a sequence of pose-parametrized manipulation primitives, each of which is in turn assigned to a low-level agent responsible for selecting appropriate primitive-specific actions. The goal of the high-level agent is to learn a policy that maps a sensor observation in the form of depth data to an appropriate pose-parametrized manipulation primitive.The low-level manipulation primitive constitutes a feedback controller that is learned through interaction. We use three manipulation primitives:a push primitive that achieves in-plane object motion;a flip primitive that uses contact with the environment to pivot an object, anda grasp primitive that picks directly graspable objects.These three primitives can be combined to solve the occluded grasping task and demonstrate extrinsic dexterity manipulation. For example, the high-level agent may first decide to use the push primitive to push the object to the wall, then use the flip primitive to pivot the object, and finally use the grasp primitive to grasp it. In this paper, we employ a low-level DQN agent to learn the complex flip primitive, while we design the other two primitives manually. 
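Before the individual components are described, the overall two-level decision loop can be summarised in the short Python sketch below. All names (the environment wrapper, the two Q-networks, the scripted push and grasp controllers) are placeholders introduced only for illustration and are not taken from the paper's implementation; the constants correspond to values quoted later in the text (K = 16 rotations, a low-level horizon of T = 35 steps, at most 10 primitives per episode).

import numpy as np

PUSH, FLIP, GRASP = 0, 1, 2   # primitive ids
K, T = 16, 35                 # discretised gripper rotations; low-level horizon

def select_high_level_action(q_maps, masks):
    """q_maps, masks: arrays of shape (3, K, H, W). Returns (primitive, rotation index, row, col)."""
    masked = np.where(masks > 0, q_maps, -np.inf)
    return np.unravel_index(np.argmax(masked), masked.shape)

def run_episode(env, high_q, low_q, max_primitives=10):
    """One picking episode: repeatedly choose a primitive and its starting pose."""
    for _ in range(max_primitives):
        height_map = env.height_map()                    # top-down depth projected to a height map
        q_maps = high_q(height_map)                      # Q values per primitive / rotation / pixel
        prim, k, y, x = select_high_level_action(q_maps, env.primitive_masks(height_map))
        env.move_to_start_pose(x, y, theta_z=2 * np.pi * k / K)
        if prim == FLIP:                                 # learned contact-rich primitive
            for _ in range(T):
                obs = env.low_level_state()              # (relative pose, f_d, f_max)
                env.apply_displacement(np.argmax(low_q(obs)))
        elif prim == PUSH:
            env.scripted_push()
        else:
            env.scripted_grasp()
        if env.object_grasped():
            return True
    return False

The only learned low-level component here is the flip policy; push and grasp remain scripted, as stated in the method overview above.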
§.§ Manipulation with Parameterized Primitives Observation and action spaces.The observation space of the high-level agent is a depth map of the workspace.The action space is composed of a pixel coordinate that corresponds to samples of the state-action-value function Q. Each pixel represents a starting pose (x, y, θ_i) and a primitive id ϕ, where(x, y) encodes the (x, y)-th pixel of the depth image;θ_i = 2π i / K encodes the i-th discrete end-effector rotation of K possible directions around the Z-axis (K=16 in this paper); while ϕ is a categorical choice between a group of primitives Φ, with Φ=3.The policy estimates a separate Q map for each possible primitive choice. The optimal action then corresponds to the pixel with the maximum Q-value, potentially within a masked region of interest as we discuss later on in this section. Based on the action, the robot moves to the starting pose and waits to execute the selected primitive. The height of the starting pose is decided by the height in the corresponding depth map (x, y)-th pixel.Reward.We train the high-level agent using a sparse reward for each primitive.Successful actions with the flip primitive and the grasp primitive are rewarded by r^H_f_t=1 and r^H_g_t=1 respectfully.The reward of successful push primitive actions is set to a value of r^H_p_t=0.2 on success and r^H_p_t=0.1 on a change in the workspace configuration. The values are kept lower than the rewards for flipping and grasping to discourage the agent from spending the entire episode just pushing the object.Executing a flip primitive is considered as success if the object is flipped up after applying the primitive.The grasp primitive succeeds when the target object is grasped at a certain position.Finally, the push primitive is successful if the object is pushed to a configuration near one of the four workspace walls. Because the reward function of the high-level agent is decided by the results of the primitive execution, we train the high-level agent together with the low-level agent.Primitive masks. We train the high-level agent with ε-greedy exploration and adopt the masking approach from Ren et al. <cit.> to improve learning efficiency. As the initial policy is likely to cause no change in the workspace state (e.g., if the robot does not touch any object), we encourage the agent to explore more widely by uniformly sampling an action that corresponds to one of the top ten Q values.To further improve learning efficiency, we apply primitive masks to reduce the region that the agent needs to explore. The idea is that the agent only has to explore pixels near the boundary of the target object.Thus, we calculate the mask from the height map: first, we calculate the grasp mask by checking for pixels above a threshold M_h; second, we devise the flip and push masks by diffusing an area around the grasp mask.With these optimizations, the high-level agent is ready to choose and apply a low-level action primitive. Model architecture. We use a Fully Convolutional Network (FCN) <cit.> as our high-level agent model, inspired by Ren et al. <cit.>.As indicated in Fig. <ref>, we pre-process the input depth image to obtain the state space for our high-level agent. First, we transform the depth image to a robot-centric coordinate frame and project it to a height map relative to the table surface.The height map is then rotated K times for the angles θ_i = 2π i / K and concatenated as batches of tensors that we pass through the FCN. 
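A short sketch of the pre-processing and masking just described is given below; the rotations and the mask dilation use scipy, and the height threshold M_h as well as the size of the diffused region are placeholders, since their numerical values are not specified in the text.

import numpy as np
from scipy import ndimage

K = 16  # number of discretised end-effector rotations around the Z-axis

def rotated_height_maps(height_map):
    """Stack the height map rotated by theta_i = 2*pi*i/K (in degrees: 360*i/K), as fed to the FCN."""
    return np.stack([ndimage.rotate(height_map, 360.0 * i / K, reshape=False, order=1)
                     for i in range(K)])

def primitive_masks(height_map, m_h=0.01, band_px=10):
    """Grasp mask from a height threshold; flip/push masks diffused around it (values are placeholders)."""
    grasp = height_map > m_h
    diffused = ndimage.binary_dilation(grasp, iterations=band_px)
    return {"grasp": grasp, "flip": diffused, "push": diffused}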
The FCN outputs three Q maps corresponding to the primitive choices for each rotated height map. After applying primitive masks to the corresponding Q maps, we sample the action by ε-greedy. Finally, the robot moves to the starting pose and waits to execute the selected primitive. §.§ Learning Behavior for A Contact-rich Primitive Overview. A key distinguishing feature of our approach is that we do not rely solely on expert-devised behavior primitives, but rather learn low-level action policies in conjunction with the high-level agent from the previous section. In this paper we train only one such low-level agent — for the flipping primitive — though extensions to multiple learnable low-level agents are in principle possible. Observation and action spaces. The action space for the low-level agent is a vector of the state-action-value function Q. Within this vector, each value corresponds to a specific end-effector displacement (d, z, θ_y), shown in Fig. <ref>. In this context: d ∈ {0, a_d} encodes the forward distance, z ∈ {0, a_z} encodes the vertical movement, and θ_y ∈ {-r_y, 0, r_y} encodes the angular deviation along the Y-axis relative to the primitive's initial pose. The observation space of the low-level agent is a combination of the end-effector pose and the contact force. We formulate the state as s_l = (p_a^l, f_d, f_max), where p_a^l ∈ ℝ^3 is the current end-effector task-space pose; f_d is the contact force along the d-axis; and f_max is the maximum of the current contact force. To facilitate efficient training, we develop an observation space designed to remain invariant to the starting pose of the low-level agent, ensuring state consistency across different initial poses. Reward. To learn the low-level action policy, we could in a straightforward manner define the reward sparsely on successful flips. However, as sparse rewards result in less efficient learning <cit.>, we choose to instead design a reward function that considers the contact force and end-effector position:

r_τ = r^H_f_t + r^L_τ ,
r^L_τ = min(σ, z_τ σ / w)   if f_c > 0 ,
r^L_τ = -1                  if f_c > f_limit ,
r^L_τ = 0                   otherwise,

where f_c is the current contact force, f_limit is the maximum safety contact force, z_τ is the current end-effector height, and σ and w are hyper-parameters that normalize the z_τ and limit the upper bound of the reward. Based on this reward function, we encourage the agent not only to flip up the object but also to raise it up with contact and avoid applying too much contact force. We find that without the penalty for applying too much contact force, the robot may trigger emergency stops in the real world, making the transfer of policies learned in simulation more challenging. Initial pose invariant state. To further improve the training efficiency, we ensure the low-level model doesn't need to adapt to various initial poses. We establish the starting pose provided by the high-level agent as the reference base frame for p_a^l. Consequently, p_a^l represents a projection of the end-effector pose with respect to the forward direction, vertical displacement, and rotation along the Y-axis, all based on the initial pose frame. Model architecture. For more details of our low-level agent, we employ a multilayer perceptron (MLP) as the underlying model. As indicated in Fig. <ref>, we combine p_a^l, f_d, and f_max as inputs to the MLP, which in turn outputs Q values corresponding to a specific end-effector displacement to control the robot.
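Transcribed into code, the shaped reward reads as follows. The values σ = 0.2 and w = 0.1 are the ones quoted in the experiments section, the safety limit f_limit is a placeholder, and we assume the penalty branch takes precedence whenever the contact force exceeds the limit, since the first two cases would otherwise overlap.

def flip_primitive_reward(z, f_contact, flip_success, sigma=0.2, w=0.1, f_limit=30.0):
    """Shaped reward for the learned flip primitive; f_limit is a placeholder value."""
    if f_contact > f_limit:        # unsafe contact force: assumed to override the contact bonus
        shaping = -1.0
    elif f_contact > 0.0:          # in contact: reward raising the end-effector, capped at sigma
        shaping = min(sigma, z * sigma / w)
    else:                          # no contact
        shaping = 0.0
    return (1.0 if flip_success else 0.0) + shaping   # r^H_f = 1 on a successful flip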
Once selected for execution, the low-level agent controls the robot iteratively, taking a fixed number of actions within a horizon T. §.§ Curriculum Learning and Domain Randomization We follow a curriculum learning <cit.> approach to train our high-level and low-level agents separately.Learning the high-level agent is hard when the low-level agent is not competent in its sub-task because the high-level reward is conditioned on whether the low-level task is completed successfully.We train the low-level agent first, devising progressively more complex interaction scenarios and only include the high-level model once the low-level policy can successfully flip objects.To adapt to the domain shift between simulation and the real world and perform zero-shot sim2real transfer, we employ Automatic Domain Randomization <cit.> during the simulation training. In each episode, we randomly sample the object size, friction, and mass to make our method able to address varied box-shaped objects. To overcome the noise of the height map in the real world, we add Gaussian noise and randomly block a few regions as fake reflections in the simulated depth image. § EXPERIMENTS §.§ Setup and Evaluation Metrics To demonstrate our model's ability to learn extrinsic dexterity, we use a Franka Emika Panda arm with a 2-finger gripper that does not open wide enough to grasp the target objects (shown in Table <ref>) from above directly. For the environmental setup, we use the inside of a 44.8 × 44.8 (cm) box as the robot's workspace, with the four boundaries serving as potential locations for performing an object pivot.An overhead depth image is captured by a Kinect v2 camera and transformed into a height map as input to the high-level model.To avoid the robot occluding the workspace, we move the robot to the bottom-left corner before acquiring a depth image. We use the completion rate as the evaluation metric and test 10 episodes for each object.An episode is considered successful if the robot picks up the object within a certain number of primitive actions.Similarly to above, the success rate for each primitive is the number of successful actions divided by the total number of attempted actions with that primitive, averaged over the 10 episodes. This metric does not apply to the baseline algorithms that are not based on manipulation primitives, and thus we omit it when discussing the performance of those algorithms and only report the overall task completion rate.§.§ Evaluation in SimulationWe use Isaac Sim to build a simulation for a variation of the occluded grasping task to train our model. The hyper-parameters of the low-level model's actions are set to a_d=0.5 cm, a_z=0.5 cm, and r_y=2 degrees. The maximum terminal step T in the low-level model is 35. In the reward function Equation <ref>, we set σ=0.2 and w=0.1.We train our method in simulation using flat objects of random friction, weight, and size. We randomly place an object in the workspace and evaluate whether the robot can pick it up, applying 10 primitives or less.Fig <ref> shows the learning curve of success rate versus training episodes. We measure the grasp success rate over the last 100 grasp attempts and use the same way to measure the flip success rate and full-task completion rate. In this experiment, we start to test the model after 500 episodes. We compare the training regime indicated above for our method (ED-PMP) with two baselines: SAC <cit.> and Rainbow DQN <cit.>. 
Additionally, we introduce an ablation — ED-PMP-MD — where the learned flip primitive is replaced by a manually designed version.To adapt the two baselines to our task, we set their action spaces as 6D end-effector movements, with SAC using a continuous version and Rainbow DQN employing a discrete variant. We also increase the maximum episode length from 10 to 40 steps for both baselines. Regarding ED-PMP-MD, the manually designed flip primitive comprises three stages. First, the gripper moves forward until contact is established. Next, it moves diagonally upward at a 45-degree angle while maintaining the contact force between 8 to 10 N by adjusting the upward and forward movements. Finally, it moves forward while adjusting the upward movement to maintain contact with a force of less than 10 N. As shown in Fig <ref>, both SAC and Rainbow DQN struggle to achieve successful object picking with such a limited number of samples. Comparing our method to ED-PMP-MD, both approaches attain an 80% task completion rate within 800 episodes.However, when analyzing the individual primitive success rates for the grasping (Fig. <ref> (b)) and flipping (Fig. <ref> (c)) primitives, we note that in both cases our full method results in higher success rates. Thus, while in simulation the overall performance of the two methods is comparable, for ED-PMP-MD this comes at the cost of repeated trials: the agent relying on a fixed flipping primitive needs more steps to complete the task.We also note that despite the two agents using an identical grasping primitive implementation, ED-PMP achieves substantially higher grasping success rates.We speculate that this is due to the learned flipping primitive manipulating the object into configurations that better afford top-down grasps: a synergy that arises thanks to the simultaneous learning at different hierarchical levels. To analyze how our agent makes a decision in the current state, we visualize the high-level model's Q maps in Fig <ref> for three consecutive actions executed in simulation.As the first action (Fig <ref> top row), the robot moves the object against the right workspace boundary by rotating the gripper -22.5^∘ around the z-axis and applying the push primitive.After pushing (middle row), the robot executes the flip primitive to pivot the object using the right workspace boundary as support. Finally, the robot rotates the gripper 67.5^∘ around the z-axis and executes a grasp primitive to pick up the object.When determining the second action (second row in Fig. <ref>), we note that there are very distinct peaks in the Q maps for the flip primitive. The majority of alternative actions for the flip primitive are valued at a uniform low level, indicating that there are only a limited set of primitive parameters that are likely to result in a successful flipping. A similar pattern can be observed for the grasping primitive selected as the third action (third row in the figure). In contrast, the Q maps for the push primitive at all three decision points consistently exhibit more uniformly bright pixels. This is due to the agent's ability to easily acquire rewards by pushing the object and re-arranging the scene. §.§ Real-world Experiments To evaluate our method, we test it with zero-shot transfer from simulation to a real-world setup using the box-shaped objects in Table <ref>.We use four different boxes (Box-0 to Box-3) but consider two configurations of Box-0 (with and without additional contents to make it heavier), which means we have five object types. 
We consider two setups: close, where the object is placed next to one random wall of the box; and random where the object is placed randomly near the center of the workspace. The results are shown in Table <ref>.We compare with Zhou and Held <cit.> as our baseline. Their method focuses on learning a controller that is able to flip an object and acquire an initially occluded grasping configuration. Their method achieves completion rates of 70% and 32% in the close and random setup scenes, respectively. It's worth noting that their method encounters significant challenges when the object is not in close proximity to the wall, indicating limitations in its ability to handle such scenarios effectively.In contrast, our approach with the manually designed flipping primitive (ED-PMP-MD) demonstrates 92% and 88% completion rates in the close and random setup, respectively. Further more, Our method with the learned low-level flipping primitive (ED-PMP) clearly outperforms the manually designed variant. Whether the object is placed close to a wall or randomly within the workspace, our method consistently attains high completion rates, ranging from 96% to 98%.We further evaluate our proposed method ED-PMP and ED-PMP-MD considering grasp success rate, flip success rate, and full completion separately.As shown in Table <ref>, the method relying on a learned flipping primitive can achieve better performances in terms of completion rate, grasp success rate, and flip success rate in both setups.Comparing the action efficiency between ED-PMP and ED-PMP-MD, the former requires on average 2.86 and 4.58 primitives to finish the task (in close and random, respectively) and the latter needs 3.18 and 5.84 primitives. The number of required primitives is significantly less in the close setup for both methods since the target object is already placed in a configuration amenable to flipping. One interesting observation is that, although the random setup is more difficult than close one, our method achieves a slightly better completion rate in the random setup. We infer the reason is that randomly placing the object in the workspace, as opposed to placing it close to the wall, enables the agent to move the object to a position where it has a higher chance of successfully picking it up. § CONCLUSIONIn this paper, we propose an approach for solving the occluded grasping task by learning hierarchical control policies that decompose the problem in two steps: choosing a sequence of parametrized manipulation primitives; and learning low-level control policies.To enhance learning efficiency, we devise a curriculum learning strategy to train the low- and the high-level agents sequentially.Our method transfers zero-shot to the real world and achieves a 98% task completion rate with varied box-shaped objects and a wide range of configurations in the real environment. Notably, compared to the state-of-the-art, we do not require the object to be placed near a supporting wall. With increasingly complex tasks, designing reward functions for numerous learned parameterized primitives can be challenging. Thus, a potential future direction is to remove the need for engineering reward functions when training the low-level agent. IEEEtran
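As a side note on the training procedure described earlier, the per-episode domain randomisation used to bridge the sim-to-real gap can be sketched as follows. Only the structure follows the text (randomised object size, friction and mass, plus Gaussian noise and a few blanked-out patches imitating reflections in the simulated depth image); every numerical range below is an invented placeholder.

import numpy as np

rng = np.random.default_rng(0)

def sample_episode_randomisation():
    """Per-episode randomisation of the box-shaped object (all ranges are placeholders)."""
    return {
        "size_xy_cm": rng.uniform(15.0, 30.0, size=2),
        "friction": rng.uniform(0.2, 1.0),
        "mass_kg": rng.uniform(0.1, 1.5),
    }

def perturb_depth(depth, noise_std=0.003, n_blocks=2, block_px=12):
    """Add Gaussian noise and blank out a few patches, mimicking sensor reflections."""
    noisy = depth + rng.normal(0.0, noise_std, size=depth.shape)
    h, w = depth.shape
    for _ in range(n_blocks):
        y, x = rng.integers(0, h - block_px), rng.integers(0, w - block_px)
        noisy[y:y + block_px, x:x + block_px] = 0.0   # treat the patch as invalid depth
    return noisy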
http://arxiv.org/abs/2310.17785v2
{ "authors": [ "Shih-Min Yang", "Martin Magnusson", "Johannes A. Stork", "Todor Stoyanov" ], "categories": [ "cs.RO", "cs.LG" ], "primary_category": "cs.RO", "published": "20231026212823", "title": "Learning Extrinsic Dexterity with Parameterized Manipulation Primitives" }
Experimental Validation for Distributed Control of Energy Hubs Varsha Behrunani^1,2, Philipp Heer^2 and John Lygeros^1 Received XXX; accepted YYY ============================================================== In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated in the same benchmark.The extent of the problem is unknown, as it is not straightforward to measure.Contamination causes an overestimation of the performance of a contaminated model in a target benchmark and associated task with respect to their non-contaminated counterparts. The consequences can be very harmful, with wrong scientific conclusions being published while other correct ones are discarded.This position paper defines different levels of data contamination and argues for a community effort,including the development of automatic and semi-automatic measures to detect when data from a benchmark was exposed to a model,and suggestions for flagging papers with conclusions that are compromised by data contamination. § INTRODUCTION At the core of NLP as a discipline, there is rigorous evaluation on different tasks. The experimental protocols involve strict control over the data, especially test data, which needs to be totally unseen during development, but also over training and development data. This is essential to assess the performance of a model in zero-shot, few-shot, or fully supervised settings. Since fine-tuning and prompting of Large Language Models (LLMs) became commonplace <cit.> it has been increasingly difficult to enforce those strict protocols. Pre-training LLMs is expensive, and therefore, most of the time, researchers use LLMs trained by third-party entities <cit.>, which are agnostic to the target tasks where those LLMs are going to be used.With the growing scale of LLMs  <cit.> the need for data has been solved by crawling the internet, reaching trillions of tokens <cit.>, and making it very hard to know whether a specific benchmark was used to train the LLM. This is applicable to all models, even if they document the source of the data at a high level, but especially for closed models with no or insufficient documentation.Data contamination has two consequences. The first one is that the performance of an LLM when evaluated on a benchmark it already processed during pre-training will be overestimated, causing it to be preferred with respect to other LLMs. This affects the comparative assessment of the quality of LLMs. The second is that papers proposing scientific hypotheses on certain NLP tasks could be using contaminated LLMs, and thus make wrong claims about their hypotheses, and invalidate alternative hypotheses that could be true. This second consequence has an enormous negative impact on our field and is our main focus.There are several measures that the community could take. A possible solution would be to avoid all research involving datasets which include published test data, and focus on datasets where the test data labels are not public. This solution will severely affect the number of NLP tasks for which benchmarks exist, at least until new benchmarks that avoid data leakage are produced.  <cit.> presents preventative strategies to avoid contamination in the future. 
In this position paper, we propose a complementary line of action which seeks to measure and document data contamination cases, specifying LLM, benchmark and evidence supporting contamination. This solution involves a registry of contamination cases[Such as the LM Contamination Index <https://hitz-zentroa.github.io/lm-contamination/>], collaborative manual work and research on automatic approaches. In addition, conferences should devise mechanisms to ensure that papers don't include conclusions involving contamination, and to flag past work where contamination has been discovered after publication.The paper starts by introducing background, followed by a definition of data contamination, contamination at different steps, methods to measure data contamination and a call for action.§ BACKGROUND Detection of contamination cases has been traditionally done by directly analyzing the training data  <cit.>, but the current scale of the pre-training data makes it difficult  <cit.>. Without proper documentation and search tools like ROOTS  <cit.> it is very difficult for any researcher to actually know whether their datasets are compromised on a given model. More recently, this task became even harder, as the best-performing LLMs are deployed as products, and therefore, their training corpora are kept secret. In this case, it has been shown that the high memorization abilities of LLMs can be used to generate portions of the training texts  <cit.>. Using this memorization property, <cit.> show that ChatGPT generates portions of popular NLP benchmarks. Furthermore, LLMs memorization has been studied on data-leakage scenarios  <cit.>.Regarding data contamination cases,  <cit.> exposed that the C4 corpus  <cit.>, a corpus used to pre-train several LLMs such as T5 <cit.>, contained the test splits of several benchmarks that were crawled from GitHub. Moreover, <cit.> acknowledged a bug in their filtering script that caused the contamination of several benchmarks during the GPT-3 training. Furthermore, <cit.> stated that parts of the BIG-bench <cit.> benchmark were inadvertently mixed into the training set, enough to stop them from evaluating the model on it. They also mention that they included parts of the training sets of MATH <cit.> and GSM-8K <cit.> as training data to improve mathematical reasoning <cit.>. Therefore, the performance results reported for GSM-8K cannot be taken as zero-shot results when compared to other models. Recently, <cit.> reported that several benchmarks have already been compromised in ChatGPT, including the popular CoNLL2003 <cit.>.There are several preprints that evaluate ChatGPT on CoNLL03 <cit.> and at least one conference paper published on ACL 2023 that evaluates GPT-3 <cit.> and Codex <cit.> on the same benchmark <cit.>. Appendix <ref> shows evidence for data contamination for those LLMs, and casts doubts on the conclusions of those papers.§ DEFINING DATA CONTAMINATION In general, data contamination refers to any breach in the strict control of datasets required by the experimental protocol. In this paper, we focus on the specific case where a LLM has processed the evaluation benchmark during its pre-training. However, different types of contamination exist and each of them has different implications. In this section, we present three types of contamination: guideline, text and annotation. Guideline contamination happens when the annotation guidelines for a specific dataset are seen by the model. 
Usually, for specialized annotations, highly detailed guidelines are required. The guidelines can usually be publicly found on the internet, even for datasets that are not public or require buying a license for their use, ACE05 <cit.> for example . The more details the guidelines have the more information and examples they provide. A model aware of the guidelines for a specific task or dataset has advantages over a model without such information. We should consider the guideline contamination, especially on zero and few-shot evaluations. Raw text contamination happens when the original text (previous to annotation) is seen by the model. Some examples of this type of contamination are the datasets based on Wikipedia texts. Wikipedia is commonly used as a source of pre-training data, but, it is also a frequent source of text to create new datasets. MultiCoNER 2 <cit.>, a Named Entity Recognition dataset based on Wikipedia links and Wikidata information, is an example of this phenomenon. Models that have already seen Wikipedia in its original form (including the markup annotations) have more information to better identify a part of the annotations (the entity boundaries) of the dataset. As pointed out by <cit.>, other datasets built from the web such as IMDB <cit.> and CNN/DailyMail <cit.> can be also compromised. This kind of contamination should be taken into account when developing automatically annotated datasets. Annotation contamination happens when the annotations (labels) of the target benchmark are exposed to the model during training. Depending on the splits of the benchmark that have been exposed, we can have the following cases: (1) When the evaluation split is involved, the experiment is completely invalidated. This is the most harmful level of contamination. (2) When the train or development splits are involved, this would not affect comparisons with other models that have been developed using those same splits, but it does invalidate conclusions claiming zero-shot or few-shot performance. § CONTAMINATION ON DIFFERENT STEPS Currently, the standard procedure to train and deploy language models has three main steps: pre-training a language model, fine-tuning the model to follow instructions and/or align with human feedback; and an iterative improvement step after deployment. Data contamination does not only occur in the pre-training step of LLMs, but can occur later in the training pipeline.§.§ Contamination during pre-training During the pre-training, there is a high chance that undesired data is fed to the model. Gathering huge amounts of text from the internet also has its counterpart: it becomes very hard to filter undesired data completely, and even deduplication is challenging <cit.>. Avoiding data contamination completely is not realistic, as it is impossible to know every dataset that the research community can test an LLM on. However, allowing the researchers to access and perform queries on the pre-training data may ensure that no corrupted evaluations are performed. In fact, keeping the pre-training data not available for LLM consumers may derive undesired influences on downstream tasks <cit.>.In addition, researchers building LLMs should avoid, at least, contamination from well-known standard benchmarks such as GLUE <cit.> or SuperGLUE <cit.>. As <cit.> showed, see their Table https://aclanthology.org/2021.emnlp-main.98.pdf2, various standard benchmarks were found in the C4 <cit.> corpus. 
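To make the kind of audit advocated in this subsection concrete, the sketch below estimates how much of a benchmark can be found in a pre-training corpus by exact n-gram matching. The 13-token window follows common decontamination practice in earlier LLM reports; the tokenisation and the streamed corpus interface are simplifications rather than a reference to any particular toolkit.

import re

def ngrams(text, n=13):
    tokens = re.findall(r"\w+", text.lower())
    return {" ".join(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 0))}

def benchmark_overlap(benchmark_examples, corpus_documents, n=13):
    """Fraction of benchmark examples sharing at least one n-gram with the corpus."""
    index = {}
    for i, example in enumerate(benchmark_examples):
        for gram in ngrams(example, n):
            index.setdefault(gram, set()).add(i)
    flagged = set()
    for document in corpus_documents:      # the corpus can be streamed one document at a time
        for gram in ngrams(document, n):
            if gram in index:
                flagged.update(index[gram])
    return len(flagged) / max(len(benchmark_examples), 1)

Exact matching only gives a lower bound on contamination; paraphrased or re-formatted copies of a benchmark would not be caught this way.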
§.§ Contamination on supervised fine-tuning The supervised fine-tuning or instruction-tuning step is another step where contamination can occur. Nevertheless, it is much less frequent as it is a required practice in the research community to document the training data in order to publish your findings. As an example of those, we can find the FLAN dataset collection <cit.>, OPT-IML Bench <cit.>, Super-Natural Instructions <cit.>, the P3 collection <cit.> and so on.Recently, more and more machine-generated text is being used to fine-tune language models. Some examples of these are Self-Instruct <cit.>, Unnatural Instructions <cit.>, Alpaca Data <cit.> and ShareGPT <cit.>. The aim of those datasets is usually to make public and smaller white-box models imitate black-box models such as ChatGPT <cit.>. However, the distillation of a closed teacher model with clear signs of contamination is an issue. More alarming, is the case that popular crowd-sourcing methods like MTurk have started using LLMs to generate data that was supposed to be manually generated <cit.>. §.§ Contamination after deployment The last step where the models can be exposed to contamination is applied mostly on LLMs as service products. With the recent improvements in the quality of LLMs, the models that were supposed to be part of bigger products become products by themselves (ChatGPT or Bard for example). It is worth noting that, although they are closed models, i.e. no information is known about the architecture or training details, the research community has evaluated them on standard benchmarks (<cit.>; among others). The monetary success of closed systems is closely tied to the performance of the model. Therefore, companies have a strong incentive to audit user inputs and retrain their system when the performance in a task is determined to be poor. Those models that are actually being accessed via API calls have been iteratively improved with user input, leading to evaluation data exposure. As a result, the models became aware of the testing data, at the point that you can easily recreate the dataset as we discuss in Section <ref> (see examples in Appendix <ref>). § MEASURING DATA CONTAMINATION For the reasons we already mentioned, it is necessary to measure the existent data contamination cases and to document relevant contamination evidence. In order to achieve this goal, we differentiate two cases. In the first case, we would have open models where there is public access to all the training data, including text used in pre-training, but also, if the LLM was trained on them, instruction tuning datasets and deployment datasets. In the second case, we would have closed models for which there is no access to training data. §.§ Open LLMsMost of the research on data contamination has been focused on analyzing pre-training data with string-matching operations <cit.>, as this provides direct evidence that the LLM was contaminated. Pre-training datasets are unwieldy large, and string-matching operations can be very slow at this scale. Therefore, several tools for data auditing have been released recently: The ROOTS Search Tool <cit.> and Data Portraits <cit.> among others. As an example of their usefulness,  <cit.> found that BLOOM <cit.> should not be evaluated on XNLI <cit.> due to contamination. These tools should be made available for all open LLMs, in order to allow for contamination case discovery.In addition, there is no currently agreed-upon methodology to measure the level of contamination. 
For cases where the full benchmark is not found, we propose to measure the level of data contamination usingbenchmark data overlap, that is, the percentage of the benchmark that can be found in the pre-training dataset <cit.>. §.§ Closed LLMs Despite most of the recent popular models like LLaMA <cit.>, GPT-4 <cit.> or Bard have not publicly released their pre-training data, very few works have actually worked on detecting data-contamination when the pre-training data is not available <cit.>. Although this scenario is much more challenging than the former, we foresee that it will become the most prevalent. Developing methods to measure the data contamination in this scenario must be crucial for future evaluations. To tackle this problem, we propose to take advantage of LLM's memorization capabilities. Appendix A shows some examples of using memorization to uncover data contamination for the CONLL2003 benchmark on three LLMs. In cases where the LLM does not produce the benchmark verbatim, it is left to the auditor to examine the output and judge whether the evidence supports contamination. The process is totally manual and could be scaled in a community effort. Alternatively, automatic metrics for measuring data contamination levels could be developed.As an initial step in this direction, we reuse and adapt the extractability definition presented in <cit.> for defining memorization. We define that an example s is extractable from evaluation dataset d and model m if there exists a sequence of k examples x immediately preceding s in d data such that s is generated when prompting model m with x. We can define the degree of contamination of model m for dataset d as the ratio of extractable examples with respect to the total number of examples in the dataset. One further question remains to be solved which is whether the lack of memorization of a benchmark ensures that the LLM was not trained on that benchmark. One hypothesis could be that the lack of memorization is correlated with the performance, even if the LLM was trained on the benchmark. Thus the LLM would not have any advantage with respect to another LLM that was not trained on the benchmark. This is currently speculation, so further research on this topic is necessary, given the extended use of closed LLMs in NLP research.§ CALL FOR ACTION We want to encourage the NLP community to: (1) Develop auto- or semi-automatic measures to detect when data from a benchmark was exposed to a model; (2) Build a registry of data contamination cases, including the evidence for the contamination; (3) Encourage authors to use the previous tools to ensure that the experimental protocol avoids data contamination to the extent possible; and (4) Address data contamination issues during peer review, and, in the case of published works, devise mechanisms to flag those works with the relevant evidence of data contamination and how data contamination affects the conclusions. As the problem affects our entire field, we also want to encourage the community to participate in workshops related to this topic, as for example, the 1st Workshop on Data Contamination[<https://conda-workshop.github.io>]. We think that developing the ideas that will arise from this community will play an important role in future NLP evaluations. § LIMITATIONSIn this paper, we address the problem of data contamination that occurs when evaluating LLMs on standard academic benchmarks. 
However, we are aware that there could exist other issues in current evaluations, but, they are out of the scope of this position paper. Related to our proposed solutions, we are aware that these are early-stage solutions and that the proposed effort is really challenging, therefore we call for further discussion and research on topics related to this issue.§ ACKNOWLEDGEMENTS This work has been partially supported by the Basque Government (Research group funding IT-1805-22) and the Spanish Government (ILENIA project). Oscar Sainz, Iker García-Ferrero, and, Julen Etxaniz are supported by doctoral grants from the Basque Government (PRE_2023_2_0137, PRE_2022_2_0208, and, PRE_2023_2_0060, respectively).acl_natbib§ EMPIRICAL DEMONSTRATIONS OF CONTAMINATIONThis section contains a few empirical demonstrations of contamination that were memorized by 3 different models: WizardCoder <cit.>, ChatGPT and GitHub Copilot. As can be seen in Figures <ref>, <ref> and <ref> all three models are able to perfectly generate back the first lines of the CoNLL03 dataset training split. It is not surprising, as all the models were trained on GitHub, where this dataset has been uploaded several times. §.§ Data contamination reported by other works Most of the data contamination analyses have been performed by the authors of LLMs. In the following list, we mention the different data contamination reports we are aware of:* GPT-3 <cit.>: Appendix C (arXiv version)* GPT-4 <cit.>: Appendix C* LLaMA 2 <cit.>: Appendix A.6* FLAN <cit.>: Appendix C* <cit.>: Section 4.2* GLaM <cit.>: Appendix D An updated version can be found in the LM Contamination Index.
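The extractability-based score defined in the section on closed LLMs can be sketched as follows. The generate callable stands for whatever completion interface the audited model exposes (its name and signature are placeholders), exact substring matching is the simplest possible success criterion, and the dataset examples are assumed to be plain strings kept in their canonical order.

def extractable_fraction(examples, generate, k=3, max_new_tokens=256):
    """Share of examples reproduced when the model is prompted with the k preceding ones."""
    hits = 0
    for i in range(k, len(examples)):
        prompt = "\n".join(examples[i - k:i]) + "\n"
        completion = generate(prompt, max_new_tokens)   # placeholder model API
        if examples[i].strip() and examples[i].strip() in completion:
            hits += 1
    return hits / max(len(examples) - k, 1)

In practice one would also want fuzzier matching (for instance normalising whitespace or measuring longest common substrings), since verbatim reproduction is only a sufficient, not a necessary, sign of exposure.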
http://arxiv.org/abs/2310.18018v1
{ "authors": [ "Oscar Sainz", "Jon Ander Campos", "Iker García-Ferrero", "Julen Etxaniz", "Oier Lopez de Lacalle", "Eneko Agirre" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231027094829", "title": "NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark" }
eqgroup utphysempty Membranes, holography, and quantum information 1cmVassilis Papadopoulos Laboratoire de Physique de l'École Normale Supérieure, ENS, Université PSL CHAPTER: ACKNOWLEDGMENTSWhile an essential part of any respectable thesis, the "acknowledgments" section feels more like a trap. This is because, while the people that are mentioned on here will be very happy with their inclusion, those that are not will be sorely disappointed. It does not help one bit that 99% of people that will get their hands on this manuscript will read approximately 1% of it, this 1% being the "acknowledgments" section[N.B. : I don't blame them]. As literal hours[Not a lot of hours] separate me from the deadline as I write this final section, I am afraid that I will not be able to include the full list of people deserving to appear on this page. Thus, if you feel that your name should have been mentioned, know that it is not an oversight on my part just a lack of time, so you can redirect your complaints to the EDPIF for setting the deadline exactly on the day I am finishing the writing.With this introduction out of the way, I would like to begin by thanking my parents, as it is their own love of mathematics and physics that was infused in me from a young age that ultimately lead me to this moment, submitting my PhD thesis (or at least its first draft). I should probably also thank them in advance, as they do not know it, but they will be the main contributors to my pot de thèse (they cook much better than I do). I should thanks also my brother and sister for being a great company ever since I can remember (which is however not reason enough not to go through the entire manuscript, so get to work).I would like also to thank my girlfriend Pauline, which has had to endure me during these last three grueling PhD years. I know it must not have been easy to deal with my random and completely unconventional working hours, as well as my periodic obsessions with random subjects. She has always been present in the difficult and less difficult moments and has made the last three years much better than they would have been without her. Unfortunately, little does she know that my terrible working hours are not due to the fact I was preparing a Ph.D, but simply because I am unable to manage my time. I hope that this terrible revelation will not make her break up.I should also thank my closest colleagues, with which I have shared about half of my waking time, at least when Covid wasn't in the way. On the one side, we have had myriad of interesting scientific discussions, and they certainly have elevated the quality of the work presented in this thesis. On the other, they have always been a great company, always filled with self-made memes, (less than) subtle jokes, and a few (too many) beers. It would take a full manuscript if I had to make a little clever joke about each one, so I will just list the names (randomly shuffled by use of a quantum random number generator) : Manuel, Augustin, Ludwig, David, Arthur, Hari, Zechuan, Gabriel, Farah, Gauthier, Marko, X. (please replace the X with your name if you have been unjustly forgotten. I swear this is not a judgment of your quality, I just forget most important things). I should also thank my fellow "PhD students in high energy", that I have met mainly during the Solvay school, but also throughout my PhD when Covid allowed it. 
They are way too many to cite, but each of them contributed to this thesis by way of (often heated) discussions.My thanks also go to my friends from my previous studies, from EPFL and ENS. Although some have been lost to time and distance, all of them have contributed to making my studies more enjoyable and fun. I will cite them here in order, according to the probability that they read this paragraph of my thesis: Robin, Federico, Stéphane, Croquette, Simon, Adrien, Juliette, Arthur, Simon, André, Basile, Hortense, Antoine, Adrien, X.Another subset of friends which I must cite here is my friends from school. This is because they firmly believe that it is thanks to the countless hours I have spent during our high-school years explaining to them our math lessons that have prepared me for this PhD. Thank you Javier, Felix, Matteo, Mattia, Eleonore, Raphael, Chloe, X.Besides my friends, I should also thank the people that have contributed scientifically to my PhD, be it through discussion, e-mail exchange, or even anonymous review. Again, citing them all would require a whole new manuscript so I will abstain myself, but I would like to especially thank Pietro, Marco, and Julian for very stimulating discussions during my short visit in Geneva, as well as Zhongwu who was an amazing collaborator. Special thanks also to Mario, Frederic and Guilhem, with which it was a great pleasure to teach.I must of course thank the jury, composed of Marco Meineri, Shira Chapman and Giuseppe Policastro and in particular the "rapporteurs", Marios Petropoulos and Johanna Erdmenger, which will have to go through this manuscript in detail. I may be wise to increase my chances to render this paragraph of thanks conditional to me being awarded the PhD.Finally, I would of course like to thank my PhD advisor, Costa, which has been my main scientific reference and collaborator for the past three years (indeed the tradition in string theory is for the advisor to take only one PhD student at a time, a Padawan, one might say). Despite Covid preventing live discussions for a while, he managed to always be there (through the magic of Zoom) when I had questions and offered great insights which always managed to unblock me if I ever got stuck. I remember always leaving with dozens of possible directions to explore after every talk we had, although I must say that sometimes his expectations of my abilities were a bit too high, especially when it comes to numerical computation. CHAPTER: ABSTRACTIn this thesis, we study Interface Conformal Field Theories (ICFT) and their holographic dual, which is composed of two asymptotically Anti-de-Sitter (AdS) spaces glued through a thin gravitating membrane, or domain wall. We restrict our study to simple minimal models, which allow for analytic control while providing universally applicable results. Our analysis is set in 2D ICFT/3D gravity, but we expect much of the results to be generalizable to higher dimensions. We first consider this system at equilibrium and at finite temperature, in the canonical ensemble. By solving the equations of motion in the bulk, we find the allowable solution landscape, which is very rich compared to the same system without an interface. Classifying the different solutions among 3 thermodynamical phases, we draw the phase diagram outlining the nature of the various phase transitions. We then examine a simple out-of-equilibrium situation arising as we connect through an interface two spatially infinite CFT's at different temperatures. 
As we let them interact, a "Non-Equilibrium Steady State" (NESS) describes the growing region where the two sides have settled into a stationary phase. We determine the holographic dual of this region, composed of two spinning planar black holes conjoined through the membrane. We find an expression for the out-of-equilibrium event horizon, highly deformed by the membrane, becoming non-killing. This geometry suggests that the field theory interface acts as a perfect scrambler, a property that until now seemed unique to black hole horizons.Finally, we study the entanglement structure of the aforementioned geometries by means of the Ryu-Takayanagi prescription. After reviewing a complete construction in the vacuum ICFT state, we present partial results for more general geometries and at finite temperature. For this purpose, it is necessary to introduce numerical algorithms to complete the computation. We outline the main difficulties in their application, and conclude by mentioning the Quantum Null Energy Condition (QNEC), an inequality that links entanglement entropy and energy, that can be used to test the consistency of the models. CHAPTER: INTRODUCTIONEver since the seminal paper of Coleman and De Luccia <cit.>, studies involving thin gravitating domain walls have appeared in a multitude of different contexts. For instance, they have been used in attempts to provide an alternative to compactification, by localising gravity on lower dimensional "brane-worlds"<cit.>. They also appeared in efforts to embed inflation and de Sitter geometries in string theory, by studying inflating bubbles<cit.>, while they also enter in some of the swampland conjectures<cit.>.More recently, they played an important role in toy models of black hole evaporation, in which an AdS black hole is connected to flat space to allow for its evaporation <cit.>. In this thesis we explore yet another facet of these gravitating walls, as holographic duals of conformal interfaces<cit.>. As such, they act as the border between two AdS space of possibly different radii. In the full UV complete version of the duality, the walls would presumably be smooth, interpolating continuously between the two different spacetimes. We will make the simplifying assumption that the transition region is "thin", meaning it cannot be resolved at the energy scales we will consider. In this "bottom-up" approach, we consider an effective model, and posit the duality on the grounds that there exists some UV theory from which this model descends. The pros of such a philosophy is that it allows for much more freedom on the model that we consider, the cons being of course that we are not assured that the holographic duality is applicable. Nonetheless, experience, the wealth of examples of holographic dualities, as well as independent checks of results seem to point to the usefulness of such bottom-up models.The main interest of this thesis is focused on studying the holographic dual of a two-dimensional Interface CFT, which is composed of two asymptotically AdS spacetimes connected through a thin membrane/wall. We do not consider a particular realization of this duality, but rather focus only on universal quantities. This as the advantage of offering results that are applicable to any example of such a system, while keeping things sufficiently simple to have an analytical handle on them. The initial driving goal in considering such models was for their application in the Island constructions<cit.>. 
However, they are also powerful playgrounds to study aspects of the holographic duality, such as the Ryu-Takayanagi (RT) conjecture<cit.>. In addition, they offer deep and unexpected insights into the behavior of ICFT at large coupling, with potential applications in condensed matter physics. The richness of such seemingly simple models has been a great surprise in these three years of study, and certainly much is yet to be discovered about them.
I begin in chap.<ref> by reviewing the essential tools that will be needed to understand our work, as well as the motivations behind it. At the risk of being pedantic, I decided to start at the very basics, trying to have in mind the material that a student just starting in the field would need in order to grasp the rest of the work. In secs. <ref>-<ref> I review general facts about asymptotically Anti-de-Sitter spaces, which constitute the gravitational half of the holographic duality. I introduce black hole solutions, and explain their thermodynamics, emphasizing the three-dimensional case that will be of particular interest. Then, in sec.<ref> I define and review the basic tools of Conformal Field Theory, after which I briefly describe in sec.<ref> the modifications that occur when one introduces an Interface. We focus the review on the universal properties of such models, which is what is needed to formulate the minimal models. Having introduced both sides of the duality, in sec.<ref> I formulate the AdS/CFT correspondence, only mentioning the most crucial results. The next section <ref> presents the "bottom-up" approach to holography, where tailor-made ("sur mesure") models are considered as effective theories descending from a precise realization of the duality. I present the main ingredients of the "minimal" version of the duality in the case of ICFT. In sec.<ref> we introduce entanglement entropy, and its role in QFT and CFT. I proceed by presenting the holographic way of computing it, by means of the Ryu-Takayanagi prescription, and mention its quantum-corrected version. I finish in sec.<ref> by outlining a direct application of these corrections, culminating in the "Island formula", which allows the computation of the fine-grained entropy of Hawking's radiation. I sketch how this formula seems to resolve the black hole information paradox by recovering a unitary evaporation.
Having all the tools in hand, in chap.<ref> I study the minimal ICFT model at finite temperature and at equilibrium through its holographic dual, which is composed of two AdS bulks (slices) connected along a membrane. I derive and solve analytically the Israel equations determining the shape of the gravitating membrane, which is dual to the interface. I classify the obtained solutions into three distinct thermodynamic phases, Hot, Warm and Cold, which are differentiated by the presence or absence of a black hole, and by whether it intersects the membrane. A further phase structure is obtained by looking at the number of rest points for inertial observers (which acts as an order parameter), although I show that these are not thermodynamic in nature. We perform an analysis à la "Hawking-Page"<cit.>, where the canonical parameters are the temperature as well as the relative size of the two CFTs on the boundary. The competing bulk solutions include gravitational avatars of the Faraday cage, black holes with negative specific heat, and an intriguing phenomenon of suspended vacuum bubbles corresponding to an exotic interface/anti-interface fusion.
With the help of a numerical algorithm, I determine the dominant one at each point in phase space, displaying the phase diagram of the system for some chosen examples of the ICFT parameters.In chap.<ref>, I consider the same ICFT model, but allow now for out-of-equilibrium solutions. I restrict to the tractable case of a non-equilibrium stationary state (NESS) which allows for an analytical resolution of the Israel equations, which we exhibit. Focusing first on the case of a single interface, the holographic dual contains a wall that necessarily falls into the flowing horizon. This restriction allows the recovery of the energy-transmission coefficients of the dual interface, which had already been obtained perturbatively <cit.>. By inspecting the dual horizon, we argue that by entangling outgoing excitations the interface produces entropy at a maximal rate, a surprising property that is usually exclusive to black hole horizons, but that could appear because of the strong coupling. Of great interest is also the far-from-equilibrium, non-killing event horizon in the bulk, which is highly deformed by the introduction of the wall, sitting behind the apparent horizon on the hotter side of the wall. We finish by looking at the thermal conductivity of a pair of interfaces, which jumps discontinuously when the wall exits the horizon, transitioning from a classical scattering behavior to a quantum regime in which heat flows unobstructed.In the final chap.<ref>, we present partial results on computations of (H)RT surfaces in the context of the geometries described in the previous chapters. For 3D bulks, (H)RT surfaces are simply spatial geodesics, thus we begin by showing how to compute them in any locally AdS geometry, by operating a coordinate change to the Poincaré patch. Following this train of thought, we review in detail the construction of RT surfaces in the vacuum ICFT state<cit.>, and comment on their relation with the sweeping transition of chap.<ref>, as well as the Island construction. We also show how to bootstrap it to compute the entanglement structure in more general states. We move to the application of the (H)RT prescription to the NESS state of chap.<ref>, concluding it requires the use of numerical methods and outlining some promising algorithms. We finish by introducing the QNEC and mentioning why it would be interesting to consider it in the ICFT geometries.We end with a conclusion, reviewing the work from a broader perspective, and pointing out possible future research directions.CHAPTER: BASICS OF HOLOGRAPHYHolography, also known as "AdS/CFT correspondence" or "Gauge-gravity duality", was first discovered by Maldacena <cit.> in 1997, and is one of the essential tools that has driven progress in Quantum Gravity in the 21st century. In a nutshell, holography describes an equivalence between a String theory in D-dimensions, and a quantum field theory in (D-1)-dimensions. We describe in this section the minimal ingredients needed to formulate this correspondence. We begin by describing Anti-de-sitter space. We then give a very briefreview of some concepts in Conformal Field Theory, and what happens we introduce an Interface. Equipped with the necessary concepts, we succinctly describe Maldacena's derivation and set the stage for the "minimal" version of the holographic correspondence that we will be using extensively. After that, we describe an application: how to compute entanglement entropies in CFT using the "Ryu-Takanayagi prescription", an holographic technique. 
Finally, we briefly present some recent progress toward the resolution of the black hole information paradox, which is based on the discovery of the "Island formula" for the entanglement entropy.
§ ANTI-DE-SITTER SPACE, GENERALITIES
Anti-de-Sitter (AdS) space can be efficiently described as the maximally symmetric Lorentzian manifold with (constant) negative curvature. Maximally symmetric Lorentzian manifolds of zero and positive curvature are respectively Minkowski space and de Sitter space. We will be mainly interested in the negatively curved case, but efforts to develop some kind of holography in the other spacetimes are ongoing <cit.>. Anti-de-Sitter space arises in gravity as a solution of the vacuum Einstein equations with a negative cosmological constant, usually denoted Λ. The equations can be derived from the Einstein-Hilbert action, in D spacetime dimensions :
S = 1/(16π G_D) ∫_M d^Dx √(|g|)(R-2Λ) + 1/(8π G_D) ∫_∂M d^(D-1)y √(|h|) K .
The integration is done on a Lorentzian manifold M, with boundary ∂M. We denote by g = det g_μν the determinant of the metric, R the Ricci scalar of g_μν and Λ the cosmological constant, which we will take to be negative. We include the boundary term that is necessary to make the variational problem well-defined, where h_μν is the metric induced on ∂M, assumed to be timelike, and K=K_μν h^μν is the trace of the extrinsic curvature[See Appendix <ref> for more details on the definition of extrinsic curvature]. Varying (<ref>), we recover Einstein's equations with a cosmological constant in the vacuum :
G_μν = R_μν - 1/2 g_μν R + Λ g_μν = 0 .
Contracting with g^μν we can compute the value of the Ricci scalar. For the specific case of AdS (which is maximally symmetric), we can exploit this fact to recover the full Riemann tensor :
R = 2Λ D/(D-2) ≡ -(D-1)D/ℓ^2 ,  R_ρσμν = -1/ℓ^2 (g_ρμ g_σν - g_ρν g_σμ) ,
where we introduced the "AdS radius" ℓ, which is the characteristic length scale of the AdS spacetime. Another, more useful way of thinking about this spacetime is by embedding it in D+1 dimensions. Consider a flat spacetime of signature (-,+,...,+,-), with D-1 plus signs. Denoting its coordinates by X^M, the metric reads :
ds_D+1^2 = -(dX^0)^2 - (dX^D)^2 + (dX^1)^2 + ... + (dX^(D-1))^2 ≡ η_MN dX^M dX^N .
The isometry group of this spacetime is the Poincaré group in D+1 dimensions (with the appropriate signature). We would now like to find a hypersurface that preserves the O(2,D-1) symmetry while breaking the translations. In that way, the induced metric will automatically have a D(D+1)/2 dimensional isometry group (the dimensionality of O(2,D-1)) and thus it will be maximally symmetric. Taking this into account, it is easy to see that the sought-out surface is of the form :
X^M X_M = -(X^0)^2 - (X^D)^2 + (X^1)^2 + ... + (X^(D-1))^2 = -ℓ^2 .
The advantage of working with this embedding is that it makes explicit many important coordinate systems, according to how we decide to parametrize the embedded surface. Note that the D(D+1)/2 Killing vectors inherited from the rotations and boosts of the embedding space are simply :
J_MN = X_M ∂/∂X^N - X_N ∂/∂X^M ,  0 ≤ M < N ≤ D .
Here we defined X_M = X^K η_KM, thus depending on the nature of X^M and X^N (spacelike or timelike), J_MN will either induce boosts or rotations.
§.§ Global coordinates
The first set of coordinates that will be of interest is the so-called "global" coordinate system. It parametrizes the hypersurface as :
X^0 = ℓ cosh ρ cos τ ,  X^D = ℓ cosh ρ sin τ ,  X^i = ℓ sinh(ρ) Ω^i  (i ∈ 1,...,D-1) ,
where the Ω^i are coordinates describing the (D-2)-dimensional unit sphere, ∑_i Ω_i^2 = 1. They can of course be explicitly parametrized by D-2 angles.
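These statements are straightforward to check by hand, and they can also be verified symbolically. As a minimal sketch (assuming a Python environment with sympy, which is not part of the derivation itself, and specializing to D=3 for concreteness), one can confirm that the parametrization satisfies the embedding constraint and reproduces the induced metric quoted below:

import sympy as sp

l, rho, tau, phi = sp.symbols('ell rho tau phi', positive=True)

# Embedding signature (-, +, +, -): X^0 and X^3 are the two timelike directions.
eta = sp.diag(-1, 1, 1, -1)
X = sp.Matrix([l*sp.cosh(rho)*sp.cos(tau),
               l*sp.sinh(rho)*sp.cos(phi),
               l*sp.sinh(rho)*sp.sin(phi),
               l*sp.cosh(rho)*sp.sin(tau)])

# Embedding constraint X^M X_M = -ell^2
assert sp.simplify((X.T*eta*X)[0, 0] + l**2) == 0

# Induced metric g_ab = (dX^M/dxi^a)(dX^N/dxi^b) eta_MN with xi = (tau, rho, phi)
coords = [tau, rho, phi]
J = X.jacobian(coords)
g = (J.T*eta*J).applyfunc(sp.simplify)
expected = sp.diag(-l**2*sp.cosh(rho)**2, l**2, l**2*sp.sinh(rho)**2)
assert (g - expected).applyfunc(sp.simplify) == sp.zeros(3, 3)
print("global coordinates: constraint and induced metric verified")

The same check goes through in higher dimensions at the cost of introducing more sphere angles.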
One can easily verify that (<ref>) is a solution to (<ref>), and that it covers the full hypersurface, explaining the name "global" for this coordinate system. The induced metric then takes the form :
ds^2 = ℓ^2(-cosh^2ρ dτ^2 + dρ^2 + sinh^2ρ dΩ_D-2^2) ,
where dΩ^2_D-2 is the standard metric for the unit (D-2)-sphere. We can immediately identify τ as the timelike coordinate. The natural range inherited from (<ref>) is {ρ>0, 0≤τ<2π}. This topology is problematic for a physical spacetime, as we can have closed timelike curves along the time-direction τ. Thus, we will consider the same metric, with the range of τ uncompactified to take values -∞<τ<∞. This is simply a topological change that does not affect the local geometry, so this spacetime is still a solution to the Einstein equations. In this coordinate system, not all of the isometries are manifest. We can identify the SO(D-1) group of rotations of the (D-2)-sphere, along with the translation of τ, which makes up an SO(2) before decompactification, and ℝ after. We have of course SO(2)× SO(D-1) ⊂ SO(2,D-1). If one wants to recover the full isometry group, one simply has to project the Killing vectors (<ref>) on the hyperboloid, and re-express them in the new coordinate system. However, the isometries beyond the evident ones take a very complicated form, and the Killing vectors often cannot be integrated to recover the associated finite symmetry. One last important remark, in the context of AdS/CFT, is that the geometry of the boundary ρ→∞ is conformally equivalent to a cylinder ℝ× S^(D-2). This is simply seen by Weyl rescaling (<ref>) and taking ρ=ρ_0→∞. Another coordinate system that we will use extensively can be obtained by the change of coordinates ℓ sinhρ = r, ℓτ = t, and yields the following metric :
ds^2 = -(1+r^2/ℓ^2)dt^2 + dr^2/(1+r^2/ℓ^2) + r^2 dΩ_D-2^2 .
While these coordinates are interchangeable with (<ref>), they are a little more intuitive since r is essentially a radial coordinate. In addition, we will see that the coordinates describing black hole solutions will have a metric very similar to this one.
§.§ Poincaré patch
Another parametrization, which does not cover the entirety of the hyperboloid (<ref>), is given by the so-called "Poincaré coordinates" :
X^0 = 1/(2z)(z^2+ℓ^2+x^ix_i-t^2) ,  X^i = ℓ x^i/z ,  i ∈ 1,...,D-2 ,
X^(D-1) = 1/(2z)(z^2-ℓ^2+x^ix_i-t^2) ,  X^D = ℓ t/z ,
where we denote x^ix_i = ∑_i=1^D-2 (x^i)^2. This parametrization cannot cover the full hyperboloid, because we must either choose z>0 or z<0, as the point corresponding to z=0 is undefined. Thus, we have two charts for z>0 and z<0 which together cover the full hyperboloid. See fig.<ref> for a depiction of the region covered by one of the patches. Note that after we unwrap the time-direction, we need an infinite number of Poincaré patches to cover the same manifold as the global coordinates (<ref>). The convention is to pick the Poincaré patch with z>0. In this coordinate system, the metric reads :
ds^2_poinc = ℓ^2/z^2 (-dt^2+dz^2+dx^i dx_i) ,
which is manifestly conformally flat. The boundary is located at z=0, and its topology is now ℝ^(1,D-2), endowed with the flat metric. The topology change of the boundary should not surprise us since this coordinate system covers only a part of the full manifold and of its boundary. Of course, there exists a diffeomorphism that maps (<ref>) to part of (<ref>). It can be worked out by comparing (<ref>) and (<ref>), but it will not be useful for us in what follows.
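As a cross-check that this chart describes the same maximally symmetric solution, one can verify symbolically that the Poincaré-patch metric satisfies R_μν = -(D-1)/ℓ^2 g_μν. A minimal sketch (assuming sympy, taking D=3, and computing the curvature by brute force rather than relying on any particular package):

import sympy as sp

l, t, z, x = sp.symbols('l t z x', positive=True)
coords = [t, z, x]
g = (l**2/z**2)*sp.diag(-1, 1, 1)        # ds^2 = l^2/z^2 (-dt^2 + dz^2 + dx^2)
ginv = g.inv()
n = 3

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d]))/2 for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                     + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][b][a], coords[c])
                           + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                                 for d in range(n)) for a in range(n)))

Ric = sp.Matrix(n, n, ricci)
# Einstein equations with Lambda = -1/l^2 in D = 3:  R_{mu nu} = -(D-1)/l^2 g_{mu nu}
assert (Ric + (2/l**2)*g).applyfunc(sp.simplify) == sp.zeros(n, n)
# Ricci scalar R = -D(D-1)/l^2 = -6/l^2
R = sum(ginv[a, b]*Ric[a, b] for a in range(n) for b in range(n))
assert sp.simplify(R + 6/l**2) == 0
print("Poincare patch: R_mu_nu = -2/l^2 g_mu_nu and R = -6/l^2")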
Later we will see that the holographic correspondence maps diffeomorphisms of AdS to conformal transformations of the boundary metric. As a teaser, we can already notice that the two boundary metrics can be related by a conformal transformation.
§ ASYMPTOTICALLY ANTI-DE-SITTER SPACE, 3D CASE
The special case of three-dimensional Anti-de-Sitter space will be of particular interest for the purposes of this thesis. One might have thought that there is little qualitative difference between AdS in different dimensions. This is true for D ≥ 4, but below that threshold things change, because there is no dynamical bulk gravity. In D dimensions the Riemann tensor has D^2(D^2-1)/12 independent components. When D=3, that number is 6, the same as for the Ricci tensor. Because of that, it is possible generically to write the Riemann tensor as:
R_μνρσ = S_μρ g_νσ - S_νρ g_μσ + S_νσ g_μρ - S_μσ g_νρ ,  S_μν = R_μν - R/4 g_μν .
As a result, if g_μν satisfies (<ref>), the full Riemann tensor is specified. Every solution of (<ref>) in 3D looks (locally) like AdS_3. Since the Riemann is completely fixed, there is no room for local degrees of freedom, like gravitational waves in higher dimensions. In fact, in three dimensions gravity can be re-formulated as a Chern-Simons theory<cit.>, which is a 3-dimensional Topological Quantum Field Theory. We will not delve into the details of this re-formulation. What is important for our purposes is that although the 3d theory does not have local degrees of freedom in the bulk, it does have degrees of freedom localized on the boundary of the spacetime. Although any two different solutions of (<ref>) can be connected by a gauge transformation (i.e. a diffeomorphism), if the gauge transformation is non-vanishing on the boundary, then it will change the physical state on the boundary. Furthermore, two solutions will always look similar locally, but they may differ globally, the typical example being the "BTZ" black hole <cit.> solutions we will discuss shortly.
§.§ Symmetries of AdS_3
Much of the discussion of Sec.<ref> still applies in D=3 dimensions. However, there is an enhancement of the (asymptotic) symmetries of the spacetime. The isometries are the same as in the higher dimensional case, SO(2,2) for D=3. In this low-dimensional case, it is sometimes convenient to rewrite the isometry group as SL(2,ℝ)× SL(2,ℝ). This is done by looking at the embedding space (<ref>) as 2×2 matrices as follows :
(X_0,X_1,X_2,X_3) ↦ g = [ X_1-X_0  X_2-X_3; X_2+X_3  X_1+X_0 ] ∈ Mat_2×2 .
In this representation, the hyperboloid describing the AdS_3 embedding is simply :
Det(g) = -ℓ^2 .
Then, we can verify that there is a group action of SL(2,ℝ)× SL(2,ℝ) acting on Mat_2×2, that preserves the metric (<ref>) :
(g_L,g_R) ∈ SL(2,ℝ)× SL(2,ℝ) ↦ ρ(g_L,g_R) ,  ρ(g_L,g_R)g = g_L g g_R .
The map ρ is a homomorphism (double cover) SL(2,ℝ)× SL(2,ℝ) → SO^+(2,2), which induces an isomorphism SL(2,ℝ)× SL(2,ℝ)/ℤ_2 → SO^+(2,2). As in this thesis we won't worry about discrete isometries of spacetime, we won't be very careful about which connected component we are considering. We will consider SO^+(2,2) as the isometry group. Finally, it is easy to see that the map ρ leaves the equation (<ref>) invariant. This more "group-theoretic" way of dealing with AdS_3 isometries is useful as it simplifies and clarifies some computations. There is, as we hinted previously, an extended symmetry group of AdS_3 (and in fact also of asymptotically AdS_3 spacetimes), which is harder to see geometrically.
These extra "asymptotic symmetries"<cit.> correspond to the infinite extension of the 2d conformal algebra. Going into depth into this subject would be outside the scope of this thesis, but we give a brief summary of the main ideas, as their discovery was a prelude to the holographic correspondence.Naively, in a gauge theory one usually thinks of any two states that are related by a gauge transformation as the same state, described differently. In truth, one can show that this statement only holds for "small" gauge transformations, which vanish at the boundary of the manifold we consider<cit.>. This can also be seen by looking at the conserved currents, and the associated charges. For gauge transformations, the conserved current can be written as j^μ = S^μ + _ν k^μν, where S^μ vanishes on shell, and k^μν=-k^μν is a two-form. Then the associated charge will be expressible as an integral on the boundary of the spacetime. Therefore, it will be vanishing for the "small" gauge transformations, thus showing they have no associated conserved charge. For "big" gauge transformations, the charge may be non-vanishing, which shows that they can act as bona-fide symmetry, rather than being simply a redundancy in parametrization.Let us concentrate on General Relativity, where the gauge group are diffeomorphism. We begin by getting rid of the redundancy in the description by choosing some gauge-fixing conditions that picks out a single representative of the "gauge-orbit" of any given state. In this way, we get rid of the unphysical "small" gauge transformations. We denote as "residual gauge group" the gauge symmetries which remain un-fixed by this procedure, which will therefore be non-vanishing at the boundary.Among the residual gauge symmetries, the transformations that alter the boundary conditions of the problem are discarded. The surviving diffeomorphisms then belong to the "asymptotic symmetry group" of spacetime, aptly named as it concerns gauge transformations acting on the (asymptotic) boundary. Of course, the asymptotic symmetry group will then depend on the boundary conditions we have chosen for the metrics. Picking a set of boundary conditions that allows for interesting solutions, while removing unphysical ones is still an ongoing problem <cit.>, and is mostly done by trial and error.In this thesis, we will stick with the "Fefferman-Graham" <cit.> prescription for gauge-fixing and boundary conditions in AdS, which is the relevant prescription in the context of AdS/CFT. We denote in this prescription the coordinates as x^μ = (z,x^i). We set the range z>0, the boundary of AdS being located at z=0. Then the gauge-fixing reads <cit.> : g_zz = ℓ^2/z^2g_z i=0 . As expected, we have three independent gauge fixing conditions, for the three independent parameters of the diffeomorphism gauge group. Thus, a gauge-fixed metric will take the form : ds^2 = ℓ^2/z^2dz^2+g_ij(z,x^i)dx^i dx^j . We can now proceed to compute the residual gauge symmetry. It is immediately clear that included in this residual symmetry group there will be general change of coordinates in x^i. 
The full equation that an infinitesimal diffeomorphism x^μ→ x^μ + ξ^μ must satisfy in order to preserve this gauge structure are simply : L_ξ g_zz=0L_ξ g_z i = 0 .This can be solved generally, yielding an infinitesimal description of the residual gauge group : ξ^z = (x^i)zξ^i = ξ^i_0(x^i)-ℓ^2 _k∫_0^z dz'/z'g^ik(z',x^l) ,where (x^i) is an arbitrary function, as are the ξ^i_0.To determine the asymptotic symmetry group, we must first describe the Fefferman-Graham boundary conditions :g_ij = ℓ^2/z^2(g_ij^(0)+z^2 g_ij^(2)+O(z^4)) ,g^(0)_ijdx^i dx^j=e^2ϕη_ijdx^i dx^j= e^2ϕ(-dt^2+d^2) ,where we write {x^i} = (t,φ). The coordinate φ will be periodic of period 2π. This choice is simply to conform to the global AdS coordinates (<ref>) asymptotic geometry. Since we are considering the conformal family of metrics, we can recover the "planar" case by the correct choice of e^2ϕ and a conformal transformation.As we have already stated, it is a subtle matter to choose appropriate boundary conditions. To distill an interesting set of constraints, one usually looks at several solutions of the Einstein equations, and try to choose conditions that remove unwanted solutions without being too restrictive. In this case, the F-G boundary conditions are a good way to describe a spacetime that looks asymptotically like AdS_3<cit.>. Indeed, for the example ϕ=0, the leading order metric in (<ref>) looks like AdS_3 in Poincaré coordinates (<ref>).We are now set to compute the asymptotic symmetry group. A generic residual gauge symmetry (<ref>), will preserve the F-G boundary conditions iff : L_ξ g_ij = O(z^-1) , ⇔ L_ξ g_ij^(0)=0 .Expanding the conditions we get the following set of equations for the killing vectors: -_iϕξ^i_0= _t ξ^t_0=_ξ^_0 , _t ξ_0^ = _ξ^t_0 .By taking partial derivatives and combining the equations we derive the necessary condition :-_t^2ξ^i_0+_^2ξ^i_0=0 ,⇒ξ_0^i = f_-^i(-t)+f_+^i(+t) .Already, we recognize the equations of the conformal Killing vectors of η_ij in 2D. Plugging back into (<ref>) forces the free functions to be the same up to a constant, and we find for the general solution : ξ^t_0 =f_+(x+t)-f_-(x-t) , ξ^x_0 =f_+(x+t)+f_-(x-t) ,= f_+'(x+t)+f_-'(x-t)+ξ^t_0_tϕ +ξ^_0_ϕ .To study the algebra of the Killing vectors, it is convenient to consider instead the lightcone basis, w^+=+t, w^-=-t. ξ^+_0 =f_+(w^+) , ξ^-_0 =f_-(w^-) ,= 1/2(f_+'(w^+)+f_-'(w^-))+f_+(w^+)_+ϕ + f_-(w^-)_-ϕ .To identify a basis of Killing vectors, we expand in Fourier series the functions f_- and f_+, by exploiting the periodicity of φ :f_±(w^±) = ∑_n∈ℤ_n e^in w^± .This allows us finally to define a basis of Killing vectors ξ_n and ξ_n : ξ_n= i e^inw^+(_w^++z(in+_+ϕ) _z) , ξ_n=i e^inw^-(_w^-+z(in+_-ϕ) _z) . We can now go on to compute the algebra of this family of Killing vectors. It is important to note that since we are computing the asymptotic algebra, all computations should be done at the level of the first leading order in z. Indeed, the algebra will not be closed at higher orders.We find the following Lie Brackets :[ξ_n,ξ_m]=0 ,[ξ_n,ξ_m]=(n-m)ξ_n+m ,[ξ_n,ξ_m]=(n-m)ξ_n+m ,which are readily identified as two copies of the Witt algebra. In the special case of two dimensions, the asymptotic symmetry group is thus much bigger than the isometry group of the vacuum, AdS_3. Doing the same derivation in higher dimensions will show that the asymptotic symmetries of asymptotically AdS_D spaces correspond to the isometries of AdS_D, that is SO(D-1,2). 
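The brackets above are easy to verify explicitly. A minimal sketch (assuming sympy) for the simplest choice ϕ = 0: in this special case the vector fields ξ_n in fact close exactly, while for general ϕ one keeps only the leading order in z, as explained above:

import sympy as sp

w, z = sp.symbols('w z')

def xi(n):
    # components (xi^{w+}, xi^z) of the asymptotic Killing vector, for phi = 0
    return [sp.I*sp.exp(sp.I*n*w), sp.I*sp.exp(sp.I*n*w)*sp.I*n*z]

def lie_bracket(X, Y):
    coords = [w, z]
    return [sum(X[a]*sp.diff(Y[b], coords[a]) - Y[a]*sp.diff(X[b], coords[a])
                for a in range(2)) for b in range(2)]

for n in range(-2, 3):
    for m in range(-2, 3):
        lhs = lie_bracket(xi(n), xi(m))
        rhs = [(n - m)*c for c in xi(n + m)]
        assert all(sp.simplify(p - q) == 0 for p, q in zip(lhs, rhs))
print("[xi_n, xi_m] = (n - m) xi_{n+m} for |n|, |m| <= 2")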
As a foreshadowing of the holographic correspondence, note that SO(D-1,2) is precisely the conformal group in D-1 dimensions, the symmetry group of a CFT_D-1. The only thing missing from our derivation is the recovery of the central charge c, which appears in the quantization of CFTs, where the Witt algebra is centrally extended to Virasoro. In the gravity perspective we consider here this can, remarkably, be obtained at the classical level by looking at the algebra of the conserved charges<cit.> associated to the Killing vectors (<ref>). Carrying out this computation yields the famous Brown-Henneaux formula, which expresses the central charge of the Virasoro algebra as a function of the Anti-de-Sitter radius :
c = 3ℓ/(2G) .
There is also a way to obtain this formula within our covariant formalism, by looking at the transformation of g_ij^(2) (see (<ref>)) under the asymptotic symmetries. It can be computed easily in the case where the metric is vacuum AdS, namely g_ij = ℓ^2/z^2 η_ij in (<ref>). Under the asymptotic Killing vector (<ref>) :
g_++^(2)' = -1/2 (ξ_0^+)''' ,  g_--^(2)' = -1/2 (ξ_0^-)''' ,  g_+-^(2)' = 0 ,
and g^(0) is left invariant (as it should be). Now, through the arguments of sec.<ref> (or by computing the boundary Noether current associated to the asymptotic symmetry), one can identify g^(2) with a "boundary" stress-energy tensor as g^(2)_ij = T_ij/(8π G ℓ). Using (<ref>), we deduce that this stress-energy tensor does not transform covariantly, but has an additional contribution to its transformation law. Assuming it is the stress tensor of a (dual) CFT, consistency with (<ref>) forces the identification (<ref>), recovering the Brown-Henneaux formula.
§.§ Other vacuum solutions
Despite the lack of local degrees of freedom of gravity in 3D, we have seen that this does not mean that the solutions are completely frozen. Solutions may differ in their global structure, as well as in their behavior at the conformal boundary of spacetime. The best known non-trivial example is the celebrated BTZ black hole<cit.>.
§.§.§ The BTZ black hole
For vanishing cosmological constant, the vacuum solutions of 3D gravity are trivial and admit only the Minkowski vacuum. For a negative cosmological constant, the space of vacuum solutions is richer, as was first discovered by the authors of <cit.>. The vacuum BTZ solution can be described by the following metric :
ds^2 = -h(r)dt^2 + ℓ^2 dr^2/h(r) + r^2(dφ - Jℓ/(2r^2) dt)^2 ,
h(r) = (r^2 - Mℓ^2 + J^2ℓ^2/(4r^2)) = (r^2-r_+^2)(r^2-r_-^2)/r^2 ,
r_±^2 = 1/2(Mℓ^2 ± √(M^2ℓ^4 - J^2ℓ^2)) .
In this notation r_+ and r_- are respectively the outer and inner horizon radii, and M and J can be identified (through the conserved charges of asymptotic symmetries) with the mass and spin of the black hole solution. Note that to avoid naked singularities, we need to satisfy Mℓ ≥ |J|, with the equality corresponding to an "extremal" black hole. The crucial ingredient that distinguishes (<ref>) from a mere (though non-trivial) reparametrization of AdS_3 is that the angle coordinate φ is periodic, with period 2π. Without this identification, the solution becomes a "black string", and the event horizon disappears, as maximally extending the spacetime would reveal that the region r<r_+ is not causally disconnected. In other words, the apparent horizon of the black string solution is simply a coordinate artifact, and the geometry of the solution simply describes a portion of the regular AdS_3 spacetime.
While this is true, note that the coordinate change linking these two geometries acts non-trivially on the boundary; therefore the boundary degrees of freedom of the two geometries are not equivalent.Let us illustrate this claim by exhibiting a coordinate parametrization of the embedding of AdS_3, as in (<ref>) :X_0= ℓ√(A(r))coshφ/ℓ ,X_1= ℓ√(B(r))sinht/ℓ ,X_2= ℓ√(A(r))sinhφ/ℓ ,X_3= ℓ√(B(r))cosht/ℓ , A(r)=r^2-r_-^2/r^2_+-r^2_- B(r) = r^2-r_+^2/r^2_+-r^2_-t = r_+ t-r_-φφ = -r_-t+r_+φ . The parametrization (<ref>) covers the exterior (r>r_+) region of the black string metric. To cover the interior as well, one needs alternative parametrization that however connects smoothly to (<ref>). Crucially, we see that in this parametrization φ cannot be considered periodic, as it appears in hyperbolic functions. This shows that the black string solution is really vacuum AdS_3 in disguise. Despite this fact, it is still interesting from the point of view of the boundary degrees of freedom, that will differ between the two solutions as they are related by diffeomorphisms that do not vanish on the boundary!The parametrization (<ref>) provides also additional insight on the geometrical construction of novel vacuum solutions, including the BTZ black hole. Following <cit.>, consider a Killing vector ξ of the hyperboloid (<ref>). "Integrating" the infinitesimal coordinate change x^μ→ x^μ+ξ^μ yields a one-parameter subgroup of the isometries of AdS_3, whose elements we denote by e^tξ. Following the discussion Sec.<ref>, we can see e^tξ as an element of SL(2,ℝ)× SL(2,ℝ). Let us now define the "identification subgroup", whose elements are : {e^tξt=2kπk∈ℤ} . As the name implies, the new solution is then constructed by quotienting AdS_3 along the identification subgroup, meaning that points separated by the action of elements of (<ref>) are identified. As this procedure does not modify the geometry locally, the quotiented spacetimes are automatically solutions of the Einstein vacuum equations.The only caveat to this procedure is that the identification may generate causality paradoxes, for instance when causal curves become closed under the identification. A necessary condition to avoid this problem is to require that ξ be spacelike, ξ^μξ^νη_μν>0. Indeed, if this condition fails to be satisfied, then we would perform identifications of points lying on killing vector orbits which are causal, producing closed timelike curves.The BTZ solution can be obtained through this process using : ξ_BTZ = 1/ℓ(r_+J_10-r_-J_23) , where J_MN is defined in (<ref>). Note that this is a tangent vector of the AdS_3 hyperboloid, and in BTZ coordinates it simply corresponds to _φ. We see then that the identification along the orbits of this Killing vector is indeed realized by setting φ to be periodic.Computing ξ_BTZ·ξ_BTZ one realizes that it is not everywhere positive. ξ_BTZ·ξ_BTZ = r_+^2/ℓ^2((X^0)^2-(X^2)^2)+r_-^2/ℓ^2((X^3)^2-(X^2)^2)=r_+^2-r_-^2/ℓ^2((X^0)^2-(X^2)^2)+r_-^2 .We can check that plugging in the parametrization (<ref>) gives ξ_BTZ·ξ_BTZ=r^2, which is indeed strictly positive.We can then excise the regions ξ_BTZ·ξ_BTZ<0 from AdS_3. This procedure seems unphysical, since it generates a geodesically incomplete spacetime. Indeed, geodesics crossing the region ξ_BTZ·ξ_BTZ=0 are abruptly stopped. This problem is resolved because in the resulting geometry the region ξ_BTZ·ξ_BTZ=0 becomes a singularity, whose nature is quite different from the higher dimensional black holes. 
It is a singularity in the causal structure, since beyond that point one encounters closed timelike curves. Contrary to the higher-dimensional counterparts, the curvature remains finite at the singularity since the solution is locally AdS_3.§.§.§ Spectrum of solutions and conical singularitiesLet us now restrict to the simpler case J=0. This yields a 1-parameter group of solutions of metric (<ref>):ds^2 = -(r^2-Mℓ^2)dt^2+ℓ^2 dr^2/r^2-Mℓ^2+r^2dφ^2 .with φ a 2π periodic coordinate as explained earlier. For M>0, the singularity is behind a horizon and except at r=0 it is a regular solution. For M=0, the solution is still regular except at r=0, but the horizon disappears. This solution is sometimes referred as the "zero-mass" black hole. What is interesting is that we do not recover Anti-de-sitter space when we send the mass M to zero, something very different from what happens in higher dimensions. In 3D, the black hole spectrum is separated by a "gap" from the Anti-de-Sitter vacuum. Indeed, setting M=-1 in (<ref>) we recover AdS_3 in global coordinates, as in (<ref>). What about masses in the interval -1<M<0 ? Expanding them near the origin r=0 yields the metric :ds^2= M dt^2 -1/Mdr^2+r^2dφ^2 .Redefining r = 1/√(-M)r and φ=√(-M)φ, we exhibit a conical singularity of deficit angle(1-√(-M))2π. These solutions thus exhibit naked conical singularities, and are not considered in general as valid classical solutions. However they will be important in the quantization, as they still are valid saddle points of the Einstein-Hilbert action. This metric can be generated by placing an excitation on the AdS_3 vacuum, of energy -1<M<0. The conical singularity will presumably only appear away from the source, and be resolved as we get close to it.Solutions with M<-1 are completely unphysical, one way to see it is by noting that a conical singularity is created by a string. If the string has positive tension, the deficit angle is positive, while if it has negative tension there is an angle excess. The latter is thus unphysical. From the boundary perspective we will present, M =-1 is just the Casimir energy of the vacuum CFT on the circle. Adding excitations can only increase the energy in a unitary theory.§ HAWKING-PAGE PHASE TRANSITION AND BLACK HOLE THERMODYNAMICSHaving presented the black hole solutions in AdS_3, in this section we focus on the thermodynamical properties of the solutions, following the paper of Hawking and Page<cit.>. We will work in the canonical ensemble, thus we consider equilibrium solutions at fixed temperature T. We consider for this analysis only non-spinning solutions,§.§ Adding temperatureBlack hole solutions, like BTZ (<ref>) have a naturally associated temperature, which is the temperature of their Hawking radiation <cit.>. While this is a semi-classical effect, it is possible to recover this temperature in a much simpler manner using the following trick. Consider the Wick-rotated BTZ metric (<ref>), which essentially amounts to making the change of variables τ=it, where we call τ the "Euclidean time". ds^2 = (r^2-Mℓ^2)dτ^2+ℓ^2 dr^2/r^2-Mℓ^2+r^2dφ^2 .Notice that after Wick rotation the Signature becomes (+++), hence the name "Euclidean".Expanding close to the horizon at r=√(Mℓ^2)+√(M)/2ℓρ^2, the metric reads, at first order in ρ :ds^2 = dρ^2+Mρ^2dτ^2+Mℓ^2 dφ^2 . In the (ρ,τ) plane, the geometry looks locally like flat space, with τ playing the role of the angular coordinate. If we want to avoid conical singularities, then we must have the identification τ→τ+2π/√(M). 
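Both the near-horizon expansion and the resulting temperature are easy to verify symbolically. A minimal sketch (assuming sympy; the last step uses the standard surface-gravity formula T = √(f'(r_+)h'(r_+))/(4π) for a metric of the form -f(r)dt^2 + dr^2/h(r) + ..., which is not derived here):

import sympy as sp

M, l, rho = sp.symbols('M l rho', positive=True)

r = sp.sqrt(M)*l + sp.sqrt(M)/(2*l)*rho**2            # near-horizon radial coordinate
f = r**2 - M*l**2                                     # g_tau_tau of Euclidean BTZ
grr = l**2/(r**2 - M*l**2)                            # g_rr of Euclidean BTZ

# pull back g_rr dr^2 to the rho coordinate and expand near rho = 0
g_rhorho = sp.simplify(grr*sp.diff(r, rho)**2)
assert sp.series(g_rhorho, rho, 0, 2).removeO() == 1                       # = 1 + O(rho^2)
assert sp.simplify(sp.series(f, rho, 0, 3).removeO() - M*rho**2) == 0      # = M rho^2 + O(rho^4)
# Smoothness at rho = 0 then forces sqrt(M)*tau to be 2*pi periodic,
# i.e. beta = 2*pi/sqrt(M) and T = sqrt(M)/(2*pi).

# Cross-check with the surface-gravity formula
rr = sp.Symbol('r', positive=True)
fr = rr**2 - M*l**2
hr = (rr**2 - M*l**2)/l**2
T = sp.sqrt(sp.diff(fr, rr)*sp.diff(hr, rr)).subs(rr, sp.sqrt(M)*l)/(4*sp.pi)
assert sp.simplify(T - sp.sqrt(M)/(2*sp.pi)) == 0
print("T_BTZ = sqrt(M)/(2*pi)")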
As we know, considering a QFT on a Euclidean background with periodic time τ→τ+β is one way to compute the partition function at finite temperature T=1/β (more details in sec.<ref>). For this reason, we interpret the period of the Euclidean time τ, enforced by (<ref>), as the inverse temperature of the BTZ horizon. Although this derivation is heuristic at best, it can be checked that it agrees with the semi-classical derivation. We find that the temperature of the BTZ black hole of mass M is simply T=√(M)/(2π). Another saddle point of the Einstein-Hilbert action that contributes to the canonical ensemble is derived from the vacuum solution, pure AdS_3 (<ref>). Giving a temperature to the vacuum solution is again done formally by Wick rotation. Unlike the BTZ case, the Wick-rotated AdS_3 metric (<ref>) does not impose any constraint on the periodicity of τ, as it is everywhere well behaved :
ds^2 = (r^2+ℓ^2)dτ^2 + dr^2/(1+r^2/ℓ^2) + r^2 dφ^2 .
We can thus arbitrarily choose the "temperature" of this spacetime by setting the periodicity of τ. The resulting spacetime is nicknamed "thermal AdS". In the Lorentzian version, it is not possible to "see" the temperature without adding some fields on the background. Indeed, since pure gravity does not have gravitons, there are no degrees of freedom that can have a temperature, and that is why we define it through the Wick rotation. This procedure remains nonetheless correct even in the presence of bulk fields. In the absence of additional fields, these two solutions are all that we need for the thermodynamic analysis. While there are other saddle points of the Euclidean action <cit.>, it can be shown that they are always subleading, and so we can discard them in a classical treatment.
§.§ Euclidean action and Free Energy
We have determined that the two competing solutions at temperature T are respectively the non-spinning BTZ black hole and thermal AdS. To determine which one is dominant in the canonical ensemble, we must compute their respective free energies F. By a standard thermodynamical argument, the solution with lowest F will be the dominant one. To compute the free energy at inverse temperature β, we consider the Wick-rotated system with the compactified time coordinate, as explained in the previous section. Then, the free energy is defined as :
F = -T ln(Z) ,
where Z is the partition function of the system at inverse temperature β. Using the path integral formulation, we have the following identity (see sec.<ref> for more details):
Z = Tr e^(-β H) = ∫ Dg e^(-S_E(g)) ,
where Dg denotes the path integral measure for metrics, and S_E(g) the Euclidean action. We use the saddle point approximation to evaluate it. The two competing saddle points are the BTZ black hole and thermal AdS. The goal then is to compute the Euclidean action for both of these solutions. The full Einstein-Hilbert action (<ref>) will in general be divergent in asymptotically Anti-de-Sitter space. To compare Euclidean actions, we then need to introduce a cutoff at r=r_max. To get finite answers in the limit r_max→∞, we can add a counterterm to the action, which does not affect the equations of motion. The correct prescription is described in <cit.> and results in the following Euclidean action[Note that performing the Wick rotation in gravity is also a non-trivial operation, which might not be well-defined in general.
To get to the Euclidean action (<ref>), we first linearize the theory around the AdS background, and then perform the Wick rotation for the linearized theory, before re-expressing everything in terms of the (now Euclidean) Ricci scalar)] :
S_E = -1/(16π G)[ ∫_M d^3x √(g)(R+2/ℓ^2) + 2∫_(r=r_max) d^2y √(h)(K-1/ℓ) ] ,
where the counterterm is simply the -1/ℓ subtraction in the boundary term. The integrand over M will be the same for both geometries, which satisfy R=-6/ℓ^2 and √(g) = ℓ r, see (<ref>). One difference will arise because of the range of the radial coordinate, which spans 0<r<r_max for thermal AdS and r_hole=√(Mℓ^2)<r<r_max for BTZ. Starting with thermal AdS :
∫_M d^3x √(g)(R+2/ℓ^2) = - ∫_(r=0)^(r_max)∫_(φ=0)^(2π)∫_(t=0)^(β) ℓ r dr dφ dt 4/ℓ^2 = -4πβ r_max^2/ℓ .
For the boundary term, the boundary surface is described by r=r_max, from which we deduce that the normal covector, taken to be outward-facing, is proportional to n_μ ∝ (0,1,0). The normalized covector reads :
n_μ = 1/√(1+r_max^2/ℓ^2) (0,1,0) .
Parametrizing the boundary metric in terms of the two remaining coordinates,
h_ij dx^i dx^j = (ℓ^2+r_max^2)dt^2 + r_max^2 dφ^2 .
One can compute the extrinsic curvature K following the prescription in Appendix <ref>. We find after some straightforward calculations :
√(h)K = ℓ(1+2r_max^2/ℓ^2) .
Adding the contribution of the counterterm, and expanding in r_max :
2∫_(r=r_max) d^2y √(h)(K-1/ℓ) = 4πβℓ(1+2r_max^2/ℓ^2-r_max^2/ℓ^2(1+ℓ^2/(2r_max^2)))+O(r_max^-2) = 4πβℓ(r_max^2/ℓ^2+1/2) .
Adding all the terms we see that the diverging part indeed drops out, allowing us to safely take the r_max→∞ limit to obtain :
F_AdS = S_E^AdS/β = -ℓ/(8G) .
The computation for BTZ proceeds in a similar way so we skip the details. For the bulk part we find :
∫_M d^3x √(g)(R+2/ℓ^2) = -4πβ(r_max^2-Mℓ^2)/ℓ .
For the boundary part we have :
√(h)K = ℓ(-M+2r_max^2/ℓ^2) .
Adding everything up as before we end up with :
F_BTZ = -Mℓ/(8G) .
Comparing the free energies, we finally deduce that there is a phase transition at the critical value M=1, corresponding to a critical temperature T=1/(2π). Below this value, the dominant saddle is thermal AdS, while above it the black hole geometry is favored. This is the so-called "Hawking-Page phase transition" <cit.> and it will be a guiding thread throughout the first part of the thesis. Just as a check, let us compute the energy and entropy of the states using thermodynamic identities. The energy E of the states is given by :
⟨E⟩ = ∂(βF)/∂β = Mℓ/(8G) ,
where for thermal AdS M=-1. We can then compute the entropy of the solutions from F = ⟨E⟩ - S/β. The entropy vanishes for thermal AdS as expected, while for the BTZ solution S = Mℓβ/(4G) = 2π r_+/(4G). It is equal to the area of the horizon divided by 4G, which is the Bekenstein-Hawking formula (<ref>) that we will introduce later in the text. Finally, let us point out that this phase transition is not restricted to the 3-dimensional case. In higher dimensions D>3, the temperature of the black hole as a function of the AdS radius and horizon radius r_+ can easily be shown to take the following form :
T = ((D-1)r_+^2+(D-3)ℓ^2)/(4π r_+ ℓ^2) .
From (<ref>), there is a temperature below which there is no black hole solution. This critical temperature is realized for r_min^2 = ℓ^2(D-3)/(D-1). Furthermore, given any temperature above this threshold we will have two distinct black hole solutions, called the "small" and "big" black holes. The big one is the one that will be relevant for the Hawking-Page transition. Indeed, the "small" black holes are always thermodynamically unstable.
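This instability is simple to exhibit explicitly. A minimal sketch (assuming sympy) of the argument developed below, locating the minimum of T(r_+) and checking the sign of ∂T/∂r_+ on the two branches:

import sympy as sp

rp, l, D = sp.symbols('r_+ l D', positive=True)
T = ((D - 1)*rp**2 + (D - 3)*l**2)/(4*sp.pi*rp*l**2)

# the minimum of T(r_+) sits at r_min^2 = (D-3) l^2 / (D-1)
rmin = l*sp.sqrt((D - 3)/(D - 1))
assert sp.simplify(sp.diff(T, rp).subs(rp, rmin)) == 0

# sign of dT/dr_+ on either side of r_min, e.g. for D = 5 and l = 1
dT = sp.diff(T, rp).subs({D: 5, l: 1})
r0 = sp.sqrt(sp.Rational(1, 2))          # r_min for D = 5, l = 1
assert dT.subs(rp, r0/2) < 0             # small black hole: negative slope, unstable
assert dT.subs(rp, 2*r0) > 0             # big black hole: positive slope, stable
print("dT/dr_+ < 0 below r_min, > 0 above it")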
One can see this by computing the derivative ∂T/∂r_+, which is negative for r_+<r_min. As the small black hole radiates, its horizon will shrink and its temperature increase, speeding up the radiation. Put differently, in the canonical ensemble ∂T/∂r_+ has the same sign as the specific heat of the solution, and a negative specific heat signals an instability as described above. Small black holes are in this sense similar to the Schwarzschild black hole in flat spacetime.
§ CONFORMAL FIELD THEORY
Conformal Field Theory is the second essential ingredient of the holographic correspondence. In this chapter we introduce the very basics that will be required to understand the bulk of the work. As our analysis is generally more focused on the gravity side, we won't dwell too much on details in this section. For a full study of the subject with emphasis on 2D CFT, see <cit.>.
§.§ The conformal group
The Coleman-Mandula theorem <cit.> is a no-go theorem that applies to Lorentzian Quantum Field Theories with a mass gap, under some mild assumptions on scattering amplitudes. It states that the largest group of spacetime symmetries is the Poincaré group, and any internal symmetry must appear as a direct product with it (colloquially, spacetime and internal symmetries "don't mix"). The two famous "loopholes" in the theorem's assumptions are supersymmetry, where the extended symmetry algebra is a superalgebra, and conformal symmetry, which becomes possible when the mass gap is zero. If we remove this assumption, then we can have a spacetime symmetry group bigger than Poincaré, namely the conformal group (we will denote it by CFT_D-1,1, to emphasize the signature of the spacetime). Conformal transformations are purely coordinate transformations that preserve angles. A simple example is the dilatation x^μ→λ x^μ. To find the defining equations, consider first a Euclidean metric (where the notion of angles is familiar, although the same derivation applies in Minkowski signature), and two vectors v_1^μ and v_2^μ. The angle between them can be computed using the formula :
cos(θ) = v_1^μ v_2^ν g_μν / √((v_1^μ v_1^ν g_μν)(v_2^ρ v_2^σ g_ρσ)) .
We see that the angle will be preserved iff the scalar products of vectors are rescaled by a common local factor, namely v_1^μ v_2^ν g_μν → e^(2ϕ(x)) v_1^μ v_2^ν g_μν. The peculiar parametrization of the scale parameter will come in handy later.
In that way, they are not to be confused with diffeomorphisms, which also preserve angles in a trivial way by also acting on the metric, nor with Weyl transformations which act solely on the metric, rescaling it locally.Given an infinitesimal change of coordinates x'^μ = x^μ+ξ^μ (and thus an infinitesimal rescaling ϕ), (<ref>) reduces to : L_ξ g_μν = ξ^ρ_ρ g_μν+g_μρ_νξ^ρ+g_νρ_μξ^ρ = 2ϕ g_μν .Vectors that satisfy (<ref>) are aptly named conformal Killing vectors.Let us now specify to the case of the Minkowski metric η_μν, as the conformal field theories that we will be interested in will live on a conformally flat background. _μξ_ν+_νξ_μ = 2ϕη_μν . In that case, tracing (<ref>) with η_μν immediately gives an expression for ϕ : ϕ = _ρξ^ρ/D≡f/D . Acting with ^ν on (<ref>) yields yet another condition : □ξ_μ = 2-D/D_μ f. We immediately notice that the 2-dimensional case will be special; a fact that echoes the radically different asymptotic symmetry group of AdS in 3-dimensions, and is yet another hint of the holographic correspondence. We consider for now D>2 dimensions. Contracting (<ref>) with ^ν gives □ f = 0. Finally, applying the □ operator on (<ref>) gives us the simple equation : _μ_ν f = 0⇒ f=a+b_μ x^μ .To conclude we apply _ρ to (<ref>) and choose a suitable linear combination of the equations obtained by permutations of μ,ν,ρ to get :2_μ_νξ_ρ=η_μρ_ν f+η_νρ_μ f-η_μν_ρ f=η_μρb_ν+η_νρb_μ-η_μνb_ρ = c_ρμν ,where we used (<ref>).Finally, a double integration gives us the form of the general conformal transformation : ξ_μ = a_μ + b_μρx^ρ+c_μνρx^ν x^ρ .Plugging back into the original equation (<ref>) yields an additional condition on b_μν :b_μν = η_μν+ω_μνω_μν=-ω_νμ .Expanding everything in terms of the independent infinitesimal parameters : ξ^μ = a^μ +ω^μ_ ν x^ν+x^μ +(2(b_ν x^ν) x^μ-x^2b^μ) .In addition to the expected translations and rotation, we find the additional dilatation (associated to theparameter) and the so-called "special conformal transformations" (associated to the parameters b_ν). Counting the number of free parameters, we obtain (D+1)(D+2)/2 which is then the dimensionality of the conformal group in D dimensions.Before specializing to the 2-dimensional case that will be of most interest to us, let us point some more facts in the higher dimensions. Consider the action of the conformal group on a scalar field φ(x), where its transformation is induced by the change of the coordinates, φ'(x')=φ(x) :e^-iw_aG^aφ(x) =φ'(x) ,where G^a are the generators of the conformal group, and w_a an infinitesimal parameter, so that x'^μ = x^μ+w_a f^μ a(x). Taylor expanding (<ref>) furnishes a representation of the generators G_a, leading to the following expressions in the case of the conformal group: P_μ = -i_μ ,M_μν= i(x_μ_ν-x_ν_μ) ,D=-ix^μ_μ , K_μ = i(x^2_μ-x_μ x^ν_ν) . The indices on the generators can of course be raised and lowered by the metric.Computing the Lie bracket then reveals the commutation relations which define the conformal algebra. For reference, we include the non-vanishing Lie brackets :[M_μν,M_ρσ]=i(η_μρM_νσ-η_νρM_μ+η_νM_μρ-η_μM_νρ) ,[D,P_μ]= iP_μ , [D,K_μ] = -iK_μ ,[K_μ,P_ν]= 2i(η_μνD-J_μν) ,[K_μ,M_νρ]= i(η_μνK_ρ-η_μρK_ν) ,[P_μ,M_νρ]= i(η_μνP_ρ-η_μρP_ν) . Let us make a last comment on CFT_D-1,1 before specializing to D=2. One might notice that the group's dimensionality matches the dimension of the orthogonal group SO(D+2). This is not a coincidence, as one can show that it is indeed isomorphic to SO(D,2). 
Let J_AB,-1≥ A,B≥ D+1 denote the generators of SO(D,2), then one can verify that the map (<ref>) is a Lie Algebra isomorphism :J_μν =M_μν ,J_-1,μ =1/2(P_μ-K_μ) ,J_(D+1),μ =1/2(P_μ+K_μ) ,J_-1,(D+1) =D ,where the -1 and (D+1) indices denote the new timelike and spacelike coordinates respectively. It is through this isomorphism that we are able to identify the asymptotic symmetry group of AdS_D+1 with CFT_D-1,1. §.§ The conformal group in 2 dimensionsMost treatments of two-dimensional CFTs consider a metric with Euclidean signature, as it allows the use of powerful complex analysis techniques. Therefore, in this section we will consider the following background metric :ds^2 = dτ^2+dx^2 . Lorentzian results can of course be recovered by the correct Wick rotation. Let us now go back to (<ref>). For D=2, there are two independent equations (<ref>) :_τξ^x=-_x ξ^τ ,_τξ^τ = _x ξ^x . If we consider then τ, x as the coordinates of the complex plane z=x+iτ, (<ref>) become exactly the Cauchy-Riemann equations for the complex function ξ=ξ^x+iξ^t. Therefore, the general solution is given byξ^x+iξ^τ=ξ(x+iτ) . To go back to real space, one replaces τ=it in (<ref>). Then ξ^τ_τ = ξ^τ(-i)_t ≡ξ^t _t, so that vector components are re-scaled with i under the Wick rotation. Let us define, in real space, the lightcone coordinates w^+=x+t, w^-=x-t. By taking the real and imaginary part of (<ref>), we conclude :ξ^x+ξ^t = ξ^+= ξ̅(x+t) = ξ̅(w^+) ,ξ^x-ξ^t = ξ^-= ξ(x-t) = ξ(w^-) . It follows that in Minkowski spacetime the solutions divide into right-moving and left-moving transformations, which can be chosen independently (in real space, ξ and ξ̅ are independent functions). Again, the comparison with the asymptotic symmetries of AdS_3 (<ref>) is flagrant. In fact, from the form of the metric in lightcone gauge ds^2=dw^+dw^-, one readily infers the finite form of the conformal transformations :(w^+)'= f^+(w^+) ,(w^-)'= f^-(w^-) ,so that the scaling factor is e^2ϕ = f^+'(w^+)f^-'(w_-). This splitting into left and right-moving transformations will follow us throughout this chapter. All states and excitations will also split accordingly and we will mostly concentrate on one of the two sectors. This property of two-dimensional CFTs is sometimes referred to as "holomorphic factorization".Let us introduce a last piece of machinery before finally entering into the field theory proper. We already saw that in the euclidean picture, conformal transformations can be seen as holomorphic functions. To simplify this notation and take the analogy even further, we can define the formal change of coordinates to the complex 2-plane :z=x+iτ ,z̅=x-iτ . Although the notation z̅ is suggestive, in this change of coordinate z̅ should be considered as an independent complex variable to z. For this to make sense, we must consider the euclidean plane to be also complexified, namely x,τ∈ℂ. This is unphysical, and at the end of the day we should impose reality on the original coordinates. This condition takes the form z̅=z^* where the * operator is the bona fide complex conjugation.This seems like a lot of trouble but it will simplify the notation greatly. In these complex coordinates, the metric is written as :ds^2 = dz dz̅ .In this form, passing to Lorentzian space is as simple as z→ w^-, z̅→ w^+. Conformal transformations in complex space :z' =f(z) , z̅' =f̅(z̅) . At the risk of hammering the point a bit too much, f and f̅ are considered independent functions until the end, where f̅ must be identified with the complex conjugate of f. 
This coincides nicely with (<ref>). §.§ Primary fieldsIn this section we get a first look at the restrictive power that the conformal symmetry will impose on the fields of the theory. We begin with a classical treatment.The irreducible representations of CFT_2 will be constructed around the core concept of primary fields. These fields will be labeled by two numbers h and h̅. To understand how they come about, consider first the transformation induced on a scalar field ϕ(z,z') by a conformal change of coordinates. For now we look only on the change brought by the coordinate change, thus we consider an otherwise invariant field ϕ'(z',z̅')=ϕ(z,z̅). Then with z'=z+ξ(z) (and equivalently for the anti-holomorphic part)ϕ'(z,z̅)= ϕ(z,z̅)-ξ(z)ϕ-ξ̅(z̅)ϕ , ⇔δϕ = ∑_n∈ℤa_n l_n ϕ(z,z̅)+a̅_n l̅_n ϕ(z,z̅) ,where we used the Laurent expansion of the parameter ξ(z) = ∑_n a_n z^n+1, which defines for us the generators of the conformal symmetry, l_n = -z^n+1_z and likewise for l̅_n. The algebra is the Witt algebra described in (<ref>).An important distinction is to be made here between the "global" and "local" conformal transformations. As it can be easily checked, the only generators that are well defined both at z=0 and z=∞, and hence on the whole complex plane are l_-1, l_0 and l_1 (and likewise for the antiholomorphic part, so we will stop mentioning it from now). Together, they form the only non-trivial finite subgroup of the Witt algebra, which is SL(2,ℂ).In Minkowski spacetime, the global subgroup SL(2,ℂ)× SL(2,ℂ) reduces to SL(2,ℝ)× SL(2,ℝ), which is the isometry group of AdS_3 (<ref>).One useful parametrization of the finite global conformal transformations is :z' = az+b/cz+d , ad-bc=1 . We would like now to expand the transformation rules (<ref>), by introducing "internal" quantum numbers that will affect the transformation of the fields. Usually, the transformation rules of "primary fields" are simply defined right away as : ϕ'(z',z̅') = (dz'/dz)^-h(dz̅'/dz̅)^-h̅ϕ(z,z̅) . Although this definition is natural (the field is rescaled with the local scaling factor, to the power of its "scaling dimension") we would like to provide a little more context as to the origin of this formula.To do so, we consider something analogous to the "little group trick" to find the representations of the Poincaré group. For this purpose, we consider the subgroup of conformal transformations that leave the origin z=0 invariant. It is straightforward to see that it is generated by the l_n, n≥ 0. Let us denote the operators at z=0 by l_n. We must now choose the action of the l_n on ϕ(0). Primary operators will act as the "highest weight" of the representations we will construct. Thus it is natural to define : l_0ϕ = -hϕ , l_nϕ = 0. Indeed by the commutation relations [l_0,l_n]=-n l_n, they act as lowering operators for the eigenvalue of l_0, sending -h→ -h-n[Alternatively, we could call them raising operators of the scaling dimension, h→ h+n].To recover the action of the algebra on ϕ(x), all we need to do is translate the operators, exploiting the action of l_-1 which is the translation generator. Thus, we define the generic l_n, n≥ 0 at position z as :l_n = e^-zl_-1l_ne^zl_-1 . To compute (<ref>), we use the Hausdorff formula :e^-ABe^A = B +1/1![B,A]+1/2![[B,A],A]+... . In our case A=zl_-1, so the u'th term in (<ref>) can be easily shown to give : z^u/u![...[l_n,l_-1],l_-1],...],l_-1_u]=z^u (n+1)!/u!(n-u+1)!l_n-u . 
As the action of l_n, n>0 on ϕ is trivial by definition, the only non-trivial term in the series of commutators will be the u=n and the u=n+1. After that, the series terminates as we commute l_-1 with itself. Putting this together gives us the action of l_n on ϕ :l_nϕ(z) = ((n+1)z^n l_0 +z^n+1l_-1)ϕ=-((n+1)z^n h + z^n+1_z)ϕ(z) . It remains to find the action of the l_-n, n>0. This can be done by noticing that under the transform z=1/w, l_-n(z)=-z^-n+1_z = w^n-1w^2_w=w^n+1_w. Hence the point w=0⇔ z=∞ is fixed by the l_-n. The translation operator at w=0 is now given by -l_1(z=∞)=-_w, while the l_0(∞) eigenvalues remain the same, as can be seen by taking the z=∞ limit in (<ref>) for n=0.By going through the same procedure, we find that the expression (<ref>) is valid also for n<0. Now, noticing the action of l_n is associated to the infinitesimal transformation z'=z+ z^n+1 :ϕ'(z)-ϕ(z)= δϕ =l_nϕ = -(h_z(z^n+1)ϕ+z^n+1_zϕ) ∀ n∈ℤ , ⇒ δϕ = -(hϕ_zξ(z)+ξ(z)_zϕ) ∀ ξ(z), z'=z+ξ(z) . Integrating (<ref>), we recover the formula (<ref>). To get (<ref>), we used that since (<ref>) is valid for all generators of the conformal transformations, it will be valid for a generic one.The condition (<ref>) is very powerful, and it places very stringent constraints on the correlators of primary fields. Consider for instance a correlator of holomorphic fields : ⟨ϕ_1(z_1)ϕ_2(z_2)...ϕ_n(z_n)⟩= G(z_1...z_n) .Denoting by w(z) a global[Many thanks to Marco Meineri for graciously pointing out that we should underline the fact the transformation should be global... One can see that (<ref>) will fail for more generic conformal transformations, as they do not leave the vacuum invariant. Alternatively, we could keep this formula in the generic case, but the correlators should be computed in the state obtained by acting with the conformal transformation on the vacuum (which acts trivially in the case of a "global" conformal transformation).]conformal transformation of the coordinates, and then using (<ref>) with (<ref>) yields the functional equation :G(w_1,...,w_n) = (dw/dz_1)^-h_1(dw/dz_2)^-h_2…(dw/dz_n)^-h_nG(z_1,...z_n) ,where we have used the conformal invariance of the theory through the following identity: ⟨ϕ_1(z'_1)…ϕ_n(z'_n) ⟩=⟨ϕ'_1(z'_1)…ϕ'_n(z'_n) ⟩.These functional equations fix the exact form of the 2 and 3-point functions, while there still remains some freedom for bigger correlators. We will only need the form of the two-point function : ⟨ϕ_1(z_1)ϕ_2(z_2) ⟩ = C_12/(z_1-z_2)^2hh_1=h_2 . The constant C_12 is usually set to one by the freedom to re-normalize the fields.Let us mention that for 3-point functions, conformal symmetry fixes them completely up to one constant, which will depend on the specifics of the theory. Intuitively, this is because conformal symmetry is able to map any three (z_1,z_2,z_3) to (1,0,∞). This is no longer possible for more than three points, so 4-point correlators are fixed up to an undetermined function of a conformally invariant cross-ratio. §.§ The stress-energy TensorAs we know from Noether's theorem, to each symmetry corresponds a conserved current. For the conformal symmetry, it is embodied by the stress-energy tensor, denoted T^μν. This operator is central in the study of CFT. The first reason is that it is an universal operator, as any CFT will at least have a stress-energy tensor in its operator spectrum. 
The second reason is that it acts as the generator of the conformal transformations.

To determine the stress-energy tensor, we consider first only ordinary translations, x^μ' = x^μ + ξ^μ. The action S of our CFT is of course left invariant by this change of coordinates when ξ^μ is constant. This means that if we promote ξ^μ to depend on the coordinates x^ρ, the variation must take the form : δ S = ∫ d^2x T^μ_ν ∂_μξ^ν , such that it vanishes exactly when ξ is constant. If we now consider ourselves to be on-shell, then any variation of the fields should make δ S vanish, by definition. Then an integration by parts shows the conservation law ∂_μ T^μν = 0, which holds on shell.

Using this method, the resulting T^μν isn't always symmetric, although there still remains some freedom to modify it by terms that have no physical effect (i.e. they do not modify the conservation law and conserved charges). The symmetrized tensor that can be obtained is called the "Belinfante tensor". There is also a more direct technique to obtain the "nice" stress tensor directly from the variation, outlined in <cit.>. Here we opt for another trick that is more straightforward.

As we have stated before, in a CFT the metric is non-dynamical and fixed. Let us relax this condition just for a moment, and consider the same action S which now also depends on a dynamical metric g_μν. By construction, such an action will now be invariant under the diffeomorphism induced by x^μ' = x^μ + ξ^μ(x^ρ). Then, writing the total change of the action : δ S = 0 = ∫ d^2x (δ S/δ g_μν) L_ξ g_μν + δ S_fixed metric , where the δ S on the RHS is the variation of the action induced by the fields other than the metric, as signified by the subscript "fixed metric". Thus if we take (<ref>) and evaluate it at g_μν=η_μν we can determine the fixed-metric variation as minus the change of the action when varying the metric. Using the expression of the Lie derivative (<ref>) : δ S_fixed metric = ∫ d^2x T^μν ∂_μξ_ν = -2 ∫ d^2x (δ S/δ g_μν) ∂_μξ_ν . Thus, up to an arbitrary normalization factor : T^μν = -2/√(-g) δ S/δ g_μν , which is automatically symmetric. Furthermore, if we choose ξ^μ as parametrising a conformal transformation, we have L_ξ g_μν = ∂_ρξ^ρ g_μν, as well as δ S_fixed metric = 0 assuming conformal invariance. Then by (<ref>) : 0 = ∫ d^2x T^μ_μ ∂_ρξ^ρ . While strictly speaking this does not force T^μ_μ = 0, as ∂_ρξ^ρ is not arbitrary, in the overwhelming majority of cases this will hold, so we will consider T_μν to be traceless from now on. The converse is however true, namely that a traceless stress-tensor implies the theory is a CFT (classically) <cit.>.

In complex coordinates, these constraints are explicitly solved as : T_zz̅=0 , T_zz=-T(z)/2π , T_z̅z̅=-T̅(z̅)/2π , where the prefactors are introduced simply to obtain simpler expressions in what follows (and they are standard). The energy currents are thus also separated into holomorphic "right-moving" and anti-holomorphic "left-moving" currents.

Let us now finally move to the quantization of the CFT. Until now, most derivations and objects were defined assuming the theory could be formulated through an action principle. For CFTs, such a formulation is often lacking, and the fundamental objects are local operators which we will denote by O(x). All the dynamics are then encoded in correlators of such operators as in (<ref>). As such, from now on when we write an operator identity such as O_1(z) = O_2(z) it should be taken to mean ⟨ O_1(z)…⟩ = ⟨ O_2(z)…⟩, where "…" is any insertion of operators away from z.

In trying to compute such objects, the Operator Product Expansion (OPE) will be invaluable.
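As a concrete classical illustration of the statements above (tracelessness in two dimensions and the splitting into chiral components), consider the free massless boson; the following is a minimal sympy sketch, where the choice of the free boson and its normalization are ours:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
f, g = sp.Function('f'), sp.Function('g')

# General on-shell configuration of the 2d free massless boson: Box(phi) = 0
phi = f(t + x) + g(t - x)

eta = sp.diag(1, -1)                   # flat metric, signature (+,-)
eta_inv = eta.inv()
coords = [t, x]
dphi = [sp.diff(phi, c) for c in coords]

# Canonical (already symmetric) stress tensor of the free boson
dphi2 = sum(eta_inv[a, b] * dphi[a] * dphi[b] for a in range(2) for b in range(2))
T = sp.Matrix(2, 2, lambda a, b: dphi[a] * dphi[b] - sp.Rational(1, 2) * eta[a, b] * dphi2)

# Tracelessness (holds identically in two dimensions) ...
trace = sp.simplify(sum(eta_inv[a, b] * T[a, b] for a in range(2) for b in range(2)))
# ... and conservation d^mu T_{mu nu} = 0 on shell
divergence = [sp.simplify(sum(eta_inv[a, b] * sp.diff(T[b, c], coords[a])
                              for a in range(2) for b in range(2))) for c in range(2)]
print(trace, divergence)               # -> 0, [0, 0]

# Light-cone components with x^+ = t + x, x^- = t - x:
# T_{++} depends only on x^+, T_{--} only on x^-
xp, xm = sp.symbols('x^+ x^-', real=True)
sub = {t: (xp + xm) / 2, x: (xp - xm) / 2}
Tpp = sp.simplify((T[0, 0] + 2 * T[0, 1] + T[1, 1]).subs(sub) / 4)
Tmm = sp.simplify((T[0, 0] - 2 * T[0, 1] + T[1, 1]).subs(sub) / 4)
print(Tpp, Tmm)                        # -> a function of x^+ only / of x^- only
```

With these classical facts in hand, we return to the operator formulation.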
The OPE is an identity that describes what happens when we bring two operators to the same point : O_i(z)O_j(w) = ∑_k C_ij^k(z-w) O_k(w) . Equation (<ref>) can be seen as arising simply from locality; as the two operators get too close to be distinguished, their combined action becomes local and it can be written as a single operator.

Returning to the stress-energy tensor, we would like to distill the quantum version of the conformal invariance constraints. This is done by simply deriving the Ward identities associated to the conformal symmetry. The derivation is straightforward although lengthy, so we do not reproduce it here. We consider the Ward identity for a conformal transformation z'=z+ξ(z) localised around z_1, thus we assume ξ(z_i)=0 ∀ i>1 for the z_i in the correlator ⟨ O_1(z_1)… O_n(z_n)⟩. In other words, the conformal transformation only "hits" the first operator. The ensuing Ward identity can be expressed as an identity (<ref>) : δ_ξ O_1(z_1) = -Res_z_1[ξ(z) T(z)O_1(z_1)] , where δ_ξ O_1(z_1) denotes the transformation of the operator under the conformal transformation, and Res_z_1(f(z)) denotes the residue of f(z) at z_1. So if we know the OPE of the stress-energy tensor with any operator, we also know how the operator transforms under any conformal transformation through (<ref>). This explains our earlier claim that the stress-energy tensor is the generator of the conformal transformations.

Conversely, if we know how O_1 transforms, we can deduce its OPE with T. Consider then a primary operator ϕ, whose transformation law is given by (<ref>). We derive from it the OPE : T(z)ϕ(w) = h ϕ(w)/(z-w)^2 + ∂_w ϕ(w)/(z-w) + reg , where "reg" denotes non-singular terms as z→ w. We will omit them from now on as they do not affect the physics in the limit z→ w that we consider most of the time.

Operators that are not primary can also be assigned a scaling dimension with a similar derivation to (<ref>). However, the transformation law (<ref>) will hold only for pure dilations, namely ξ(z)=z. Thus the OPE with T(z) is only partially fixed, and we can state : T(z)O(w)= … + h O(w)/(z-w)^2 + ∂_w O(w)/(z-w) , where … denotes now terms that are more singular than 1/(z-w)^2. An example of a non-primary field can be obtained by differentiating a primary field. Indeed, applying ∂_w to (<ref>), and using ∂_w (z-w)^{-2} = 2(z-w)^{-3}, we obtain : T(z)∂ϕ(w) = 2hϕ(w)/(z-w)^3 + (h+1)∂ϕ(w)/(z-w)^2 + ∂^2ϕ(w)/(z-w) , which tells us that ∂ϕ(w) has scaling dimension (h+1). This was to be expected: ∂ϕ is the first descendant of the primary ϕ, and the derivative, carrying dimension one, raises the scaling dimension by one unit. Everything checks out!

The last piece of machinery we will need to introduce is the OPE of T with itself. Under holomorphic dilations z'= λ z, T_μν being a two-tensor transforms as : T'_zz=(dz'/dz)^-2 T_zz , from which we deduce its scaling dimension to be h=2 (likewise for the antiholomorphic part, h̅=2). However, this is the "classical" dimension, which can also be obtained by dimensional analysis, but in general, this will not be the same in the quantum theory. One can however show that the dimension of a conserved current does not receive quantum corrections, and so this is true for the energy-momentum tensor.

From (<ref>), we know part of the TT OPE. In general, we should allow all other possible singular terms. However the only operator guaranteed to exist in a CFT (other than T) is the identity or trivial operator, naturally of scaling dimension (h,h̅)=(0,0). By dimensional analysis, the only term we can add to the OPE is c/2/(z-w)^4.
T(z)T(w)=c/2/(z-w)^4+2 T(w)/(z-w)^2+_w T(w)/z-w ,and likewise for T̅ with c̅.The constant c is the so-called "central charge" of the CFT. By considering several examples and also from general theorems <cit.>, it is apparent that this number somehow represents the number of degrees of freedom our theory has. This OPE also lets us derive the "quantum" version of conformal transformations, by expanding T(z) into modes. The algebra that results is the Virasoro algebra, which is the Witt algebra of the "classical" conformal generators, with a central extension proportional to c. We will not go any deeper into the CFT machinery and refer the interested reader to one of the many excellent reviews <cit.>.As for any operator, (<ref>) combined with (<ref>) allows us to find the infinitesimal transformation rules for T(z). After integration, we expect to find a modified version of (<ref>) because of the 1/(z-w)^4 term in the TT OPE. T'(w) = (dw'/dz)^-2(T(z)-c/12{w(z),z}) , {w(z),z} =2w”'w'-3(w”)^2/2(w')^2 ,where the operation {w,z} is called the "Schwarzian". As it turns out, the Schwarzian vanishes exactly under the SL(2,ℂ) subgroup of global conformal transformations. This makes sense because the vacuum of the theory will be defined as the state that vanishes under the action of l_-1, l_0 and l_1. Thus if we set ⟨ T ⟩ = 0 for the vacuum state, it will remain unchanged under the global conformal transformation.However, under more general transformations, the vacuum expectation value will change. Consider for example the following holomorphic map from the complex plane to the cylinder: w(z) = L/2πln(z), where e^2π/L Re(w) is the radial coordinate and Im(w)/L the "angle". Computing the Schwarzian gives:T_cyl(w) = (2π/L)^2( z^2 T_pl-c/24) . Assuming the expectation value in the plane vacuum state vanishes, ⟨ T_pl⟩ = 0 we obtain⟨ T_cyl(w)⟩ = -π^2 c/6 L^2 . This non-zero negative vacuum energy is to be interpreted as Casimir energy which appears because of the compactness of the cylinder. As we can see, it is proportional to c which reinforces its interpretation as the number of degrees of freedom of the theory.Let us compare this with the Energy of AdS found in (<ref>). The energy density in real space is ⟨ T_tt⟩ = ⟨ T_++⟩+⟨ T_–⟩. The energy of the state on the cylinder is then (<ref>) multiplied by L (and the factor of 2π from (<ref>)). For the AdS geometry of (<ref>), the boundary cylinder has period L=2π, then :L⟨ T_tt⟩=E_AdS⇔ -π c/6 L=-ℓ/8G⇔ c=3ℓ/2G .We see that we correctly recover the Brown-Henneaux formula (<ref>), that was obtained by looking at the asymptotic symmetry algebra! This is one of the simplest consistency checks of holographic duality.§ INTERFACE CFTWe would like to consider an extension to CFTs by introducing an interface that will allow us to bring to contact two distinct CFTs. The study of such a system through the holographic lens will be the main topic of this thesis.Generic defects in a CFT are inhomogeneities localized on a lower dimensional hypersurface. Interfaces are special defects of codimension one, that separate spacetime in two parts. We will consider conformal interfaces which preserve a subset SO(2,d-1)⊂ SO(2,d) of the conformal symmetries. This will relax the conditions on correlators and allow for more general forms. For instance, scalar operators can acquire a vacuum expectation value.The case that we will consider is an interface in two spacetime dimensions, as illustrated in fig.<ref>. 
The interface sits at position x=0 and is parametrized by the time t=τ (z=iτ in complex coordinates). The global conformal transformations that preserve the geometry of the interface form the group SO(1,2) and include τ translations and scale transformations.

Consider the action (<ref>), which has the generic form of the system depicted in fig.<ref>. S = ∫_x<0 dxdt L_1 + ∫_x>0 dxdt L_2 + ∫_x=0 dτ L_int , where L_1, L_2 and L_int respectively represent the Lagrangians of CFT_1, CFT_2 and the interface degrees of freedom [While the derivation we outline here relies on the Lagrangian formulation of the CFT, it is by no means necessary, see <cit.>. We decided to go the Lagrangian route to simplify the explanation.]. To derive the conditions imposed by the interface, let us first consider a generic change of coordinates, written as x^μ→ x^μ + ξ^μ. We further assume that the systems on both sides are conformally invariant, which implies the following form for the variation : δ S= ∫_x<0 dxdt T_1^μν ∂_μξ_ν + ∫_x>0 dxdt T_2^μν ∂_μξ_ν + ∫_x=0 dτ D^μ ξ_μ , where for now we didn't make any assumption about the symmetry properties of the interface Lagrangian. In fact, there is some abuse of notation when writing this generic variation, as generically it will deform the interface. These deformations are also encapsulated in the quantity D^μ [To be a bit more careful, one should write the interface action as ∫ dx dτ δ(x) L_int. Then variations due to the deformation of the interface are accounted for in the variations of the Dirac delta.].

Let us now assume we are making the variation around an on-shell configuration, s.t. δ S=0. After integration by parts, we obtain : 0= -∫_x<0 dxdτ ∂_μ T_1^μν ξ_ν - ∫_x>0 dxdτ ∂_μ T_2^μν ξ_ν + ∫_x=0 dτ (D^μ + n_ν(T_1^νμ - T_2^νμ))ξ_μ = -∫_x<0 dxdτ ∂_μ T_1^μν ξ_ν - ∫_x>0 dxdτ ∂_μ T_2^μν ξ_ν + ∫_x=0 dτ (D^μ + (T_1^xμ - T_2^xμ))ξ_μ , where n_μ is the normal vector to the interface pointing away from side 1. Now, the volume and interface integrals should vanish independently, and by the arbitrariness of ξ_μ we thus conclude to the conservation of the stress-energy tensor in each bulk, ∂_μ T^μν_i=0.

This also gives us a relation between D^μ and the values of the stress-tensors on the interface, but we would like to refine that using the fact that the interface preserves a subset of the symmetries. To this end, we first specialize the parameter ξ^μ to represent a τ translation (ξ^μ = δ^μ_τ). In that case, the interface Lagrangian is invariant by assumption (i.e. D^μ ξ_μ = 0 in this case) so that we obtain : 0= ∫_x=0 dτ (T_1^xt - T_2^xt) = ∫ dτ ((T^1_++-T^1_–)-(T^2_++-T^2_–)) . In the x,t coordinates it can be seen that this condition amounts to the continuity of the time-averaged energy flow across the interface. We omitted it for ease of notation, but the stress tensors in the last integral are of course to be evaluated at (x=0,t=τ).

We can also do a similar procedure with scale transformations, which also leave the interface invariant, ξ^μ = x^μ. On the interface at x=0, this becomes simply ξ^μ = δ^μ_τ τ, which yields : 0= ∫_x=0 dτ τ ((T^1_++-T^1_–)-(T^2_++-T^2_–)) . That exhausts the group of transformations that leave the interface invariant. A general solution to the conditions (<ref>), (<ref>) can be written as : (T^1_++-T^1_–-T^2_+++T^2_–) = ∂_τ^2 θ(τ) , where θ is an operator on the interface, vanishing as τ→±∞. However, from dimensional analysis θ will have scaling dimension 0, so from (<ref>) we will have that ⟨θθ⟩ = C and thus ⟨∂_τθ ∂_τθ⟩ = 0.
Then from more sophisticated unitarity arguments <cit.> it can be shown that this implies _τθ = 0 as an operator equation. In the end, we will take that a conformal interface is such that :T^2_++(t)-T^2_–(-t)=T_++^1(t)-T_–^1(-t) , ⇔ T_2(iτ)-T̅_2(-iτ)=T_1(iτ)-T̅_1(-iτ) ,where we include the Wick rotation for completeness.Without the interface, the full symmetry group of the CFT is Virasoro×Virasoro. After the joining through the interface, we obtain an additional constraint on the stress-tensor which relates the left-moving and right-moving modes. This restriction means that we only have half as many independent modes, and the symmetry group is reduced to just one copy of Virasoro. This fact is of course also reflected in the asymptotic symmetry group of the dual <cit.>, which we will describe in more detail later.There are many ways to satisfy (<ref>). One extreme is to require independently that T_i(iτ)=T̅_i(-iτ) for each side. This is the case of a fully reflecting interface: an incoming right-moving mode is reflected and turned into a left-moving one as it hits the interface. The other extreme is a fully transparent interface, also called "topological", characterized by T_1(iτ)=T_2(iτ) and T̅_1(-iτ)=T̅_2(-iτ).An important universal operator associated with the interface is the "Displacement operator", denoted D. This operator will arise in the Ward identities of the stress-energy tensor, when considering conformal transformations that are broken by the interface. In that sense, it is the generator of the deformations of the interface. An easy way to derive it is to go back to the formula (<ref>), but now specialising ^μ as deformations in the x-direction. The variation D^μ_μ will no longer be vanishing, it will reduce to D _x in our case where the interface is one-dimensional (writing D^x≡ D). Then, following similar steps to(<ref>) :0 =∫ dτ(T_1^xx-T_2^xx+D)_x ,=∫ dτ(T^1_++-T^2_+++T^1_–-T^2_–+D)_x ,⇔ D = -2(T^1_–-T^2_–) ,where in the last line we used that _x is arbitrary, as well as the conformal interface condition, (<ref>). This derivation clearly shows that the Displacement operator is the generator of the coordinate transformations that deform the interface. Note that in (<ref>), the stress-tensor appearing has not been rescaled according to (<ref>), so the equation differ by a sign w.r.t. <cit.>. Rescaling the stress-tensors by -2π, and the displacement operator by 2π recovers the agreement. In what follows, the stress-tensors are normalized in the usual CFT convention (<ref>). The displacement operator, while it is a genuine operator of the interface, is determined by the stress tensors on either side. Its two point function depends on an undetermined constant, ⟨ D(t_1) D(t_2) ⟩ = C_D/(t_1-t_2)^2, which will depend on the specifics of the interface.Let us make a last comment on a different point of view for ICFT's, through what is called the "folding trick". By applying a parity transformation on CFT_2, we bring both CFT's on the same side. The full system is then described by the a tensor product CFT_1⊗ CFT_2 living on x<0, where the two CFTs are completely decoupled except at the boundary x=0. Thus in this picture, we deal with a tensor product CFT in x<0, and the interface becomes a boundary. Both formulations are equivalent<cit.>. §.§ Reflection and transmission of energyOne of the key quantities that will interest us in ICFT will be the transport of energy across the interface. In higher dimensions, this depends on the nature of excitations incident on the interface. 
But in two dimensions it turns out to be a universal quantity <cit.>, under some mild assumptions explained below.

To properly define the transmission and reflection coefficients, we must set up a scattering experiment on the interface, and measure the transmitted and reflected energy at infinity. While there are no proper asymptotic states in a CFT, it is possible to prepare a scattering experiment as explained in reference <cit.>, which we briefly outline here. First, we must define the observable; the energy received at infinity. The energy density is of course given by T^tt=T_++(w^+)+T_–(w^-). As excitations in the CFT propagate independently in the lightlike directions it is thus natural to integrate the energy along these directions. We define E̅ = ∫_-∞^∞ dw^- T_–(w^-) , E = ∫_-∞^∞ dw^+ T_++(w^+) . The line over which the integration takes place is of course irrelevant for the integral, as T_– (resp. T_++) depends only on w^- (resp. w^+). Thus, these quantities indeed represent the total energy of the left-movers and right-movers respectively. In the interface picture, we will be able to define such operators for both CFTs, which we will label with i=1,2. The picture depicting the locations of the various integrations in this case is fig.<ref>.

We must now prepare the state that we will scatter on the interface. We would like to control its initial energy, but this is made harder by the presence of the interface. Indeed, stress-tensor interactions have only a power-law decay in the conformal theory and thus the initial state will already be affected by the interface, which would make it hard to single out the scattering event used to define the reflection and transmission coefficients.

The way to resolve this problem is to create the initial state infinitely far away from the interface. To do so we introduce a compactly supported kernel function k(x) : ∫_-∞^∞ |k(x)|^2 dx = 1 , k(x)=0 for |x|>r . We prepare a generic state by applying any operator O_1 to the vacuum, and localize it using (<ref>). To prepare a scattering experiment that will scatter at x=0, t=0, we prepare the state in the past by following back in time the lightrays intersecting this point. Instead of the lightray coordinates w^+, w^-, let us pass to the complex coordinates which make the notations more palatable (real space coordinates can be recovered by the "change of coordinates" w^-=z, w^+=z̅). Our scattering state is : |O_1,L⟩_I = O_1^L|0⟩_I=∫ dzdz̅ k(z)k(z̅+L) O_1(z,z̅)|0⟩_I , where the subscript I on the vacuum states denotes that it is the vacuum in the presence of the interface.

The useful scattering state will then be obtained in the L→∞ limit of (<ref>). In this limit, we can safely assume that the influence of the interface vanishes, and thus that the prepared state doesn't get contributions from the interface effects. In other words, in this limit we should be able to drop the subscript I, and consider the state as prepared in the true (i.e., no broken symmetries) vacuum of the CFT. We only need the "soft version" of this limit, meaning that we will allow ourselves to remove the index I only in correlation functions.
Crucially, this implies the identity lim_L→∞⟨ O_1,L||O_1,L⟩_I = ⟨ O_1,L||O_1,L⟩.With that in mind, we are ready to define the reflection and transmission coefficients, which can now be defined very intuitively (especially with the help of (<ref>)) : T_1= lim_L→∞⟨ O_1,L|E_2|O_1,L⟩_I/⟨ O_1,L|E_1|O_1,L⟩ , R_1= lim_L→∞⟨ O_1,L|E_1|O_1,L⟩_I-⟨ O_1,L|E_1|O_1,L⟩/⟨ O_1,L|E_1|O_1,L⟩ , T_2= lim_L→∞⟨ O_2,L|E_1|O_2,L⟩_I/⟨ O_2,L|E_2|O_2,L⟩ , R_2= lim_L→∞⟨ O_2,L|E_2|O_2,L⟩_I-⟨ O_2,L|E_2|O_2,L⟩/⟨ O_2,L|E_2|O_2,L⟩ ,where T_i are the transmission coefficients for excitations incident from side i, and R_i are the reflection coefficients. To obtain the coefficients for side i the scattering state is prepared on side i.From conservation of energy, we expect T_i+R_i=1, and this identity indeed holds thanks to (<ref>). The proof is quite technical, and I present it here for completeness, but the reader may want to skip it since the result is intuitively obvious. We begin by defining the following correlator for ease of notation :G_i(z)= ⟨ O_1(z_1,z̅_1) … O_n(z_n,z̅_̅n̅) T_i(z)⟩_I , G̅_i(z̅)= ⟨ O_1(z_1,z̅_1) … O_n(z_n,z̅_̅n̅) T̅_i(z̅)⟩_I ,where the insertion of operators in G_i are all on the i'th side. We will also need to re-express (<ref>) for these operators :G_1(iτ)-G̅_1(-iτ)= G_2(iτ)-G̅_2(-iτ) . By denoting G^L_i, the operator (<ref>) in the case of only two insertions of scattering operators (<ref>), we have : T_1 = lim_L→∞∫ dw G^L_2(w)/∫ dw ⟨ O_1,L|T_1(w)|O_1,L⟩ ,with similar identities for the rest of the formulas in (<ref>). We would like to explicit the effect of the interface in the definition of (<ref>). To do so, we will exploit the holomorphy to deform the contour in Cauchy's formula, as in fig.<ref>. This yields the following formulas (let us specify to i=1 here):G_1(z)= 1/2π i∮ dw G_1(w)/w-z=-∑_w=z_i Res(G_1(w)/w-z)+1/2π i∫_w=-i∞^i∞dw G̅_1(-w)+G_2(w)-G̅_2(-w)/w-z , G̅_1(z̅)= 1/2π i∮ dw̅G_1(w̅)/w̅-z̅=-∑_w̅=z̅_i Res(G_1(w̅)/w̅-z̅)+1/2π i∫_w̅=-i∞^i∞dw̅G_1(-w̅)-G_2(-w̅)+G̅_2(w̅)/w̅-z̅ . For the integral on the interface, we used the identity (<ref>). To simplify further these integrals, we can again deform the contour for the integrands containing G_2. We deform the line to a circle in side 2, since that is where G_2 is defined (the stress tensor T_2 is only defined on side 2). However, since by assumption the insertions of the operators (<ref>) are only on side 1, the only singularity that can arise is from the denominator w-z. This yields the following identities :1/2π i∫_w=-i∞^i∞dw G_2(w)/w-z=0 ,1/2π i∫_w̅=-i∞^i∞dw̅G̅_2(w̅)/w̅-z̅=0 ,1/2π i∫_w=-i∞^i∞dw G̅_2(-w)/w-z=-1/2π i∫_w=-i∞^i∞dw G̅_2(w)/w+z=G̅_2(-z) ,1/2π i∫_w̅=-i∞^i∞dw̅G_2(-w̅)/w̅-z̅=-1/2π i∫_w̅=-i∞^i∞dw̅G_2(w̅)/w̅+z̅= G_2(-z̅) ,where (<ref>) is easily obtained by the aforementioned assumptions of holomorphy of G_2(G̅_2), and by deforming the contour to a circle in side 2. For the first two integrals, it can be shrunk to a point, while for the second we get a contribution from the pole at -z(-z̅). Recall that z̅ and z act as independent complex coordinates; therefore we could reparametrize an integral in dw̅ as an integral in dw.For the remaining contribution, we have to deform the contour on side 1, and we will encounter potential singularities at every operator insertion : 1/2π i∫_w=-i∞^i∞dw G̅_1(-w)/w-z=-1/2π i∫_w̅=-i∞^i∞dw̅G̅_1(w̅)/w̅+z=-∑_w̅=z̅_i Res_z̅_i(G̅_1(w̅)/w̅+z) . 
Plugging (<ref>) in (<ref>) : G̅_1(z̅)+G_2(-z̅) =-∑_w̅=z̅_i Res_z̅_i(G̅_1(w̅)/w̅-z̅)-∑_w=z_i Res_z_i(G_1(w)/w+z) ,G_1(z)+G̅_2(-z) =-∑_w=z_i Res_z_i(G_1(w)/w-z)-∑_w̅=z̅_i Res_z̅_i(G̅_1(w̅)/w̅+z) . Notice the two equations are related through a complex conjugation, so we will keep only one of them from here on. Let us restrict now to the case where in (<ref>) there are only two insertions of the scattering operator. To obtain the residues needed in (<ref>), we employ the OPE of O_1^L with the stress-tensor. This will give use some singularities in w=z_i, and what remains of G_1(w) are correlators of the form ⟨ O_1^L ^n O_1^L⟩_I. In the limit L→∞, those are far-removed from the interface we can ignore the subscript I. Walking backwards, with the same logic as (<ref>), but without the interface integral, we find : G̅_1(z̅)+G_2(-z̅) =∫ dz⟨ O_1^L|T̅_1(z)|O_1^L ⟩+∫ dz ⟨O_1^L|T_1(-z)|O_1^L ⟩=⟨ O_1^L|E_1|O_1^L ⟩+⟨O_1^L|E̅_1|O_1^L ⟩ . Then, using (<ref>) it suffices to express T_1+R_1 in terms of the G_i, to obtain the conservation of energy identity :T_1+R_1=1 . Naturally, an analogous relation exists for side 2, with an equivalent proof. That these identities hold is a consistency check of the definitions (<ref>).We turn now unto the proof of the universality of (<ref>). We will only sketch it in a more restricted case, and the full proof can be found in <cit.>. From the conservation (<ref>), we can simply focus on the transmission coefficient T_1. Then, the main quantity we are interested in is the 3-point correlator of T_2 with scattering operators on side 1. The easier case, to which we restrict, is that the operators O_1^L are purely holomorphic. We would like to compute (<ref>) : ⟨ O_1^L(z_1)T_2(z)O_1^L(z_2)⟩_I . We can do so by fusing the two defect operators using the OPE. As this is taken in the L→∞ limit, only operators from side 1 may be produced. We will then obtain a linear combination of two-point function of the resulting operators with T_2(z). However, for the two-point function of holomorphic operators, even in the presence of the interface, (<ref>) still holds <cit.>. Thus only the operators of scaling dimensions (h=2,h̅=0) will contribute to the sought-out 3-point function. Assuming that there is no other spin 2 field than the stress-energy tensor, the only operator contributing to (<ref>) is T_1.Thus the two-point function that will determine the transmission coefficient is :⟨ T_1(z)T_2(w)⟩_I = c_12/(z-w)^4 , ⇒ ⟨ O_1^L(z_1)T_2(z)O_1^L(z_2)⟩_I = α(z_1,z_2)c_12/(z-z_1)^4 ,where the quantity c_12 is determined by the specifics of the interface, and α(z_1,z_2) comes out of the OPE fusion of the O_1^L. Doing the same thing for the three-point correlator with T_1, and taking the ratio we obtain : T_1 = c_12/c_1 ,where c_1 is the central charge of CFT_1. In (<ref>), any specifics of the initial scattering operators are completely lost; hence this quantity is completely universal, and depends only on the properties of the interface through c_12. Of course here the proof is only for holomorphic operators, but it can be extended for arbitrary ones. The universality of the transport coefficients is the main take-away from this section.We will limit our attention to non-chiral CFTs for which the central charge is the same for left and right movers. Repeating the procedure for a scattering coming from side 2, we would obtain : T_2=c_12/c_2 . Comparing the formulas give :c_1T_1=c_2T_2 . This important equation is known as the "detailed balance condition". 
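These relations are easy to illustrate numerically. The following plain-Python sketch (the sample values of the central charges and of c_12 are arbitrary) computes the transport coefficients from c_12 and checks energy conservation and detailed balance:

```python
# Transport coefficients of a 2d conformal interface, fixed by central charges.
# c12 is the coefficient of the <T_1 T_2> two-point function across the interface;
# the sample values below are arbitrary, subject to 0 <= c12 <= min(c1, c2).
c1, c2, c12 = 12.0, 30.0, 9.0

T1, T2 = c12 / c1, c12 / c2            # transmission from side 1 and from side 2
R1, R2 = 1.0 - T1, 1.0 - T2            # reflection, from energy conservation

assert abs(c1 * T1 - c2 * T2) < 1e-12  # detailed balance  c1 T1 = c2 T2
assert 0.0 <= T1 <= 1.0 and 0.0 <= T2 <= 1.0
print(f"T1={T1:.3f} R1={R1:.3f}  T2={T2:.3f} R2={R2:.3f}")
```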
This condition ensures that at equilibrium the net heat flow across the interface vanishes, as we will see in our discussion later.

§ THE ADS/CFT CORRESPONDENCE

The first and most famous example of what is now known as the "AdS/CFT correspondence" was discovered by Maldacena in his seminal paper <cit.>. It posits that "N=4 SU(N) Super-Yang-Mills (SYM)" is dual to "Type IIB Strings on AdS_5× S^5". The meaning of "duality" in this context is that the two theories describe exactly the same physics, with different mathematical formulations. In other words, any physical quantity that can be computed from one theory can also be obtained from the other one. The set of "rules" that dictate how to translate quantities from one theory to the other are usually referred to as the "holographic dictionary". We will denote by α' the parameter governing the string tension T=1/(2πα'), and by g_s the string coupling.

Let us sketch the way that this duality was first unearthed. We begin by considering the low-energy limit of N stacked D3-branes (α'→ 0). For the D-brane prescription to hold, we must assume that g_s N = λ/4π ≪ 1 (where λ is the 't Hooft coupling, see (<ref>)), such that their back-reaction on the geometry is negligible. From the point of view of String theory in flat space, we can show that the only excitations that remain in this limit are the massless closed strings, which contribute to a 10D supergravity sector, and the open strings ending on the N branes, giving rise to the SU(N) SYM sector.

Consider now the alternative point of view of strings on the black hole background generated by the stack of N D3-branes, which is the picture that holds when the backreaction is large, g_s N≫ 1. In the same low energy limit (as observed by an asymptotic observer) we recover again a 10D supergravity sector (from the massless strings with low energy that don't see the black hole). There is also a second sector of strings propagating in the near-horizon region of the black hole, which has the geometry of AdS_5× S^5. Because of the infinite redshift at the horizon, independently of their energy these excitations will look soft to the asymptotic observer at infinity. We thus conclude that this sector is the full String theory on an AdS_5× S^5 background. Then, by "canceling" the 10D supergravity sectors of the two points of view, and since the SYM theory (living on the branes) should remain well-defined at any λ, we can assume that this description still applies when g_s N≫1, allowing us to identify it with String theory on AdS_5× S^5.

Although this derivation heuristically establishes the equivalence of the two theories, we get only a small piece of the dictionary. We can see that the isometry group of the background, SO(2,4)× SO(6), is mapped one-to-one to the bosonic subgroup of the superconformal group; the 3+1-dimensional conformal symmetry is isomorphic to SO(2,4), and the SU(4) R-symmetry is identified with the SO(6). The more precise statement is in terms of asymptotic isometries, such as the SO(6) isometries of the 5-sphere.

Still at this basic level, let us look at how the parameters of the two theories are related. From type IIB on AdS_5× S^5 we have : the string coupling g_s, the string scale α' and the AdS radius ℓ. From N=4 SU(N) : the size of the gauge group N and the Yang-Mills coupling g_YM. From Maldacena's derivation, those are found to be mapped as : √(λ) = √(g_YM^2 N) = ℓ^2/α' , 4π g_s= λ/N , (ℓ/l_p)^8 ∼ N^2 , where we introduced the 't Hooft coupling λ, which is the relevant coupling in the limit of large N.
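As a simple illustration of this map, the following sketch (plain Python; the numerical inputs are arbitrary) converts gauge-theory data into string-side quantities and flags the regime in which the classical supergravity description discussed next is reliable:

```python
import math

def ads_cft_map(g_ym: float, N: int):
    """Translate gauge-theory data (g_YM, N) into string-side quantities,
    using the identifications quoted above; ell^2 is given in units of alpha'."""
    lam = g_ym**2 * N                  # 't Hooft coupling  lambda = g_YM^2 N
    g_s = lam / (4 * math.pi * N)      # 4 pi g_s = lambda / N
    ell2_in_alpha = math.sqrt(lam)     # ell^2 / alpha' = sqrt(lambda)
    return lam, g_s, ell2_in_alpha

# Classical supergravity is reliable for lambda >> 1 (small curvature in string
# units) and N >> lambda (small string coupling); the inputs here are arbitrary.
lam, g_s, ell2 = ads_cft_map(g_ym=1.0, N=10**4)
print(f"lambda = {lam:.0f}, g_s = {g_s:.2e}, ell^2/alpha' = {ell2:.1f}")
```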
As we will not enter into the details of the large N limit of field theories, the interested reader can look at <cit.>.

The string-theory side of (<ref>) is well-understood only in the classical supergravity limit, g_s≪ 1, ℓ^2≫α'. This corresponds to the λ≫ 1, N≫λ limit of the field theory, which is a large N, strongly coupled limit. On the other hand, the weak 't Hooft coupling limit of the field theory, λ≪ 1, corresponds to String theory on an AdS_5 background where the α' corrections are large. The duality thus interchanges weak with strong coupling on the two sides. To verify the duality, which is believed to hold for all ranges of parameters, one must thus be able to perform strong-coupling calculations on one of the sides. This is possible for some protected observables which do not receive quantum corrections thanks to supersymmetry. In the planar N→∞ limit, more progress could actually be achieved by the exact solution of the spectrum for all values of λ <cit.>, confirming the validity of the duality. Conversely, assuming the hypothesis to be correct, the duality gives us a way to define string theory non-perturbatively on AdS spacetimes, a formulation that is still completely lacking in flat spacetime.

Although Maldacena's derivation gave a compelling argument for the duality, it did not outline how to use it. In other words, there was still no full "dictionary" that explained how to match quantities in both theories, besides the symmetry groups and parameters. This was explained in the papers <cit.>. In a nutshell, the computation of correlation functions in the dual CFT is beautifully encapsulated by the formula (<ref>) ⟨exp(-∫ d^Dx ϕ_0(x)O(x))⟩_CFT = Z_string[ϕ(x⃗,z)|_z=0=ϕ_0(x⃗)] .

There is a lot to unpack in this formula. Let us first look at the LHS. This is the generating functional for correlation functions of the operator O. By taking functional derivatives w.r.t. ϕ_0, we can compute any correlation function containing O. ⟨ O(x_1)O(x_2)…⟩_CFT = δ/δ (-ϕ_0(x_1)) δ/δ (-ϕ_0(x_2)) … (⟨exp(-∫ d^Dx ϕ_0(x)O(x))⟩_CFT)|_ϕ_0=0 .

On the RHS, Z_string denotes the partition function of the string theory on the asymptotically AdS background. It has a boundary condition on the field ϕ(x,z); it should match ϕ_0 at the boundary of AdS, which we assume to be located at z=0. Thus, for each field propagating in the String theory (examples are the dilaton, the graviton, or any other possible excitation of the string) there will be a corresponding operator in the dual CFT. Even defining Z_string generically is a very difficult task. As we have already stated, computations on the string side are almost always done in the supergravity limit. In this limit, we can use the saddle point approximation to find Z_string. Schematically, in the Euclidean picture : Z_string[ϕ(x⃗,z)|_z=0=ϕ_0(x⃗)]=∫_ϕ(x⃗,z)|_z=0=ϕ_0(x⃗) Dϕ e^-S_E ≈ e^-S^c_E(ϕ)∫ Dϕ e^-δ^2 S/(δϕ)^2(δϕ)^2+O(δϕ^3) , where S^c_E(ϕ) denotes the Euclidean action evaluated on the classical solution that satisfies the appropriate boundary conditions. Here we lumped all the fields into ϕ; thus among them there should always be the metric g_μν, as we are considering a string theory. This is the main reason the definition of the path integral (<ref>) is hard. The saddle point integration gives the leading order contribution in the small g_s (large N) expansion.
The (δϕ)^2 term in the exponential accounts for the leading quantum corrections to the saddle point approximation. Even at the classical level, computing this quantity is not straightforward, as it is generally divergent as we take the limit z→ 0, and requires regularisation and the addition of counterterms to make sense of it. This procedure is called "holographic renormalization" <cit.>. Indeed, these IR divergences that arise in the computation of the action are the holographic manifestation of the UV divergences of the dual field theory. This highlights another key aspect of the AdS/CFT correspondence, the "UV/IR connection" <cit.>, namely that UV effects in one theory are realized as IR effects in the other, and vice-versa.

One last interesting remark about (<ref>) is that it lends itself to the interpretation that the CFT lives on the physical boundary of AdS. In other words, we view the gravity theory as living in the "bulk" or "volume" of spacetime, and the field theory as living on its boundary. Whether this interpretation has any real "physical" significance is up for debate. Nonetheless, this point of view is very useful for visualization, and we will use the associated jargon extensively.

Let us conclude by illustrating (<ref>) with the basic example of a scalar field. By solving the wave equation in pure AdS (since what matters to us is the asymptotic behavior, where spacetime always looks like empty AdS), one finds that the propagating field ϕ has two independent modes with the following leading behaviours on the boundary (in units 8π G=1): ϕ(x,z)= (A(x)+O(z^2)) z^Δ+(B(x)+O(z^2)) z^D-Δ , Δ = D/2+√(D^2/4+ℓ^2 m^2) . The subsequent O(z^2) corrections are algebraically determined from the equation of motion in terms of A(x) and B(x) [There is an exception in the case where Δ is an integer. Then, there is a log z^2 correction to A(x). This coefficient is related to the conformal anomaly in the field theory, but we omit those details here.]. We will assume D<2Δ, which is the Breitenlohner-Freedman bound <cit.> necessary for the scalar field not to destabilize AdS. Note that this allows for tachyonic fields with m^2<0, which is not the case in flat space. Above this bound, the dominant solution at z→ 0 is the B(x) mode, which is the one that will be matched to ϕ_0(x). To do so we must regularise, and we do so by putting a cut-off at z=ε. We see then that for the boundary condition to be satisfiable, we should replace ϕ_0(x) → lim_ε→ 0 ε^D-Δ ϕ_0(x). Then the boundary condition simply states B(x)=ϕ_0(x). Note that A(x) is still undetermined. In the Euclidean theory it is determined by requiring regularity of the solution in the interior, while in the Lorentzian theory there could be additional freedom that corresponds to the choice of the boundary state. If we go through all the steps to renormalize the action and compute S^c_E(ϕ), we can then compute the one-point function of the associated operator <cit.> with (<ref>) and (<ref>): ⟨ O(x)⟩ = δ S^c_E/δϕ_0(x)=-(2Δ -D)A(x)+C(B(x)) , where C(B(x)) is a regularisation-scheme-dependent term. Note that we swept under the rug most of the computation, the point here being only to underline the fact that the second free coefficient A(x) determines the expectation value of the operator that is dual to the scalar field.

§ THE BOTTOM-UP APPROACH

In the previous section, we presented a lightning overview of the AdS/CFT correspondence in a putative exact or "top-down" holographic setting.
Here the correspondence is posited between UV complete gravity theory and a precise dual (S)CFT. While there are numerous such examples in string theory, these are often cumbersome to work with. Furthermore one would like to work in a more general framework, not restricted to a precise holographic setting. Such an approach is coined "bottom-up" holography. The basic idea is to take any AdS gravity model, with a chosen set of fields tailored to what we want to study. Then, we can identify with the holographic dictionary a putative CFT dual to our theory for which we can do computations in the gravity side. The putative dual CFT, if it exists, would most likely be strongly coupled. This is approach is useful in condensed matter physics<cit.>, where the microscopic UV Hamiltonian is rarely known. There is an alternative denomination in this case, "AdS/CMT"(AdS/Condensed Matter Theory).The bottom-up holography relies on much flimsier ground than the precise correspondence that we outlined in the previous section. Indeed, we implicitly assume that our hand-picked model is realized as an effective theory of some UV complete quantum gravity with a holographic dual. In other words, it somehow assumes that given any CFT or AdS gravity theory, one could embed it into a precise correspondence of UV complete theories. More modestly one assumes that the bottom-up theory retains some qualitative features of strongly coupled CFTs that are hard to compute ab initio. An example are strange non-Fermi liquid phases of matter<cit.>. §.§ Minimal holographyIn the class of "bottom-up" approaches, one stands out from the others as the most general one. In this model, which we will call "minimal holography", we allow only the bare minimum needed to be able to formulate an AdS/CFT duality. While the resulting model is not very rich, it makes up for it in generality. Indeed the minimality of the model will also mean that results derived within it will be applicable universally in a broad class of holographic models. Furthermore, we will consider the appropriate limit so that computations in the bulk reduce to classical gravity, namely large N (large central charge) and strong coupling for the CFT. Although we didn't review the large N limit, we will need only the fact that it can be seen as a "classical" limit. By that we mean that correlators of reasonable[By "reasonable", we mainly mean operators which are not composed of a parametrically large number of fields. If this number scales parametrically with N, the large-N limit cannot be easily taken and the "classical" limit doesn't apply anymore.] operators will factorize, ⟨ A B⟩ = ⟨ A ⟩⟨ B ⟩ +O(1/N)<cit.>. Thus all we will need to define our state completely will be the expectation value of the operators.In fact we will restrict ourselves to a single operator whic is always present in any local CFT, the Stress tensor T_μν. In a theory living in flat space, the only choices left to us are the CFT central charge c and its state defined by choosing the expectation values ⟨ T_μν⟩. Of course, these must respect the constraints imposed by the conformal invariance. In the 2-dimensional theories in which we work, these constraints are most easily expressed in lightcone coordinates, see (<ref>) : ⟨ T_+-⟩ = 0,⟨ T_++⟩ = ⟨ T_++⟩(w_+),⟨ T_–⟩ = ⟨ T_–⟩(w_-).As an example, the vacuum state will have a vanishing stress-tensor on the plane, or carry the Casimir energy (<ref>) on the cylinder.From the bulk point of view, the situation is equally simple. 
The only necessary field for a gravity theory to be defined is naturally the graviton. There will of course be a negative cosmological constant, and its value will be set to satisfy the Brown-Henneaux formula (<ref>). The action will be given by (<ref>) in Euclidean space, and by the Wick-rotated version in Lorentzian signature. As usual the metric will be the field dual to T_μν which is the unique spin-two field in our simple theory.To obtain the precise matching between the metric and the stress-tensor, we must follow the GKPW procedure. The first step is finding the general solution to the Einstein equations, with the constraint that the metric be Asymptotically AdS. We thus use the Fefferman-Graham gauge (<ref>) in which the constraints are explicit. In two dimensions, it can be shown that the generic solution to the equation of motions takes the form <cit.> : g_ij(z,x) = ℓ^2/z^2(g^(0)_ij+z^2 g^(2)_ij+z^4/4g^(2)_ikg_(0)^klg^(2)_lj) . g_ij^(0) is immediately fixed to be equal to the metric of the manifold on which the CFT lives, flat in our case. Thus, as we expected from the general procedure, there remains one degree of freedom to fix in g^(2)_ij. Einstein's equations fix the general form (<ref>). g^(2)_ij =1/2(R^(0)g^(0)_ij+ a_ij) ,^(0)_i a^ij=0,a_i^i=- R^(0) . In our case, R^(0)=0 since the g^(0) metric is flat. Going through the steps to compute (<ref>)we find, as should be expected by the suggesting constraints of (<ref>) :a_ij = 2⟨ T_ij⟩/ℓ .We see that the constraints on a_ij are the same as those imposed on the stress-tensor by conformal invariance. In particular, we recover the conformal anomaly in the case where the CFT is put on a curved manifold : ⟨ T_i^i ⟩ = -ℓ/2R^(0)=-c/24R^(0) . In the special case where g^(0) is flat, we can then explicitly write the full bulk metric as follows : ds^2 = ℓ^2 dz^2/z^2+ℓ^2/z^2(dx^-+z^2 ⟨ T_++⟩/ℓ dx^+)(dx^++z^2 ⟨ T_–⟩/ℓ dx^-) . §.§ An extra interfaceThe previous section describes the minimal holographic model valid for the calculation of energy-momentum correlators in any homogeneous CFT. We would like now to slightly expand this model to include CFTs with an interface, as described in section <ref>. We now have to deal with two CFTs (denoted by numbers 1, 2) which are joined on an interface. For each CFT_i, the procedure from the previous section goes through unchanged, so we have two copies of each equation. As for the interface, in the minimal case it will be characterized by a single λ (the tension of the dual brane, as we will see). This parameter will determine the value of the reflection and transmission coefficients<cit.>, defined in (<ref>), as well as the entropy or g-factor<cit.> of the interface. This is a special feature of our model, more generally the transport coefficients and entropy are independent quantities.The displacement operator D, defined in (<ref>) is also a universal operator, which is however completely fixed from the choice of the state for the CFTs on either side.For the holographic gravity dual of this minimal model, the derivation is a bit more heuristic. We know from (<ref>) that the metric close to the boundary will change as we cross the interface. Thus the metric dual to this configuration should be perturbed by the insertion of some object (dual to the interface) such that it modifies Einstein's equations to allow for the metric transition. In the UV-complete theory of gravity, this transition is expected to be smooth. 
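As an aside, one can verify by direct computation that the family of metrics (<ref>) solves the three-dimensional Einstein equations R_μν = -(2/ℓ^2) g_μν for arbitrary chiral functions ⟨T_±±⟩. The following is a minimal sympy sketch of this check (our own verification, not part of the argument above):

```python
import sympy as sp

z, ell = sp.symbols('z ell', positive=True)
xp, xm = sp.symbols('x^+ x^-', real=True)
Tp, Tm = sp.Function('Tpp')(xp), sp.Function('Tmm')(xm)   # <T_++>(x^+), <T_-->(x^-)
coords = [z, xp, xm]

# Metric of (<ref>): ds^2 = ell^2 dz^2/z^2
#   + (ell^2/z^2)(dx^- + z^2 Tp/ell dx^+)(dx^+ + z^2 Tm/ell dx^-)
a, b = z**2 * Tp / ell, z**2 * Tm / ell
g = sp.zeros(3, 3)
g[0, 0] = ell**2 / z**2
g[1, 1] = (ell**2 / z**2) * a                 # g_{++} = ell * Tpp
g[2, 2] = (ell**2 / z**2) * b                 # g_{--} = ell * Tmm
g[1, 2] = g[2, 1] = (ell**2 / z**2) * (1 + a * b) / 2
ginv = g.inv()

# Christoffel symbols and Ricci tensor, built by hand
Gamma = [[[sum(ginv[r, s] * (sp.diff(g[s, m], coords[n]) + sp.diff(g[s, n], coords[m])
                             - sp.diff(g[m, n], coords[s])) for s in range(3)) / 2
           for n in range(3)] for m in range(3)] for r in range(3)]

def ricci(m, n):
    R = 0
    for r in range(3):
        R += sp.diff(Gamma[r][m][n], coords[r]) - sp.diff(Gamma[r][m][r], coords[n])
        for s in range(3):
            R += Gamma[r][r][s] * Gamma[s][m][n] - Gamma[r][n][s] * Gamma[s][m][r]
    return sp.simplify(R)

# Einstein condition in 3d with Lambda = -1/ell^2:  R_{mu nu} + (2/ell^2) g_{mu nu} = 0
check = [sp.simplify(ricci(m, n) + 2 * g[m, n] / ell**2) for m in range(3) for n in range(m, 3)]
print(check)   # -> [0, 0, 0, 0, 0, 0] for arbitrary Tpp(x^+), Tmm(x^-)
```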
Returning to the interface: at low energies, when the internal structure of the domain wall cannot be resolved, it can be considered as infinitely thin. We thus postulate the "thin membrane" approximation <cit.>, in which the transition between the two metrics happens sharply along a membrane of codimension one, parametrized as x^μ(y^a). Naturally, this membrane should be anchored at the position of the CFT junction on the boundary, since we know from (<ref>) there is a sudden (although continuous) asymptotic metric change at that junction. To describe the dynamics of this object, we need to add a term to the Einstein-Hilbert action of the system. The minimal and most general way to do this is to simply imbue the membrane with a tension λ. The term describing its dynamics is then : S_mem = -λ ∫_Mem d^(D-1)y √(-h) , h_ab = g_μν ∂x^μ/∂y^a ∂x^ν/∂y^b . So the number λ determining the minimal properties of the CFT interface acquires a much clearer interpretation in the bulk as the tension of a membrane.

One might ask what is the dual of the displacement operator D, and this is naturally encoded in the metric h_ab of the wall. Since it is itself determined by x^μ(y^a), we can also see it as encoded in the shape of the membrane. As D was completely determined given the state of both CFTs, one should expect that to also be the case for the dual object, x^μ(y^a). As we will see in the main chapters, this expectation is true and the equations arising from (<ref>), added to the Einstein equations, will completely fix the shape of the membrane. Let us finish by mentioning that one can augment the model, either by adding fields in the bulk together with their dual operators, or by adding fields restricted to the membrane, which should correspond to operators on the interface from the CFT side (see <cit.> for an application of these more complicated walls).

§ ENTANGLEMENT ENTROPY AND THE RYU-TAKAYANAGI PRESCRIPTION

While presenting the holographic correspondence, we did not yet talk about a key ingredient that was the first clue to the holographic nature of gravity: entropy, and more precisely, its quantum counterpart, the fine-grained or "Von Neumann" entropy. By trying to save the second law of thermodynamics in their study of black holes, Hawking and Bekenstein <cit.> soon realised that the entropy of a black hole had to be curiously proportional to its area, instead of its volume as it usually is for ordinary objects. The precise Bekenstein-Hawking formula reads : S_BH = A/4G , where A is the black hole's horizon area. This strongly suggests that the degrees of freedom of the black hole are localized on its surface. It is then natural to expect that this may hold more generally, since any system can be collapsed to a black hole if made compact enough. If such a generalization of (<ref>) exists, it should come about naturally from AdS/CFT; this was indeed the ingenious conjecture of Ryu and Takayanagi.

§.§ Entanglement entropy

Let us first recall the definition of von Neumann entropy. Consider a quantum system described by a density matrix ρ. For a "pure state", which can be described by a vector |ψ⟩ in the Hilbert space, the density matrix is ρ = |ψ⟩⟨ψ |. However, it can also account for more general, "mixed" states, in which case the density matrix can be put in the general form : ρ = ∑_i p_i |ψ_i⟩⟨ψ_i| , Tr(ρ)=∑_i p_i =1 . Such a probability mixture of quantum states arises because of a lack of knowledge of the state.
For instance, for a thermal state, we would have : ρ = ∑_ie^-β E_i|ψ_i⟩⟨ψ_i| ,where our ignorance comes from the fact we don't know the precise microscopic state of the system. Still, the knowledge of the macroscopic (inverse) temperature β allows us to assign probabilities to the possible microstates, which is encoded in (<ref>). We would like to define an "entropy", which quantifies our ignorance about the state of the system. Inspired from Shannon's entropy that does the same thing for probability distributions, we define the Von Neumann entropy, which we will also call "fine-grained entropy", and "entanglement entropy" when appropriate :S = - Tr(ρlogρ) ( = -∑_i p_i ln(p_i)) . The way we introduced (<ref>) looks for now completely unrelated to entanglement. To make the connection clear, we introduce the purification of a density matrix. Consider a generic density matrix as in (<ref>), constructed from states from the Hilbert space H_A. Then, introduce another arbitrary Hilbert space H_B that we will use to "purify" ρ. For this purpose it is sufficient that H_B be of the same dimensionality as H_A, but for any specific ρ a much smaller Hilbert space might suffice. Then we define the pure density matrix in H_A⊗ H_B : ρ_AB = |Ψ_AB⟩⟨Ψ_AB|,|Ψ_AB⟩=∑_i √(p_i)|ψ_i⟩ |b_i⟩ ,where the |b_i⟩ are vectors of H_B.Then, ρ_AB is called the purification of ρ. The state is obviously pure, and one can easily show that we recover the original density matrix by tracing on H_B, i.e. ρ =Tr_H_Bρ_AB. Notice that ρ_AB is not defined uniquely; in fact given any matrix U s.t. U^† U =I, the state |Ψ_AB⟩=∑_i√(p_i)|ψ_i⟩ U|b_i⟩ is a valid purification. One can show that all purifications can be written in this form.It is from here that we can justify the name "entanglement entropy" for S. Indeed, it can be seen as a measure of the amount of entanglement between the two systems A and B described by ρ_AB. When one traces out the degrees of freedom of B, one loses the information that was contained in the entanglement between the two systems, and we end up with a mixed density matrix of non-zero entanglement entropy.As an easy check, for a pure matrix ρ we get S=0, and for finite Hilbert spaces S is maximized when ρ is an uniform mixture of all states in H_A. The purification is then given by a state |Ψ_AB⟩ which is maximally entangled.Thus, we must be wary of what quantity does the Von Neumann entropy compute for us. If we consider a pure state, and compute the Von Neumann entropy of a subsystem, then that will in fact tell us about the amount of entanglement it had with the rest of the system. However, if we repeat the same procedure on an initially mixed state, there will be an additional contribution coming from our initial ignorance, unrelated to entanglement. While it is true that S can always be viewed as measuring entanglement through a purification (<ref>), one must keep in mind if the purifying system is physical, or merely a mathematical trick[There are ongoing efforts to find a quantity that can distinguish classical correlations from quantum entanglement, one example being the "negativity"<cit.>].In practice, computing S directly in QFT is often hopeless, in part because of the need to compute the logarithm of the density matrix, for which we need knowledge of the eigenvalues of ρ. An alternative way to compute it is as a limit of "Renyi entropies" S^(n):S^(n)= 1/1-nln( Tr(ρ^n))S = lim_n→ 1S^(n) . Of course, the limit in (<ref>) is at best ill-defined, since S^(n) is formally defined only for integer n. 
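For a finite-dimensional density matrix, however, the continuation is easy to make concrete. Here is a minimal sympy sketch for a two-level mixed state (the example and the value of p are ours):

```python
import sympy as sp

n = sp.Symbol('n', positive=True)
p = sp.Rational(1, 3)                      # any 0 < p < 1 will do

# Two-level density matrix rho = diag(p, 1-p), so Tr(rho^n) = p^n + (1-p)^n
renyi = sp.log(p**n + (1 - p)**n) / (1 - n)

# Analytic continuation in n and the n -> 1 limit ...
S_limit = sp.limit(renyi, n, 1)
# ... against the von Neumann entropy -Tr(rho log rho)
S_direct = -(p * sp.log(p) + (1 - p) * sp.log(1 - p))
print(sp.N(S_limit), sp.N(S_direct))       # both ~ 0.6365: they agree

# a few integer Renyi entropies for comparison
print([float(renyi.subs(n, k)) for k in (2, 3, 4)])
```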
Underlying this procedure there is then a notion of analytic continuation of S^(n), whose legitimacy we will not attempt to justify. For functions specified on the integers, Carlson's theorem provides sufficient conditions for the existence of a unique extension <cit.>.

Before proceeding, let us mention that in QFT the entanglement entropy will always be a UV-divergent quantity. This is simply understood from the fact that there are degrees of freedom at every scale, and in particular at arbitrarily small scales. So, as we pick out a region A of a QFT, there will be entangled pairs at arbitrarily small scales across the boundary ∂A of the region. We expect then a UV-divergent entanglement entropy proportional to the area of this boundary, i.e. in a QFT in D=d+1 dimensions <cit.> : S_A = Area(∂A)/ϵ^d-1+… , where ϵ is a UV-cutoff. Thus when we speak about entanglement entropy of QFTs and CFTs in what follows, we implicitly assume that they are regulated by a UV-cutoff. For d=1, the formula is ill-defined, and one finds instead a divergence in log(ϵ).

The technical way in which S^(n) is calculated in QFT is called the replica trick. Consider the Hilbert space H within which ρ is defined. The replica trick involves introducing an extended Hilbert space H^⊗ n, composed of n "replicas" of H. We will show that ρ^n can be obtained by computing a partition function in this extended Hilbert space. To explain this method, we need first to understand how the density matrix is defined in QFT. The usual way to define a state in QFT is by preparing it with the help of a Euclidean path integral. Consider for simplicity a QFT with a single scalar field ϕ. Then, a basis for the Hilbert space is |ϕ(x)⟩ where ϕ(x) is an arbitrary function of the spatial coordinates. One can define a state |Ψ⟩ simply by specifying ⟨ϕ(x)|Ψ⟩ for any ϕ(x). This can be done through the path integral by writing |Ψ⟩ as a Euclidean evolution of a state |ϕ_1(x)⟩ over an arbitrary geometry: |Ψ⟩ = e^-β H|ϕ_1(x)⟩ = ∫^ϕ(τ=β)=??_ϕ(τ=0)=ϕ_1 Dϕ e^-S_E(ϕ) . The "??" notation simply shows that the upper limit of the integral is unspecified. With (<ref>), overlaps can be computed naturally as : ⟨ϕ_2|Ψ⟩ = ∫^ϕ(τ=β,x)=ϕ_2_ϕ(τ=0,x)=ϕ_1 Dϕ e^-S_E(ϕ) . In fact, (<ref>) defines for us the wavefunction Ψ(ϕ_2).

To choose a specific state, we can modify the geometry of the Euclidean manifold on which the path integral (<ref>) is computed, as well as change the boundary condition ϕ_1. In addition to that, one may also include operator insertions on this manifold, i.e. one might add operators in the path integral (<ref>). A cartoon depicting this state preparation procedure is fig.<ref>. Although the procedure (<ref>) might seem strange, it turns out to be very useful to produce some of the widely used states. One example is the vacuum state, which can be obtained by making an infinite time evolution from τ = -∞ to τ = 0. As for the boundary condition at τ=-∞, it is irrelevant as long as we choose a state which has non-zero overlap with the vacuum state. Indeed, under Euclidean time evolution the coefficients of the energy eigenstates will be suppressed by e^-τ E, and after an infinite time only the vacuum state will remain : |0⟩ = ∫^ϕ(τ=0)=??_ϕ(τ=-∞)=ϕ_1 Dϕ e^-S_E(ϕ) .

To pass from states to density matrices, there is but a step. Formally a density matrix will contract with a bra and a ket, and spit out a number. Thus analogously to (<ref>), it can be defined by a Euclidean path integral with two free boundary conditions, one on each side.
Here, the most famous example is the thermal density matrix : ρ=1/Z e^-β H=1/Z∫^ϕ(τ=β/2)=??_ϕ(τ=-β/2)=?? Dϕ e^-S_E(ϕ) . From here on we will take (<ref>) as the prototypical example, but everything we will say applies for any density matrix defined with the Euclidean path integral method. Z is such that the density matrix has unit trace, see (<ref>).

Products of density matrices are then computed straightforwardly. Pictorially, we glue the two manifolds at the free boundaries, combining them into one (bigger) path integral. For instance, ρ^2 would be the same integral as (<ref>), but on a time interval of 2β. Finally, taking the trace is also relatively straightforward. Morally it is simply Tr ρ = ∑_i ⟨ϕ_i|ρ|ϕ_i⟩, which from (<ref>) is the path integral with equal initial and final boundary condition, summed over all possible boundary conditions. This is equivalent to imposing periodic time on the Euclidean manifold on which we do the path integral. Thus, the trace of ρ can also be expressed as a path integral on the manifold compactified in the τ direction. As an example, for a theory defined on a spatial circle, the thermal partition function Z(β)= Tr e^-β H can be depicted as a path integral on a torus, which justifies a posteriori the earlier claim we made that a QFT on a Euclidean manifold with periodic time coordinate is at finite temperature.

After this diversion we can go back to entanglement entropy. Consider now a QFT, whose state on a spatial slice is described by ρ, defined by the path integral method. We separate that spatial slice into two systems A and B. We now make the intuitive (although rather subtle) assumption that the Hilbert space of a local QFT splits into two pieces H=H_A⊗ H_B. As we have said, to compute the reduced density matrix on A, we need to trace out the degrees of freedom of B. This partial trace comes about again quite naturally from the diagrammatic picture; in the path integral (<ref>) we impose periodic boundary conditions only on the spatial region B, while on the region A we leave them unspecified. This will indeed give us the density matrix ρ_A, as now we need to feed in states defined only on A.

Then, we need to compute Tr(ρ_A)^n. Once more, the simplest way to view this is diagrammatically. What we obtain is a path integral on a complex n-sheeted manifold (denoted R^(n)), where the sheets are connected consecutively through the cut in the A region, see fig.<ref>. Notice that circling around the cuts yields a deficit angle 2π(1-n). By this we mean that passing through all the sheets we make an angle 2π n before returning to the initial point. In this way the region A for which we want to compute the Renyi entropy becomes a cut in the Euclidean geometry. While the path integral fig.<ref> can be done by considering fields living in the full Riemann surface, it is much more natural, following the construction, to consider also "replicated" fields ϕ_i(τ,x), one copy living in each sheet. The gluing of the sheets along the cuts is then expressed as a cyclic boundary condition on the replicated fields, ϕ_i(τ=0^+,x ∈ A)=ϕ_i+1(τ=0^-,x ∈ A). The action of the full system is given by a sum of the n copies of the action for each of the ϕ_i. This picture induces a "replica symmetry", which is simply the cyclic permutation ϕ_i→ϕ_i+1. Leveraging this symmetry, instead of considering the n-sheeted Riemann surface, we can consider a single sheet, with a cut spanning region A.
Along this cut, we will have the aforementioned cyclic boundary conditions, which will link the fields ϕ_i that are otherwise non-interacting. In the 2d case, one can show that, thanks to the replica symmetry, the shape of the cut is unimportant; only the two endpoints of A affect the computation of the partition function. This allows the interpretation of the cyclic boundary conditions as arising from the insertion of "twist" operators at the endpoints of A. The partition function (<ref>) is then reformulated as the correlator of two twist operators located at the two endpoints of A. This correlator is formally computed in the theory on the complex plane, with n non-interacting fields ϕ_i. The cut and the associated boundary conditions are induced by the twist operator insertions.

Whatever the method, the name of the game then becomes the computation of the path integral (<ref>). This is prohibitively complicated in a general QFT. Some results can be obtained in lower dimensions, especially in cases where conformal or other additional symmetries help to simplify the expression, see <cit.>. Here, we are content with the formal description, and will focus on the way this quantity can be computed holographically. Let us begin with the result, and then briefly sketch a proof.

§.§ The Ryu-Takayanagi prescription

The Ryu-Takayanagi (RT) prescription <cit.> allows one to compute the entanglement entropy of a region A of a constant-time slice of a holographic CFT in a static configuration. The static requirement is crucial here, as it allows one to define the state with the Euclidean path integral prescription, which in turn allows the use of (<ref>). A time-dependent state will in general require adding the time evolution on top of the Euclidean path integral that prepares the state. There is still a path integral representation of this process, called the Schwinger-Keldysh path integral <cit.>, but we will not worry about it here.

Consider the bulk gravity solution dual to our CFT state. The metric describing it will be static as well. Then, a natural foliation into spacelike slices is given by the surfaces orthogonal to the timelike Killing vector ∂_t, which is guaranteed to exist by the definition of staticity. Denote the entanglement entropy of a sub-region A of the spatial slice t=0 in the CFT as S_A, and the associated spatial slice in the bulk as M_D-1. Then the RT prescription states : S_A = 𝒜/4G , 𝒜 = Min_S∈𝒮( Area(S)) , where 𝒮 = {S ⊆ M_D-1 | ∂S = ∂A}. See fig.<ref> for a visualization of 𝒮 in the case of a D=3-dimensional bulk. The prescription (<ref>) is very powerful, since the computation reduces to a minimization problem in classical geometry. It is in any case much simpler than (<ref>), and indeed in higher dimensions it is often the only way of computing entanglement entropies, provided the theory is holographic. We should however caution that this formula is only valid when the bulk dual reduces to classical (super)gravity, hence in the large-N and strong 't Hooft coupling limit. Neither 1/N nor stringy corrections are included in this prescription.

In chap.<ref> we provide several examples where the validity of this formula is confirmed. Impressively, we can do better; there exists a "proof" of (<ref>) <cit.> leveraging the Euclidean construction of fig.<ref>. For the holographic derivation of the Renyi entropy, we would like to compute Tr(ρ_A)^n. According to the dictionary, we should find a dual bulk for which the boundary metric approaches that of the n-sheeted Riemann surface R^(n).
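As a quick aside before constructing this dual bulk explicitly, the prescription (<ref>) can be checked numerically in the simplest setting: an interval of width l in the vacuum of a 2d holographic CFT, dual to Poincaré AdS_3, where the minimal surface is a semicircular geodesic on the constant-time slice. The sketch below (all parameter values are arbitrary) compares the regulated geodesic length with the expected CFT answer S = (c/3) ln(l/ϵ):

import numpy as np
from scipy.integrate import quad

L_ads, G = 1.0, 1.0                  # AdS radius and Newton constant (arbitrary)
l, eps   = 2.0, 1e-4                 # interval width and UV cutoff (arbitrary)

# Constant-time slice of Poincare AdS3: ds^2 = (L_ads/z)^2 (dz^2 + dx^2).
# The minimal curve anchored at x = -l/2 and x = +l/2 is the semicircle
# x = (l/2) cos(theta), z = (l/2) sin(theta), with line element L_ads dtheta/sin(theta).
theta_eps = np.arcsin(2 * eps / l)                     # angle where z = eps
length, _ = quad(lambda th: L_ads / np.sin(th), theta_eps, np.pi - theta_eps)

S_RT  = length / (4 * G)
c     = 3 * L_ads / (2 * G)                            # Brown-Henneaux central charge
S_CFT = (c / 3) * np.log(l / eps)
print(f"S_RT = {S_RT:.4f}   S_CFT = {S_CFT:.4f}   (agree up to O(eps))")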
Returning to the holographic computation: calling M_n the bulk manifold whose boundary metric approaches R^(n), we have Tr(ρ_A)^n = Z^CFT_R^(n)/(Z^CFT_R)^n =_holography Z^gravity_M_n/(Z^gravity_M)^n . Let us remember that in the saddle approximation, computing Z^gravity simply amounts to evaluating the Einstein-Hilbert action for a solution of the equations of motion.

As an example, consider the simpler-to-visualize case where A is the full spatial slice, and the state is thermal. The replica geometry in question is an asymptotically AdS space where the time coordinate has an extended periodicity of 2π n instead of the usual 2π. This geometry is sketched in fig.<ref>. Let us get a feeling for the replica bulk geometry, which we denote by M_n. Looking at the metric close to the boundary, as we go around the cut on a τ circle we expect the deficit angle to extend into the bulk. This naturally implies the existence of a codimension-2 surface extending the branch points into the bulk. As we cross it, we pass from one bulk replica to the next, creating a deficit angle in the bulk.

To restrict our search space, we will assume that the replica symmetry of the n-sheeted boundary geometry extends into the bulk. This is a reasonable assumption to find the dominant saddle, as it usually respects the maximum amount of symmetry. Replica symmetry breaking will in general contribute higher 1/N corrections to the formula <cit.>. Proceeding as in the field theory construction, one can consider the quotiented geometry M_n/ℤ_n, on which we consider n copies of the action. The branch cut in the resulting geometry should carry a conical singularity of opening angle 2π/n, which will enforce the cyclic boundary conditions for the n copies of the fields. In the same vein as the "twist fields" construction, this branch cut can be naturally generated by introducing a codimension-2 object with tension T^(n) = (n-1)/n, which will backreact on the geometry appropriately <cit.>.

The result of this construction is that one has to consider the Euclidean action (<ref>) in order to compute Z^gravity_M_n : I^n = I^M_n_EH + T^(n)/4G ∫_N d^D-2x √(h) , where N is a codimension-two spacelike surface, h_ij its induced metric and I_EH is the Einstein-Hilbert action including the Gibbons-Hawking term as well as counterterms. According to the holographic prescription, we must find solutions of (<ref>) with boundary conditions set by the CFT state on the boundary. In particular, N should be anchored on ∂A, as we mentioned.

The equations of motion for the metric are the usual ones away from N. Because of the staticity, we can immediately restrict N to the τ=0 spatial slice. On this slice, the equations of motion from (<ref>) then tell us that the surface should be of minimal area.

The final step lies in the evaluation of Z^gravity_M_n = e^-n I^n (the factor of n coming from the n copies of the quotient action), and its subsequent differentiation. Consider I^n evaluated on shell. The differentiation ∂_n I^n can then be seen as a variation of all the fields, including the metric. Since we are on-shell, δI^n will vanish for a generic variation of the fields, up to boundary terms. Here we have two boundaries: the asymptotic boundary and the location of the bulk branch cut. The first contribution is canceled by the Gibbons-Hawking term and does not contribute. The branch cut, however, makes a non-vanishing contribution.

To compute its value, we excise a codimension-1 tubular region of radius ϵ around the cut, denoted N^ϵ.
This region is an additional boundary of our spacetime, thus we know that the variation of the E-H action will produce a boundary term of the Gibbons-Hawking type : ∂_n I^n = - 1/(8π G) ∂_n ∫_N^ϵ d^D-1x √(h_ϵ) K_ϵ , where quantities indexed by ϵ refer to the aforementioned tubular neighborhood of the branch cut. One can compute this contribution at leading order in ϵ by considering the induced metric h_ij near the branch cut and expanding it locally. By this method one shows that, at leading order in ϵ, the boundary integral scales as 1/n, and thus : ∂_n I^n = Area(N)/(4 G n^2) , where, being on shell, N satisfies precisely the conditions explained in (<ref>) (minus the homology constraint, which is included a posteriori <cit.>). With Z^gravity_M_n = e^-n I^n, and expanding (<ref>) for n ≈ 1 : S^(n) = 1/(1-n) ln( Tr(ρ_A)^n) = -1/(1-n)( n I^n - n I^n|_n=1 ) = -1/(1-n)( n(n-1) ∂_n I^n|_n=1 + o((n-1)^2) ) = n Area(N)/4G + o(n-1) . Taking the limit n→ 1 finally gives us the von Neumann entropy according to (<ref>), and demonstrates the validity of the RT conjecture.

§.§ Non-static spacetimes and HRT prescription

With the RT formula in hand, we have a powerful geometric method that computes entanglement entropies of holographic CFTs, provided that they are in a static configuration. The natural extension one wants to consider is to more general, non-equilibrium situations. A few obstacles appear, in principle, to the naive generalization of the RT prescription. In the RT formula (<ref>), one can seek a surface of minimal area because the search is restricted to a spatial slice of the spacetime. In a non-equilibrium situation, given a spacelike slice on the boundary, there is no preferred way to extend it into the bulk. So how can we choose on which slice to search for the minimal surface? One might suggest not restricting the search to a predefined spacelike slice; instead, we could search for the minimal codimension-2 spacelike surface anchored at ∂A on the boundary. However, this problem is ill-posed because of the timelike direction to which we now have access in the bulk. Indeed, given any spacelike surface, we can deform it so that it "zigzags" along nearly lightlike directions; in this way, one can bring the area arbitrarily close to zero.

The correct extension was found by Hubeny, Rangamani and Takayanagi, and is referred to as the HRT prescription. As explained in <cit.>, there are several equivalent ways to formulate it. The key takeaway is that the important property that must be preserved from the RT prescription is not the minimality of the area, but rather its extremality. For any codimension-1 spacelike region A of the boundary (and for any CFT state), the HRT formula is the same as (<ref>), with 𝒮 now being the set of codimension-2 spacelike surfaces which extremize the area functional ∫ d^D-2y √(h). As in general there might be several extremal candidates, we are instructed to pick the one of minimal area. Notice that this trivially reduces to the RT prescription when the state is static, as by time-invariance and time-reversal symmetry one can restrict the extremal surface to lie in the constant-time slice.

In the case of the 3-dimensional bulk that will be of interest here, an extremal codimension-2 spacelike surface is simply a spacelike geodesic. Thus in order to compute HRT surfaces we will need to find spacelike geodesics anchored on the two boundary points, and compute their lengths.

§.§ 1-loop correction and the Quantum Extremal Surface

As already stated, the result that we derived holds at leading order in 1/N.
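Before turning to quantum corrections, a small numerical illustration of the leading-order computation in a thermal state may be helpful. In the planar BTZ geometry the boundary-anchored spacelike geodesic is determined by its turning point, and its regulated length reproduces the standard thermal-interval entropy of the dual CFT. The sketch below uses arbitrary parameter values, sets the AdS radius to 1, and assumes the usual planar BTZ coordinates; these conventions are choices of the illustration rather than results quoted in the text:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

G    = 1.0            # Newton constant (arbitrary)
r_h  = 0.7            # horizon radius; inverse temperature beta = 2*pi/r_h
r_uv = 1.0e5          # radial cutoff, dual to a UV cutoff eps = 1/r_uv
ell  = 3.0            # width of the boundary interval

# Constant-time slice of planar BTZ (AdS radius 1): ds^2 = dr^2/(r^2-r_h^2) + r^2 dx^2.
# A geodesic with turning point rs has dx/dr = rs / (r sqrt((r^2-r_h^2)(r^2-rs^2))).
# The substitution u^2 = r^2 - rs^2 removes the turning-point singularity.
def width(rs):        # boundary separation of the two geodesic endpoints
    f = lambda u: rs / ((u*u + rs*rs) * np.sqrt(u*u + rs*rs - r_h*r_h))
    return 2 * quad(f, 0.0, np.inf)[0]

def length(rs):       # geodesic length, regulated at r = r_uv
    f = lambda u: 1.0 / np.sqrt(u*u + rs*rs - r_h*r_h)
    return 2 * quad(f, 0.0, np.sqrt(r_uv**2 - rs**2))[0]

r_star = brentq(lambda rs: width(rs) - ell, r_h * (1 + 1e-6), 50.0)
S_RT   = length(r_star) / (4 * G)

beta, eps, c = 2*np.pi/r_h, 1.0/r_uv, 3.0/(2*G)
S_CFT = (c/3) * np.log(beta/(np.pi*eps) * np.sinh(np.pi*ell/beta))
print(f"S_RT = {S_RT:.4f}   S_CFT = {S_CFT:.4f}")

At low temperature this reduces to the vacuum result used above, while for long intervals the entropy becomes extensive, as expected of a thermal state.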
To find the next-to-leading order contribution, we need to consider the quadratic fluctuations around the saddle point. As usual, this can be done in the field theory by considering the path integral expression of the replicated partition function, and computing the quantum corrections to the saddle point. Through the AdS/CFT correspondence, these one-loop corrections should also be computable on the gravity side, by appropriately amending the RT prescription (<ref>). This expectation is correct, and we can again use the dictionary to translate the corrections of the field theory path integral to bulk quantities <cit.>. This time, we skip the justification and go straight to the result : S_A = 𝒜/4G + S_bulk(X) + O(1/N^2) , where 𝒜 is defined in the same way as for (<ref>), and X is the bulk region contained between the RT surface and the boundary region A, see fig.<ref>. S_bulk then denotes the entanglement entropy of bulk fields, as computed by tracing their density matrix over the complement of region X. For instance, if the gravity theory has a propagating scalar field, it will contribute to S_bulk a term of the form -Tr(ρ_X ln ρ_X), where ρ_X is the density matrix of the scalar field restricted to X. The important thing to realize is that in (<ref>) we first find the extremal RT surface in the same way as before, and only afterwards compute the additional S_bulk contribution. Note that the derivation of <cit.> applies to the static case, but we will assume it generalizes to arbitrary states.

This leading-order prescription for quantum corrections admits an elegant generalization to all orders, called the Quantum Extremal Surface (QES) prescription <cit.>. The proposal is to extremize the quantity : S^q(X) = Area(∂X)/4G + S_bulk(X) , where ∂X denotes the candidate codimension-2 surface and X is again the bulk region enclosed between it and the boundary region A, as in (<ref>). Note that in the case of the HRT prescription, the region X is found by identifying a bulk Cauchy slice which contains the entirety of the candidate surface. The key difference from (<ref>) is that instead of finding the X that extremizes the area functional, we search for the X that extremizes S^q(X). If X satisfies this condition, it is called "quantum extremal". Call the set of all quantum extremal regions 𝒳. Then the entropy S_A is : S_A = Min_X∈𝒳( S^q(X)) .

The claim of <cit.> is that (<ref>) can be used to compute the entropy of A at all orders, so long as S^q(X) is also computed at all orders. At leading order in N, we recover the RT prescription, as at this order S_bulk(X) does not contribute to S^q(X). It is also possible to show that at next-to-leading order we recover the quantum correction (<ref>). This is not as straightforward, however, because the quantum extremal surface and the classically extremal one of (<ref>) will not coincide in general. It can be shown <cit.> that this difference only affects the O(1/N^2) corrections, confirming that the two prescriptions coincide up to that order.

While this formula is satisfying in its simplicity, using it to compute quantum corrections in the gravity theory is difficult, since it entails calculations of QFT entanglement entropies in S_bulk(X). Nonetheless, we will see in the next and last section of this chapter that it is a crucial ingredient in one of the groundbreaking advances in the black hole information problem. Let us finish by touching on the issue of "bulk reconstruction" <cit.>.
This is the question of how is the bulk encoded in the boundary CFT, and what portion can be reconstructed if we consider only part of the boundary system. The subject is extremely rich, but in essence, the prevailing theory is that bulk reconstruction and entanglement entropy of the dual CFT are intimately related. It is conjectured<cit.> that given a boundary region A (as in fig.<ref>), the reconstructible region of the bulk (also called the entanglement wedge of A) is precisely X. More generally, if one considers also the time direction, the reconstructible region is the region which lies in the Causal diamond of X; namely the points which are causally connected only to points in X.§ THE BLACK HOLE INFORMATION PARADOX AND THE ISLAND RESOLUTIONThis section is a lightning review of the Island formula, a new prescription to compute the entanglement entropy of the Hawking radiation.It was introduced in <cit.> where it was argued that it could help resolve on aspect of the black hole information paradox.§.§ The information paradoxLet us begin by very quickly reviewing the aforementioned paradox. For in-depth reviews, see <cit.>. The black hole information paradox was first brought forth by Hawking <cit.>, who noticed that the thermal evaporation process evolved pure states to mixed ones, thereby violating unitarity.Naively, we would tend to explain away this problem by appealing to the fact that we do not know the underlying quantum gravity theory, and that this problem may go away in this framework. However, the paradox can be stated in a controlled manner, by considering a "nice" Cauchy slicing of the evaporating black hole spacetime, on which the curvatures and energies are much lower than the planck length ℓ_p. Such a slicing can be found for most of the black hole's lifetime, when it is big enough such that the curvatures away from the singularity are not extreme. It will break down when the black hole becomes Planckian, but we will be able to formulate the paradox long before that time.Consider a black hole that is formed by collapse of a pure state, |ψ⟩_matter. We let it evaporate, and collect the Hawking radiation that is produced. The evaporation process can be seen as generated by successive emission of pairs of Hawking quanta, which are produced at the horizon. For our purpose, we can assume the state of the two Hawking pairs to be maximally entangled : |ψ⟩_pair=1/√(2)(|0⟩_in |0⟩_out+|1⟩_in |1⟩_out) . In practice, the Hawking pair state will be more complicated (see <cit.>) but the essential fact that we wish to capture is that the state has an amount of entanglement of order unity. Then, the quanta labeled "out" makes it out to infinity and it is what constitutes the Hawking radiation, while the quanta labeled "in" falls into the horizon and towards the singularity.One important fact that distinguishes this process of evaporation from something more mundane like a piece of burning wood, is that the particle creation happens mostly near the horizon, which is located very far from the matter constituting the black hole for the overwhelming majority of its lifetime. One can make this precise by looking at the geometry of a collapsing shell of matter, and the accompanying "nice" Cauchy slicing, but we will not get into such detail here. This picture and the crucial assumption of locality suggest that the total state of the Hawking pair and the black hole matter factorizes :|Ψ⟩_ total=|ψ⟩_ matter⊗|ψ⟩_pair . 
Then, after n timesteps, the total state will be of the form (<ref>): |Ψ⟩_total = |ψ⟩_matter ⊗ |ψ⟩_pair^⊗ n , where |ψ⟩_pair^⊗ n stands for the n emitted Hawking pairs. This is a rough argument, but it has been argued that corrections, e.g. due to the fact that two consecutive pairs can be created "near" one another, will not alter the final conclusion <cit.>.

To obtain the state of the Hawking radiation, one has to trace out the black hole degrees of freedom (|ψ⟩_matter) as well as the infalling Hawking modes. Tracing out |ψ⟩_matter does not produce any entanglement, but for each partially traced Hawking pair, we increase the entanglement entropy by ln(2). This shows that as the black hole evaporates, the entanglement entropy of the collected radiation increases. This is the crux of the paradox. Letting the black hole evaporate up until the point where it becomes Planck-sized (and our approximation ceases to apply) leads to a mixed density matrix ρ_rad for the collected Hawking radiation. It contains an arbitrarily high amount of entanglement, determined essentially by the size of the initial black hole. Assuming that the evaporation continues until the black hole's disappearance, we reach a contradiction with unitarity: the evaporation turned a pure state |ψ⟩_matter into a highly mixed state ρ_rad. Of course, in our explanation we swept under the rug all the details that render this derivation truly convincing, but they can be found in the review <cit.>.

One possible alternative is that the black hole does not disappear, but some remnant (whose precise description depends on the theory of quantum gravity) <cit.> is left behind. Although this is a logical possibility, such remnants would have unbounded degeneracy (since they must be able to carry arbitrary amounts of entanglement), while having bounded energy and size, which would make them extremely exotic compared to usual matter. To make things worse, if one believes this argument and the Bekenstein-Hawking formula, the paradox appears well before the black hole approaches Planckian size. Indeed, the thermodynamic entropy of the black hole (given by (<ref>)) gives us an upper bound on its von Neumann entropy, as the thermodynamic version can be seen as a "coarse grained" value of the entanglement entropy[The thermodynamic entropy of a state can be defined as the maximal possible entanglement entropy of the microscopic states allowable, given the macroscopic constraints on the state.]. Therefore, somewhere after the halfway point of the evaporation, the entanglement entropy of ρ_rad will exceed the thermodynamic entropy of the black hole. But when a pure state is partitioned into two, the two subsystems have the same entanglement entropy. Thus we have a clash with the usual laws of quantum mechanics already at the midpoint of evaporation. This was made precise by Page <cit.>, who modeled the black hole dynamics as applying random unitaries on its Hilbert space, emitting random quanta as time passes. With this model, he recovered the so-called "Page curve" for the entanglement entropy of the radiation, depicted in fig.<ref>. This is the curve that the entanglement entropy of the radiation should follow, assuming the black hole has random unitary dynamics.

The goal of any potential resolution of Hawking's paradox should be to reproduce the curve of fig.<ref> for the entanglement entropy of the radiation.
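Page's statistical model is easy to reproduce numerically. In the following sketch the black hole plus its radiation is replaced by N qubits in a Haar-random pure state (a crude stand-in with no actual dynamics; all numbers are arbitrary), and the average entropy of the first m "radiated" qubits is compared with the naive, ever-growing answer:

import numpy as np

rng = np.random.default_rng(0)
N = 10                                   # total number of qubits (arbitrary)

def subsystem_entropy(psi, m):
    """Von Neumann entropy of the first m qubits of the pure state psi."""
    s = np.linalg.svd(psi.reshape(2**m, 2**(N - m)), compute_uv=False)
    p = s**2
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

for m in range(N + 1):
    samples = []
    for _ in range(20):                  # average over a few Haar-random states
        psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
        psi /= np.linalg.norm(psi)
        samples.append(subsystem_entropy(psi, m))
    print(f"m = {m:2d}   <S_m> = {np.mean(samples):6.3f}   naive m*ln2 = {m*np.log(2):6.3f}")

The average entropy rises, turns over near the halfway point, and returns to zero, in contrast with the monotonic growth obtained above.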
Reproducing this curve seems an impossible task without abandoning at least one of the following seemingly well-tested concepts:
* locality of physics,
* the equivalence principle,
* unitarity.
For instance, proposals like fuzzballs <cit.> and firewalls <cit.> do away with the equivalence principle, while preserving locality and unitarity. The Island proposal that we will present shortly finds an interesting loophole that allows it to recover unitary evaporation while preserving all three of the aforementioned principles. It does so by modifying the procedure by which one computes S_rad, the entanglement entropy of the Hawking radiation. The implication is that in a theory with gravity, (as yet unexplained) gravitational effects render incorrect the naive computation of the radiation entropy outlined above. While the Island prescription tells us the "correct" way to compute this entropy, it does not explain the full story of how the radiation is purified. It is nevertheless striking that a semiclassical calculation manages to reproduce the Page curve, which one expected to be computable only in full quantum gravity.

§.§ The Island prescription

The Island prescription for the entanglement entropy of the radiation is best stated in the context of AdS/CFT, where it was first derived. The setup is that of an asymptotically AdS spacetime coupled to a flat "bath" at its boundary. The coupling at this boundary is such that it is transparent to any outgoing excitations; in particular, Hawking radiation reaching the boundary passes through and is collected at future lightlike infinity. The reason for including this auxiliary system is that otherwise the black hole would reach equilibrium with its own radiation, reflected at the AdS boundary[Strictly speaking, this is only true when the dimension of the bulk is ≤ 3, see (<ref>). In higher dimensions, one could in principle consider "small" black holes which do evaporate even in AdS. This unstable case is however much less understood from the holographic perspective, and furthermore one cannot easily separate the radiation from the black hole Hilbert space. Thus all applications of the Island formula have so far been in setups that include a bath system to collect the radiation.]. We skip over the details of how to actually engineer such a setup (see[In these works, one starts with an eternal black hole geometry and couples it to the vacuum bath at a finite time, letting the black hole evaporate. Alternatively, one could start with the coupled system, and prepare a shell collapsing to a black hole.]<cit.>), and show in fig.<ref> the portion of the Penrose diagram that will be of interest to us.

If we consider the holographic dual of an AdS_D black hole spacetime, we obtain a (D-1)-dimensional CFT in a thermal state (which we will call "DBH" for "Dual Black Hole") coupled to a D-dimensional bath system (which we will call "bath"). We assume that the initial state is a tensor product of the two, and is pure. For convenience, we take the bath to also be a CFT, in the vacuum state. We then expect DBH to radiate into the bath, until it cools down to zero temperature. Our goal is to compute the entanglement entropy of the radiation we collect in the bath, which on the gravity side is precisely the entanglement entropy of the Hawking radiation. To do so, we will apply the prescription (<ref>), the region A being the full DBH system.
This computes the entanglement entropy of the DBH system, but since the state of the full system is pure, it is also equal to the entanglement entropy of the radiation. With this reasoning, we obtain the "Island formula" <cit.>: S_rad = Min{ ext_I( Area(∂I)/4G + S_matter( rad ∪ I) ) } , see fig.<ref> for a clarification of the different quantities entering (<ref>), and for an explanation of the term "Island" in the name. Let us stress that this is simply a rewriting of the QES formula (<ref>) from the perspective of the complementary system: the two prescriptions are one and the same, simply viewed from two different perspectives.

Armed with (<ref>) and the setup in fig.<ref>, we can perform the explicit computation of the entanglement entropy in simple AdS_2 toy models based on Jackiw-Teitelboim gravity <cit.>. The main reason for considering such simple 2D models is that they allow the computation of S_matter, which is otherwise out of reach, as it boils down to a computation of entanglement entropy in QFT. We will not reproduce the full computations here, but let us indicate qualitatively how they reproduce the Page curve. We will make the argument both by considering (<ref>), and by using the Island formula (<ref>), in order to show that they are indeed two facets of the same coin.

Consider first early times, just after the black hole formation and before it has had time to evaporate much. The only candidate quantum extremal (H)RT surface is a vanishing surface ∂X sitting at r=0 (we will call this the "trivial" surface). Then, the region X spans the full gravitational Cauchy slice, which contains a few outgoing Hawking quanta on their way to the bath and, most importantly, all the ingoing Hawking quanta on their way to the singularity. In this way, S_bulk(X) roughly grows by ln(2) for each emitted Hawking quantum. By invoking the purity of the state on the full Cauchy slice, including the bath part, we recover Hawking's result for the radiation entropy in the early times of evaporation.

Applying the Island formula, the arguments are similar. At early times, the extremizer of (<ref>) is the empty Island, I=∅. Thus we find that the "true" entanglement entropy of the radiation is simply given by Hawking's usual result, in which we compute S_matter(rad) in the usual way by the von Neumann formula (<ref>). This gives of course a steadily growing entropy, because of the state's entanglement with the infalling modes.

At later times, when a non-negligible amount of radiation has been emitted, another QES appears, characterized by ∂X sitting close to the horizon[According to the toy model, this surface could be either slightly inside or slightly outside the horizon <cit.>, but this does not alter the conclusion.], or alternatively by an island I spanning the black hole interior, see fig.<ref>. Let us explain this new extremal surface (called the "Island" surface) from the perspective of (<ref>). In the case depicted in fig.<ref>, Area(∂X) gives a contribution akin to the Bekenstein-Hawking entropy ≈ A/4G, where A is the horizon area. This is the trade-off for a substantial reduction in S_bulk(X); indeed, in this way we excise most of the ingoing Hawking quanta from X, removing most of their contribution to S_bulk(X) in (<ref>). The main contribution for the trivial surface is thus the entanglement entropy of the ingoing Hawking pairs, while the main contribution for the Island surface is the area of the horizon.
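The competition between these two contributions can be caricatured with made-up numbers (a toy sketch; the linear time dependence below is purely illustrative and is not the JT gravity result):

import numpy as np

S_BH0  = 50.0        # initial Bekenstein-Hawking entropy A/4G (arbitrary units)
t_evap = 100.0       # total evaporation time (arbitrary)
ts     = np.linspace(0.0, t_evap, 11)

S_trivial = 2 * S_BH0 * ts / t_evap        # "no island": entropy of the Hawking pairs, growing
S_island  = S_BH0 * (1 - ts / t_evap)      # "island": tracks the shrinking horizon area A(t)/4G
S_rad     = np.minimum(S_trivial, S_island)

for t, s in zip(ts, S_rad):
    print(f"t = {t:5.1f}   S_rad = {s:5.1f}")
print(f"crossover (toy Page time): t = {t_evap/3:.1f}")

In the actual computation neither contribution is linear in time, but the qualitative competition — a growing "Hawking" branch against a shrinking "horizon-area" branch — is the same.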
At the page time, these will be equal, and the minimal surface will transition from the trivial to the Island one.The computation using the formula (<ref>) proceeds likewise. This time, we introduce the Island region I as a means to reduce the contribution S_matter(Rad∪I). Indeed, by including ingoing Hawking quanta together with the radiation, we include both pairs of (<ref>), purifying the state and removing their contribution to the entanglement. Then, the main contribution comes from Area(I), which is roughly the Bekenstein Hawking entropy when I is extremal.With these facts established, we can easily see that we recover the page curve for S_rad. Before the Page time, the entanglement entropy of the radiation follows Hawking's result, and steadily increases as we collect more. However, at the Page time, the Island surface takes over as the dominant contribution to S_rad, and the entanglement entropy of the radiation goes down with the horizon area as the black hole continues to evaporate, recovering exactly the Page curve fig.<ref>.This is a satisfying result, especially because the QES prescription (<ref>) (equivalently the Island formula (<ref>)) can be "proved" by an Euclidean path integral construction akin to the one showcased in sec.<ref>. In one sentence, the QES prescription arises when one considers wormhole geometries connecting different replicas<cit.>. Such saddle points were not considered in the derivation of the RT surface, and should be allowable in the replicated geometry since they satisfy the boundary condition, despite the topology change.One final interesting remark concerns the entanglement wedge of the radiation in this evaporation model. Before the Page time, this entanglement wedge comprises only the radiation system, as we would expect. However, after the Page time, the entanglement wedge of the radiation contains also a portion of the black hole interior, namely the entanglement wedge of the Island region (see fig. <ref>). This seems to imply at first glance that some non-local effects are taking place; but at no point in the derivation did we abandon locality. We must conclude that the degrees of freedom of the black hole interior are somehow encoded in the Hawking radiation, which is not really surprising since we expect the information inside the black hole to be accessible from the radiated quanta. However, this is exactly what the Island prescription does not tell us, how precisely the information escapes from the black hole. In this sense, the black hole information paradox is still somehow an open question, as we have shown the information comes out, but not how.Let us finish by mentioning that one does not necessarily need to consider an evaporating black hole geometry to study Hawking's information paradox. A version of this paradox can be obtained in the maximally extended eternal black hole geometry roughly as follows<cit.>. The full geometry contains two boundaries, which house the dual system composed of two CFTs in a thermofield double state<cit.>. This is a purification of the thermal state of each individual CFT.To set up the information paradox, we couple the geometry to two CFT baths (see fig.<ref>), which are in thermal equilibrium with the black holes. Because of this equilibrium, the eternal black hole geometry is not perturbed. Crucially, however, there is radiation exchange between the bath and black hole. 
We continuously collect Hawking radiation in the bath, and the energy lost in this way is compensated by ingoing radiation due to the thermal state of the bath. What we do next is consider the Entanglement entropy of the two copies of the CFT baths as a function of time (two relevant Cauchy slices are sketched in fig.<ref>). According to Hawking's computation, this entropy will increase indefinitely as we collect more and more radiation from the black hole. This is a paradox, because the thermodynamic entropy of both black holes is 2A_BH/4G, where A_BH is the area of one horizon. Therefore we should expect by unitarity that the entanglement entropy of the outgoing radiation does not exceed this value.Again, the Island prescription saves the day. At late times, an Island surface spanning the interior of the double-sided black hole will purify the outgoing Hawking's radiation, while contributing a factor of 2A_BH/4G to S_ rad from the Area(I)/4G term of (<ref>). This successfully caps the entropy of the radiation at just the right value to preserve unitarity. §.§ Doubly holographic modelsOne major inconvenience of (<ref>) is that it requires the computation of entanglement entropies of quantum fields on a curved background. The cases where this can be done are few and far between, which is the reason for using the simple JT gravity model. As explained previously, in two and three dimensions gravity is quite different, and so the extrapolation of the computations to higher dimensions is far from obvious, but see <cit.> for some interesting work in this direction.As it turns out, a case where the computation of the entanglement entropy is easier to handle is the case of a holographic system. The idea introduced in <cit.> is to consider the DBH+bath system as a holographic BCFT model. Then, the bulk dual of such a system is an asymptotically AdS spacetime, capped off by an End-Of-the-World (EOW) brane, which is dual to the boundary of the CFT. The clever trick lies in a third representation of the system, in which, roughly speaking, we "unfold" holographically only the CFT boundary. In this picture, we have the CFT bath joined through the interface to a gravitating system of the same dimension (the "brane-bath" picture). Choosing the boundary state accordingly, we could generate a black hole on the brane, and thus setup an evaporation experiment. The three dual systems are depicted in fig.<ref>.System 1 and 2 already appeared in the previous section. From the holographic dictionary for BCFT's, systems 1 and 3 in fig.<ref> are related by a duality. The intermediate system 2 is not as obvious, as it is not clear we can consider the gravity dual of the theory living on the boundary while leaving the connected bath CFT untouched. In <cit.>, system 2 is obtained by starting with system 3, and integrating out the bulk degrees of freedom, which produce an effective action on the EOW brane. By tuning the brane's tension and Lagrangian, one can setup a Randall-Sundrum <cit.> type scenario, where the brane's worldvolume effectively acquires a dynamical graviton.If one accepts the legitimacy of this "doubly holographic" system, this is the perfect playground to study black hole evaporation, as the computation of the entanglement entropy of the radiation is completely geometrized through the (H)RT prescription in system 3. We skip over the details, but one needs to amend the RT prescription as we are in presence of a gravitating EOW brane in the geometry. 
It turns out <cit.> that the RT surfaces are allowed to end on the EOW brane, and that this produces an additional contribution equal to the area of the intersection of the RT surface with the brane, divided by 4G_brane, where G_brane is Newton's constant of the brane's effective gravity.

We can now see from these models the simple origin of the Island prescription, qualitatively as follows. We want to add a black hole in the gravity part of system 2 to recreate the eternal black hole setup of the previous section. We thus begin with two copies of system 1, in the thermofield double state. The bulk black hole of system 3 intersects the brane, which generates the black hole that lives on it (see fig.<ref>). We then use the RT prescription in system 3 to compute the entanglement entropy of both bath states.

There are two extremal surfaces (see fig.<ref>). The first one (called the Hartman-Maldacena surface <cit.>) passes through the wormhole in the black hole interior to connect the two baths directly. This contribution will yield the initial growth of the entanglement entropy of the bath. Indeed, as we evolve forward in time the wormhole throat is stretched out, increasing the RT surface's area accordingly. The other extremal surface has a constant contribution as we evolve forward in time. Depicted in fig.<ref>, it curves and terminates on the EOW brane. The full RT surface is composed of two disconnected copies of the one depicted in fig.<ref>. Together they contribute approximately 2A_BH/4G_brane, coming mainly from the anchor points, which lie close to the brane horizon. Thus, in the doubly unfolded system 3, the appearance of the Island region I on the brane is not so surprising anymore. In the bulk, this happens when the entanglement wedge of the two bath CFTs includes a portion of the EOW brane. The results obtained are the same as if we had used the Island prescription in system 2. The advantage of the doubly holographic model is that we can perform the computation in arbitrary dimensions, as the computation is purely geometrical.

Let us conclude this section by mentioning that while the Island prescription does successfully recover the Page curve, it is still being debated whether we are indeed solving the information paradox in doing so. One criticism, already mentioned, is that the explicit computations are carried out in low dimensions, where gravity is "non-standard" <cit.> and behaves very differently than in D≥4. This problem is formally addressed with the use of doubly holographic models, but as was noticed in <cit.>, the coupling of gravity to the CFT bath in system 2 gives a mass to the graviton, due to the non-conservation of the stress-energy tensor in the gravitating region induced by the energy seeping out into the bath. It was furthermore argued in these works that attempting to make the graviton massless would make the Island disappear. Another possible issue in the doubly holographic models is simply the intermediate point of view of system 2. This is an effective description that involves integrating out degrees of freedom in the bulk, so it is not evident how robust it is.

CHAPTER: PHASES OF HOLOGRAPHIC INTERFACES

This chapter is based on 2101.12529.

This chapter is dedicated to the study of the minimal holographic ICFT models that were described in sec.<ref>, for the restricted set of equilibrium states at finite temperature. The analysis that we will perform is very similar in spirit to the Hawking-Page analysis of AdS at finite temperature described in sec.(<ref>).
The presence of the interface that might seem innocuous at first will on the contrary enrich the system quite considerably, and produce a plethora of different equilibrium solutions that we will describe in detail. Our analysis is mainly focused on the gravity side in the limit where it is classical, and as such it should be seen as describing phenomena at strong coupling in the dual field theory. The initial motivation to consider such a model was for potential application to the Black Hole information paradox. As we have seen in sec.<ref> the setup in which the Island construction is applied involves connecting two spacetimes, one containing the black hole and the other acting as a bath to collect the radiation. Even more directly, the doubly holographic constructions necessitate the introduction of EOW branes, which are a special case of the domain walls we consider here. Initially, the hope was to produce natural setups in which one could let a black hole on one side of the interface evaporate, and somehow collect the radiation on the other side. Although we found solutions on which this could be realized, the separation provided by the membrane is not sufficient to be able to repeat the arguments of sec.<ref>. Later, <cit.> managed to exploit the model in order to setup an Island computation in the doubly holographic picture, in which the two CFTs constitute a "double bath" for the radiation.Of course, the study of this model at finite temperature is also interesting on its own, if only because it is yet another probe into the physics of ICFT. In that context, it can also provide an interesting model for some condensed matter systems. Indeed, two quantum wires joined at an impurity would be modeled by an ICFT at criticality <cit.>. As an added bonus, entanglement entropy computations which have become an important tool are extremely facilitated when one has access to the holographic dual<cit.>. The study of those is delegated to chapter <ref>.The main results of this chapter are the fully analytic description of dual geometries of the equilibrium ICFT state. The phase space and the nature of phase transitions is analyzed in detail, and we finish by the (numerical) computation of the phase diagram for selected values of the parameters. We comment on the interpretation of the different phases in the holographic context. To lighten notations, throughout this chapter we use units in which 8π G =1.§ ICFT MODEL AND TOPOLOGY OF SLICESLet us define precisely the model we will be studying starting from the field theory description. We consider CFT_1 and CFT_2, two CFTs coexisting at thermal equilibrium on a circle. The manifold on which they live can thus be seen as a "striped torus", see fig.<ref>.We will be working in the minimal holography bottom-up model described in sec.<ref>. As such, the available parameters are the respective central charges c_1 and c_2, as well as the states on each side, determined by ⟨ T^i_μν⟩ in the large-N limit. With the further assumption that we are in an equilibrium state, the only non-vanishing components are ⟨ T_++^i⟩ = ⟨ T_–^i⟩, namely there is no net flux of energy. We have additional parameters related to the Manifold geometry; the temperature T and the size of each CFT, L_1 and L_2. Only dimensionless parameters will be conformally invariant, so we can consider τ_1 = TL_1 and τ_2 = TL_2 as the two physically meaningful quantities. As for the interface, it is minimally characterized by one numberas explained in sec.<ref>. 
Without loss of generality, we henceforth assume c_1≤ c_2. We will refer to either theory and their associated bulk as being on "side 1" or "side 2".The gravity dual will then be composed of two asymptotically Anti-de-Sitter spacetimes, as well as a gravitating membrane of tension(also referred to as "string" or "wall" in the text), anchored at the interface location on the boundary. Using the minimal holographic dictionary, more specifically the Fefferman-Graham prescription, we can determine the metric of the spacetime as a function of ⟨ T_++^i⟩ (we will opt for the reverse process in this case to work with parameters adapted to the gravity theory). In a suitable coordinate system, the corresponding Euclidean metric for each side can be written in the form (<ref>) : ds^2= (r^2-Mℓ^2)dτ^2+ℓ^2 dr^2/r^2-Mℓ^2+r^2dx^2 .According to the sign of M, this metric either describes thermal AdS (M<0) or the BTZ black hole (M>0) with an horizon at r=√(M)ℓ. To respect the boundary geometry, we have to require τ∼τ+1/T and x∼ x+L. Additionally to that, there are some constraints on the parameter M in order to avoid conical singularities as explained in sec.<ref>. These conditions impose :M=(2π T)^2,M>0 ,M=-(2π/L)^2,M<0 .Finally, the AdS radius ℓ_i for each side will be of course determined by the Brown-Henneaux formula (<ref>), which in our units system reads c =12 πℓ. Borrowing nomenclature from Coleman and De Luccia <cit.> in their study of vacuum decay, we will also refer to side 1 as the "true vacuum" and side 2 as the "false vacuum". This nomenclature is explained by considering the cosmological constant as arising from the vacuum expectation value of a minimally coupled scalar field, in which case lower values of ℓ_i correspond to a lower minimum for the scalar field potential.The parameter M is determined by the associated CFT state. Through the dictionary (see (<ref>)) we find the correspondence ⟨ T_–⟩ =⟨ T_++⟩ =1/4Mℓ, which results in an energy density of 1/2Mℓ. From (<ref>), when M>0 this energy depends on the temperature and is to be interpreted as a thermal energy density. When M<0, this energy is negative and scales as 1/L, it is a Casimir energy as explained in (<ref>). The minimal ICFT dual involves a membrane of tensionanchored at the CFT interface location on the boundary. One should specify its shape to complete the description of the bulk state. This is non-trivial in the general case, and one of the goals of this paper. We can however begin by specifying the different allowable topologies of each spacetime slice. Before doing so we will have to make a few additional restricting assumptions about the gravity dual model. While we know from the dictionary that there should be two membrane pieces attached on the boundary interfaces, there is no a priori requirement that they join smoothly in the bulk. Two membranes could in principle fuse at an angle instead of joining smoothly, as depicted in fig.<ref>. Although such wall junctions should enter in the full holographic picture of the model, we will restrict ourselves to consider only smooth walls. With that in mind, we can identify 5 types of "half-spaces", or "slices", which are delimited by the membrane, see fig.<ref>. The metric (<ref>) of slices labeled byE1, E2 has M<0, whileE2', H1, H2 have M>0. The different slices in (<ref>) are distinguished topologically according to which cycle (τ or x) is contractible in the bulk, if any. 
More simply put, they are distinguished by whether the center of AdS, or the black hole, is excised from the geometry. In this sense, E2 and E2' are of the same "topological type", and that is why we distinguish them only by a prime. On the contrary, E1 and E2 are of different topological type, because the center of AdS is not excised in E1, while it is in E2. Note that the regularity conditions (<ref>) are applicable only when the relevant cycle is contractible within the slice that is kept. In other words, for E2 and E2' the choice of the parameter M is unconstrained, even if it would produce a conical singularity in the full spacetime, since such a singularity is excised in the final geometry. On the other hand, for E1, H1 and H2, (<ref>) must be satisfied.

To construct a generic solution dual to an equilibrium ICFT state, one should pick two slices from fig.<ref>, and glue them together along the membrane. This generates a bulk that has the required asymptotic metric as per the holographic dictionary. This is pictorially depicted in fig.<ref>. Of course, the gluing of the slices must satisfy some conditions, which are a result of Einstein's equations coupled to the membrane of tension λ. We will see that, as a result, not all slice pairings will be allowed. These constraints, coming from the gravity theory, tell us something about the strongly coupled physics of the field theory dual. Determining exactly what they teach us is far from obvious, and will be explored in the main text.

§ THE GLUING EQUATION

In this section, we re-derive the "matching conditions", i.e. the equations that determine how the gluing of the two spacetimes should be done. Of course, this is not the first time these equations appear; they are attributed to Israel <cit.> (hence the name "Israel matching conditions"). However, we provide here a derivation directly from the action principle. Usually, the equations are derived directly from the equations of motion, by integrating over a thin shell around the gravitating wall. Although the derivation from the action principle is also probably not original, we think it might be useful, especially in case one would like to extend the equations to more general situations, such as interface junctions.

The first step is determining the correct action to vary. The full system is comprised of two manifolds with gravity, and one membrane of constant tension λ. For the membrane, the action is simply proportional to the area of the surface, the proportionality constant being the tension. For the two bulks, the action will simply be the Einstein-Hilbert one. Since the membrane constitutes a boundary of spacetime, we also need to include the Gibbons-Hawking boundary contributions to make the variational problem well-posed. Considering for simplicity that there is no boundary other than the membrane : S_gr = 1/2 ∫_𝕊_1 d^Dx √(-g_1)(R_1+L^1_mat) + 1/2 ∫_𝕊_2 d^Dx √(-g_2)(R_2+L^2_mat) - λ∫_𝕄 d^D-1s √(-h_m) + ∫_∂𝕊_1 d^D-1s √(-h_1) K_1 + ∫_∂𝕊_2 d^D-1s √(-h_2) K_2 .

Some clarifications are in order. In (<ref>) we consider essentially two separate manifolds 𝕊_i, which are formally identified at their boundaries ∂𝕊_i. This contains in particular the less general point of view in which the 𝕊_i are simply two sections of a bigger manifold 𝕊. The manifold 𝕄 can thus equivalently be seen as equal to either ∂𝕊_i. The L^i_mat are the Lagrangians for the matter content on each side. The h_i are the induced metrics on the boundaries, and the K_i are their extrinsic curvatures.
Note that with the sign conventions we have used in (<ref>), the normal vectors with which we compute K_i should be pointed outward from the bulk 𝕊_i. The D-1 coordinates "s" parametrize the boundaries. Because the matter content on each side can be completely different, there is no reason to believe that the solutions to the Einstein's equation g_1 and g_2 should be smoothly related across the separating membrane. How they are connected will be given by the variation of (<ref>). However, in writing this action, we implicitly made an assumption for it to be well defined. Indeed, we must assume h_1=h_2, since the boundaries are identified, 𝕊_1=𝕊_2=𝕄, there is only one metric for this manifold, and as such we must have h_1=h_2=h_m for consistency.This first consistency condition is called the "metric matching condition". Parametrise the surface by coordinates s^a as x_i^μ(s^a) on each side i. The identification of 𝕊_i is done by identifying x_1^μ(s^a)≡ x_2^μ(s^a). Then, the metric matching condition reads :h^1_ab(s)=g^1_μν x_1^μ/ s^a x_1^ν/ s^b=g^2_μν x_2^μ/ s^a x_2^ν/ s^b=h^2_ab(s) .This condition is a constraint on the equations of motion for the membrane, it is non-dynamical.With that out of the way, we proceed to vary S_gr w.r.t. to the metrics. Since the problem is completely symmetric on both sides, we will consider only the variation of g_1. We ignore the bulk matter fields as they simply give a stress-tensor contribution to the Einstein equations. Thus consider : S_gr = 1/2∫_𝕊d^Dx √(-g) R+∫_𝕊d^D-1s √(-h)(K-) .The variation of the bulk term is well-known, we have :δ√(-g)=1/2√(-g) g^μνδ g_μν ,δ R = -R^μνδ g_μν+^μ(^νδ g_μν-g^ν_μδ g_ν) . To obtain (<ref>), one identity is useful : δΓ^ρ_μν =g^ρ/2(_μδ g_ν+_νδ g_μ-_δ g_μν) .Thus the variation of the bulk term yields : 1/2∫_𝕊d^Dx √(-g)(R/2g^μν-R^μν)δ g_μν+1/2∫_𝕊d^D-1s√(-h) n^μ(^νδ g_μν-g^ν_μδ g_ν) .The first term gives us the Einstein equations in the bulk, while we also get a contribution on the boundary. For the variation of the boundary term in (<ref>), we will use the formula (<ref>) for the Extrinsic curvature. The variation of the boundary metric h^μ_ν and the normal covector n_μ can be obtained from (<ref>): δ n_μ = 1/2n_μ n^ρ n^δ g_ρ , δ h^μ_ν = δ(-n_ν n_ g^μ)=n_ν n^ρ h^μδ g_ρ . See (<ref>) for the formula of h_μν. Notice also the identity h^_ν_ n_ = _ n_ν. Then : δ√(-h) = 1/2√(-h)h^ρδ g_ρ , δ(K) =δ (K_μνg^μν) =-K^μνδ g_μν+δ K_μν g^μν .The main difficulty lies in the computation of δ K_μν. We break it down into three pieces using (<ref>), after some simplifications : δ K_μν =δ h_μ^_ n_ν_A + h^_μ h_ν^_δ n_+h^_μδ h^_ν_ n__B- h^_μδΓ^ρ_νn_ρ_C .We find : A = K_ν^ n_μ n^ρδ g_ρ , B = 1/2 K_μνn^ρ n^δ g_ρ+n_ν K_μ^ρ n^δ g_ρ . Combining this with the identity n_ν K^ν_ρ =0 :g^μνδ K_μν =g^μν(1/2 K_μνn^ρ n^δ g_ρ+(n_ν K_μ^ρ+n_μ K_ν^ρ) n^δ g_ρ-h^_ν h^_μ n_ρδΓ^ρ_) ,=1/2 K h^ρδ g_ρ-h^n_ρδΓ^ρ_ . We can now combine all the expressions we have collected to recover : δ K= [1/2K h^ρ-K^ρ]δ g_ρ-h^n_ρδΓ^ρ_ . To be able to extract equations of motions, the combined sum of the variation must take the form A^ρδ g_ρ + D^μ c_μ, where D^μ denotes the covariant derivative that can be partially integrated w.r.t. the measure √(-h)d^D-1s. Indeed, after a bit of guesswork, one can re-arrange (<ref>) to find : δ K = -1/2K^ρδ g_ρ -1/2n^μ(^νδ g_μν-g^ν_μδ g_ν)-1/2h^_(h^μn^νδ g_ν) , ⇒ D_μ c^μ≡ -1/2h^_(h^μn^νδ g_ν) . From which we infer c^μ = 1/2h^μn^νδ g_ν. Conveniently, a big portion of (<ref>) cancels the boundary term of (<ref>). 
That is of course to be expected; it is precisely what we require of the Gibbons-Hawking term. Combining all the formulas together, we finally get the full boundary variation as : ∫_∂𝕊 d^D-1s √(-h)[ 1/2( K h^ρσ - K^ρσ - λ h^ρσ ) δg_ρσ + D_μ c^μ ] .

Let us make a few comments before concluding. In the case where the surface ∂𝕊 is the asymptotic boundary of spacetime, one usually imposes asymptotic "Dirichlet" boundary conditions for the metric. In other words, the surface metric is fixed, and thus the tangential variation vanishes, h^ρσ δg_ρσ = 0. When this holds, the expression (<ref>) vanishes, so that the boundary term is cancelled exactly by the variation of the Gibbons-Hawking term, as it should be. In our case, the boundary ∂𝕊 will itself have no boundary, hence the divergence D_μ c^μ can be integrated away. In models where this condition doesn't hold (for example if we have two boundaries joining at a kink, or if we have a "dangling" membrane), the partial integration of this term will yield further conditions. We leave these cases for later study. Thus the additional equation of motion that we obtain once we consider both sides is : K^1_ρσ + K^2_ρσ - (K^1+K^2) h_ρσ = - λ h_ρσ .

Equation (<ref>), together with (<ref>), are the so-called Israel matching conditions. Note that more generally λ h_ρσ can be replaced with T^mem_ρσ, where T^mem_ρσ is a stress-energy tensor for the surface; it should be symmetric for consistency, although it need not be conserved, as one can check that the covariant derivative of (<ref>) need not vanish. In general, we will contract (<ref>) with the tangent vectors t^μ_a, to obtain an equation expressed directly with surface tensors (instead of ambient-space tensors). It may be useful to trace-reverse (<ref>): K^1_ab + K^2_ab = λ/(D-2) h_ab .

§ SOLVING THE WALL EQUATIONS

In this section, we find the general solution of the domain wall equations in terms of the mass parameters M_j and the AdS radii ℓ_j, as well as the tension λ of the wall. Part of this analysis is related to that of ref.<cit.> by double Wick rotation, namely by exchanging the roles of t and x. While in Euclidean space the distinction between time and space is formal, when considering the Lorentzian interpretation of the result the conclusions are completely different.

§.§ The wall equations

We would like to solve the Israel matching conditions in our ICFT model. The full action is given by : S_gr = 1/2 ∫_𝕊_1 d^3x √(-g_1)(R_1+2/ℓ_1^2) + 1/2 ∫_𝕊_2 d^3x √(-g_2)(R_2+2/ℓ_2^2) - λ∫_𝕄 d^2s √(-h) + ∫_∂𝕊_1 d^2s √(-h_1) K_1 + ∫_∂𝕊_2 d^2s √(-h_2) K_2 + 1/ℓ_1 ∫_𝔹_1 √(-h_1) + 1/ℓ_2 ∫_𝔹_2 √(-h_2) , where we have included the counterterms that will be needed to get well-defined on-shell actions. They contribute only to the asymptotic boundaries, which we denoted by 𝔹_i, whereas ∂𝕊_i also include the boundary constituted by the membrane. We denoted by a blanket notation h_i the induced metrics on the various boundaries.

The bulk equations are already solved by picking the bulk metrics to be of the form (<ref>). In general, the coordinate charts (τ_j,r_j,x_j) may be discontinuous at the interface, so we label them by their side. However, the assumption of staticity means the time coordinate will be globally defined, so we write τ_1=τ_2=τ. The membrane will be parametrized by two worldsheet parameters, τ_m and σ. From staticity, a generic parametrization is written τ = τ_m, x_j = x_j(σ) and r_j = r_j(σ). By abuse of notation, we will henceforth denote the worldsheet parameter τ_m simply by τ.
The two (Euclidean) metrics thus can be written as : ds^2_j = (r_j^2-M_jℓ_j^2)dτ^2+ℓ_j^2dr_j^2/r_j^2-M_jℓ_j^2+r_j^2 dx_j^2(r_j,x_j)∈Ω_j .The range of the coordinates Ω_j will depend on the specific geometry, and is delimited as follows :*by the embeddings of the static wall in the two coordinate systems, {x_j(σ), r_j(σ) };*by the horizon whenever the slice contains one, i.e. in cases H1and H2;*by the cutoff surface r_j≈1/ϵ→∞.The metric matching conditions then give us two (non-trivial) equations :r_1^2 -M_1 ℓ_1^2 = r_2^2 -M_2 ℓ_2^2 ≡ f(σ),f^-1ℓ_1^2r_1^'2 +r_1^2x_1^'2 = f^-1ℓ_2^2r_2^'2 +r_2^2x_2^'2 ≡ g(σ) ,where we defined f() and g() as the coefficients of the surface metric,ds_𝕄^2 = f()dτ^2+g()d^2 . In writing the equations in this manner, we still have a gauge freedom corresponding to reparametrizations of the membrane coordinates. We will use it to fix f()= henceforth. We name this the "blue-shift" or "red-shift" parametrization, since √()=√(g_tt^i) gives the blue-shift factor.To write down the other surface equation (<ref>), we need first to compute the extrinsic curvature. The normal (co)vector is given by : n^j_μ = 1/N_j(0,-x_j^'(),r^'_j()) , N_j= -ℓ_j^2 r^' 2_j/+r_j^2 x^' 2_j , where when we write r_j or x_j they are to be read as functions of .Notice that in our convention (<ref>) should be seen as being pointing outwards from the spacetime. Thus, in the pictures (<ref>), the normal vector should point away from the green region. In particular, since r^'>0, the part of the spacetime that is kept is the one lying on the smaller x side of the membrane. The computation of the extrinsic curvature tensor is relegated to the appendix (<ref>). Once done, one notices that (<ref>) furnishes only one independent equation : r_1^2 x_1^'/ℓ_1+r_2^2 x_2^'/ℓ_2=-√( g()) . In total, having fixed the parametrization, we are left with 3 independent equations to solve, for three unknown functions x_j^' and g(). Furthermore, the equations only involve first derivatives of x_j, so the integration constants are irrelevant choices of the origin of the x_j axes. For a given ℓ_j and , the wall embedding functions x_j(r_j) are thus uniquely determined by the parameters M_j. However, different choices of (M_1,M_2) may correspond to the same boundary data (L_1,L_2,T). Two such solutions are then competing phases of the system. §.§ Near boundary solutionIt is instructive to look at the limiting behavior of the wall near the asymptotic boundary. Indeed, as r_j→∞, the parameters M_j can be neglected and the metrics approach AdS_3 in Poincaré coordinates. In this limit, the solution to the gluing equation simplifies and reads <cit.> : r_1≈ r_2 (≈),x_j ≈ℓ_j(tan(ψ_j))/r_j , where ψ_j is the angle in the (x_j,ℓ_j/r_j) between the boundary and the interface, see figure <ref>.Naturally, the angles ψ_j are completely fixed by the Israel conditions : ℓ_1/cosψ_1=ℓ_2/cosψ_2≡ℓ_mtanψ_1+tanψ_2=ℓ_m , where one can identify ℓ_m with the radius of the membrane worldsheet, which will be AdS_2 in this limit. We have -π/2<ψ_j<π/2. In the assumption c_1≤ c_2 that we mentioned earlier, we have ℓ_1≤ℓ_2 and the equations (<ref>) impose |tan(ψ_1)|≥|tan(ψ_2)|, as well as ψ_1>0 provided the tensionis positive, which is the physical assumption. The sign of ψ_2 depends on the precise value of . Combining equations (<ref>) yields an equation for ℓ_m : √(1/ℓ_1^2-1/ℓ_m^2)+ Sign(ψ_2) √(1/ℓ_2^2-1/ℓ_m^2)= .For each value of the worldsheet radius ℓ_w, there corresponds two tensions , according to Sign(ψ_2). 
We obtain some bounds for the tension:
λ_min<λ<λ_0 ⟺ ψ_2<0 , λ_0<λ<λ_max ⟺ ψ_2>0 ,
where the three critical tensions read :
λ_min= 1/ℓ_1 - 1/ℓ_2 , λ_max = 1/ℓ_1+1/ℓ_2 , λ_0= √(λ_maxλ_min) .
Let us briefly pause to discuss the significance of these critical tensions.
§ CRITICAL TENSIONS
The meaning of the critical tensions λ_min and λ_max has been understood in the work of Coleman-De Luccia <cit.> and Randall-Sundrum <cit.>. Below λ_min the false vacuum is unstable to nucleation of true-vacuum bubbles, so the two phases cannot coexist in equilibrium.[Ref. <cit.> actually computes the critical tension for a domain wall separating Minkowski from AdS spacetime. Their result can be compared to λ_min in the limit ℓ_2→∞.] The holographic description of such nucleating bubbles raises fascinating questions in its own right, see e.g. refs.<cit.>. It has also been advocated that expanding true-vacuum bubbles could realize accelerating cosmologies in string theory <cit.>. Since our focus here is on equilibrium configurations, we will not discuss these interesting issues any further. The maximal tension λ_max is a stability bound of a different kind.[Both the λ = λ_max and the λ = λ_min walls can arise as flat BPS walls in supergravity theories coupled to scalars <cit.>. These two extreme types of flat wall, called type II and type III in <cit.>, differ by the fact that the superpotential avoids, respectively passes through, zero as the fields extrapolate between the AdS vacua <cit.>.] For λ > λ_max the two phases can coexist, but the large tension of the wall forces the latter to inflate <cit.>. The phenomenon is familiar for gravitating domain walls in asymptotically-flat spacetime <cit.>, i.e. in the limit ℓ_1, ℓ_2→∞. Again, such tensions are irrelevant in our search for equilibrium configurations. The meaning of λ_0 is less clear; its role will partially emerge later. For now, note that it is the turning point at which the worldsheet radius ℓ_m(λ) reaches its minimal value ℓ_2. From (<ref>), it is also the point where the wall becomes perpendicular to the boundary, on the false-vacuum side. The range λ_min< λ<λ_0 only exists for non-degenerate AdS vacua, that is when ℓ_1 is strictly smaller than ℓ_2. As explained in (<ref>), in this model the interface is described by only one parameter, so all its CFT data will be determined by it. These include the energy transmission-reflection coefficients (<ref>) as well as the interface entropy S_int, or "g-factor", which is related to the number of localized degrees of freedom. Both of these quantities can be computed exactly. The entropy was computed for instance in <cit.> and reads :
S_int = 2πℓ_1 ℓ_2 [λ_max tanh^-1(λ/λ_max)-λ_min tanh^-1(λ_min/λ)] .
It varies monotonically between -∞ and ∞ as λ varies inside its allowed range. Using holographic techniques, the energy transmission coefficients were computed in <cit.> with the result :
T_1→2=(λ_max+λ_min)/(λ_max+λ) , T_2→1=(λ_max-λ_min)/(λ_max+λ) .
Using the identities λ_max+λ_min=2/ℓ_1 and λ_max-λ_min=2/ℓ_2 one can check that these coefficients obey the "detailed-balance" condition c_1 T_1→2= c_2 T_2→1 (see (<ref>) for its microscopic origin). The larger of the two transmission coefficients reaches the unitarity bound when λ = λ_min, and both coefficients attain their minimum when λ = λ_max. Total reflection (from the false-vacuum to the true-vacuum side) is only possible if ℓ_1/ℓ_2 → 0, i.e. when the "true-vacuum" CFT_1 is almost entirely depleted of degrees of freedom relative to CFT_2, which confirms the universal results found in <cit.>.
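Since these expressions will be used repeatedly below, a small numerical sketch may help fix conventions (Python/NumPy; the AdS radii are illustrative values, not taken from the text). It evaluates the critical tensions and the transmission coefficients, and checks the detailed-balance relation and the unitarity bound quoted above.

import numpy as np

def critical_tensions(l1, l2):
    # lambda_min, lambda_max and lambda_0 = sqrt(lambda_max * lambda_min)
    lam_min = 1/l1 - 1/l2
    lam_max = 1/l1 + 1/l2
    return lam_min, lam_max, np.sqrt(lam_max * lam_min)

def transmission(l1, l2, lam):
    # energy transmission coefficients T_{1->2}, T_{2->1} for a wall of tension lam
    lam_min, lam_max, _ = critical_tensions(l1, l2)
    return (lam_max + lam_min)/(lam_max + lam), (lam_max - lam_min)/(lam_max + lam)

l1, l2 = 1.0, 2.0                          # illustrative radii, c_2/c_1 = 2
lam_min, lam_max, lam0 = critical_tensions(l1, l2)
for lam in np.linspace(1.01*lam_min, 0.99*lam_max, 5):
    T12, T21 = transmission(l1, l2, lam)
    assert np.isclose(l1*T12, l2*T21)      # detailed balance, c_j proportional to ell_j
    assert T12 <= 1 + 1e-12 and T21 <= 1 + 1e-12   # unitarity, saturated as lam -> lam_min
print(lam_min, lam0, lam_max)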
§.§ Turning point and horizonWe will now derivethe general solution of the equations (<ref>,<ref>,<ref>), and then relate the geometric parameters M_j to the data (T, L_j) of the boundary torus shown in fig. <ref>. We use the parametrization of the membrane in terms of theblueshift factorof the worldsheet metric (<ref>). Let σ_+ correspond to the minimal value of the blueshift attained on the membrane, which is either zero or positive. If σ_+ =0 the string enters the horizon. While it may be interesting to understand what happens to the string after entering the black hole (and we discuss this in Chapter 3), in the Euclidean picture in which we focus our analysis, spacetime is capped at the horizon. Hence, for the purposes of this Chapter, we will not look behind the horizon.On the other hand, if σ_+ >0 then, as we will confirm shortly,this istheturning pointof the membrane where bothx_1^' and x_2^' diverge, while remaining integrable.Astatic stringhas (at most)oneturning point, and is symmetric under reflection in the axis that passes through the centersof theboundaryarcs[In ref.<cit.> this corresponds to the time-reflection symmetry of theinstanton solutions.], asillustratedinfigure<ref>.It follows that the blue-shift parametrization is one-to-two. Henceforth we focus only on one half string, corresponding to the boundary solution (<ref>). The other half string is obtained by x_j → -x_j[Seemingly, the sign-inverted solution solves (<ref>) for negative tension. However, this is just the equations way of telling us that the conventions we chose (outer normal) are inverted. So in this case, the part of spacetime that is kept, is also sign-inverted. Thus by combining the two ℤ_2 solutions, we can enclose a portion of spacetime as depicted in fig.(<ref>)].Eqs.(<ref>) imply that 2 r_jr_j^' =1. Inserting in eq.(<ref>) gives(x_j^')^2 =r_j^-2( g(σ) - ℓ_j^2/4σ r_j^2) . Squaring now twice eq.(<ref>) and replacing (x_j^')^2 from the above expressions leads to a quadratic equation for g(σ), the σσ component of the worldsheet metric. This equation has a singular solution, g=0, and a non-trivial one : g(σ) = λ^2 [(2r_1r_2 /ℓ_1 ℓ_2 )^2- (r_1^2/ℓ_1^2+ r_2^2/ℓ_2^2 - λ^2 σ)^2 ]^-1=λ^2/ A σ^2 + 2Bσ + C .where in the second equality we usedeq.(<ref>), and A= (λ_ max^2 - λ^2 )(λ^2 - λ_ min^2), B=λ^2 (M_1+M_2)-λ_0 ^2(M_1-M_2) ,C= - (M_1 - M_2)^2 .Weexpressed thequadratic polynomial appearing in the denominator of (<ref>) in terms ofM_j, λ and the critical tensions(<ref>), in order to render manifest the fact that for λ in the allowed range, λ_ min< λ< λ_ max, the coefficient A is positive. This is required for g(σ) to be positive near the boundary where σ→∞. In addition,AC≤ 0 which ensures that the two roots of the denominator in (<ref>)σ_± = -B ±(B^2 - AC)^1/2/A ,are real, and that the larger root σ_+is non-negative.Inserting(<ref>) in (<ref>) and fixing the sign of the square root near the conformal boundary gives after a little algebra x_1'/ℓ_1 = -σ(λ ^2 + λ _0^2)+M_1-M_2/2(σ + M_1ℓ_1^ 2)√(A σ ( σ- σ_+)(σ-σ_-)) , x_2'/ℓ_2=- σ(λ ^2 -λ _0^2)+M_2-M_1/2(σ + M_2ℓ_2^ 2)√(A σ ( σ- σ_+)(σ-σ_-)) . We maynow confirm ourearlier claim thatif σ_+ > 0then both x_1^'∝ dx_1/dr_1and x_2^'∝ dx_2/dr_2 diverge at this point, with an integrable divergence. Thus the two ℤ_2 symmetric solutions can be joined smoothly at this point.Furthermore, since σ + M_jℓ_j^2= r_j^2 ispositive,[Except for the measure-zero set of solutions in which the string passes through the center of global AdS_3.] the x_j^' are finite at all σ >σ_+. 
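As a sanity check of these closed-form expressions, the following short numerical sketch (Python/NumPy, with illustrative values of ℓ_j, λ and the cold-phase masses M_j) evaluates σ_±, g(σ) and x_j'(σ), and verifies that they satisfy the matching conditions (<ref>) and (<ref>) at several values of σ above the turning point.

import numpy as np

# illustrative parameters: l1 <= l2, lam in (lam_min, lam_max), both masses negative (cold slices)
l1, l2, lam = 1.0, 2.0, 0.9
M1, M2 = -1.0, -0.3

lam_min, lam_max = 1/l1 - 1/l2, 1/l1 + 1/l2
lam0sq = lam_max * lam_min

A = (lam_max**2 - lam**2) * (lam**2 - lam_min**2)
B = lam**2 * (M1 + M2) - lam0sq * (M1 - M2)
C = -(M1 - M2)**2
sig_p = (-B + np.sqrt(B**2 - A*C)) / A          # turning point sigma_+
sig_m = (-B - np.sqrt(B**2 - A*C)) / A

def g(s):                                       # sigma-sigma component of the worldsheet metric
    return lam**2 / (A*s**2 + 2*B*s + C)

def x1p(s):
    return -l1 * (s*(lam**2 + lam0sq) + M1 - M2) / (
        2*(s + M1*l1**2) * np.sqrt(A*s*(s - sig_p)*(s - sig_m)))

def x2p(s):
    return -l2 * (s*(lam**2 - lam0sq) + M2 - M1) / (
        2*(s + M2*l2**2) * np.sqrt(A*s*(s - sig_p)*(s - sig_m)))

for s in (sig_p + 0.5, sig_p + 2.0, sig_p + 10.0):
    r1sq, r2sq = s + M1*l1**2, s + M2*l2**2
    # the single independent Israel equation in the blue-shift parametrization
    assert np.isclose(r1sq*x1p(s)/l1 + r2sq*x2p(s)/l2, -lam*np.sqrt(s*g(s)))
    # the definition of g(sigma) from the induced metric
    assert np.isclose(x1p(s)**2, (g(s) - l1**2/(4*s*r1sq)) / r1sq)
    assert np.isclose(x2p(s)**2, (g(s) - l2**2/(4*s*r2sq)) / r2sq)
print("closed-form solution passes the matching conditions")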
Thus σ_+ is the unique turning point of the string, as advertised. Eqs.(<ref>) and (<ref>) give the general solution of the string equations for arbitrary mass parameters M_1, M_2 of the green and pink slices. These must be related to the torus parameters by interior regularity, and by the Dirichlet conditions at the conformal boundary. Explicitly, the boundary conditions for the different slice types of figure <ref> read:
L_j= 2∫_σ_+^∞ dσ x_j^' ,
L_j= nP_j+2∫_σ_+^∞ dσ x_j^' ,
L_j =Δ x_j|_Hor + 2∫_σ_+^∞ dσ x_j^' .
The integrals in these equations are the opening arcs, Δ x_j, between the two endpoints of a half string. They can be expressed as complete elliptic integrals of the first, second and third kind, see appendix <ref>. For the slices E1, H1, where x_j is a periodic coordinate, we have denoted by P_j >0 its period, and by n the string winding number. Finally, for strings entering the horizon we denote by Δ x_j|_Hor the opening arc between the two horizon-entry points in the jth coordinate chart. Possible phases of the ICFT for given torus parameters T, L_j must be solutions to one pair of conditions. Apart from interior regularity, we will also require that the string does not self-intersect. In principle, two string bits intersecting at an angle ≠π could join into another string. Such string junctions would be the gravitational counterparts of interface fusion <cit.>, and allowing them would make the holographic model much richer.[Generically the intersection point in one slice will correspond to two points that must be identified in the other slice; this may impose further conditions.] To keep our discussion simple, however, we will only allow a single type of domain wall in this work. It would be particularly interesting to understand what membrane intersections mean from the point of view of the field theory. The reader can easily convince him/herself that to avoid string intersections we must have P_j > L_j and n=1 in (<ref>), and Δ x_j|_Hor >0 in (<ref>).
§ PHASES: COLD, HOT & WARM
Among the five slice types of figure <ref>, H2 stands apart because it can only pair with itself. This is because a horizon is a closed surface, so it cannot end on the domain wall.[Except possibly in the limiting case where the wall is the boundary of space.] An equivalent way to see this is that, if we were to pair any other slice with H2, an observer could escape the black hole by traversing to the other side through the membrane. We will now show that the matching equations actually rule out several other pairs among the remaining slice types. One pair that is easy to exclude is [H1,H1], i.e. solutions that describe two black holes sitting on either side of the wall. Interior regularity would require in this case M_1 = M_2 = (2π T)^2. But eqs.(<ref>) and (<ref>) then imply that σ_+=0, so the wall cannot avoid the horizon, leading to a contradiction. This gives our first no-go lemma:
Two black holes on either side of a static domain wall are ruled out.
Note by contrast that superheavy domain walls (λ > λ_max) inflate and could thus prevent the black holes from coalescing.[Asymptotically-flat domain walls, which have been studied a lot in the context of Grand Unification <cit.>, are automatically in this range.] A second class of pairs one can exclude are the `centerless geometries' [E2,E2], [E2,E2^'], [E2^',E2] and [E2^',E2^']. We use the word `centerless' for geometries that contain neither a center of global AdS, nor a black hole in its place (see fig. <ref>).
If such solutions existed, all inertial observers would necessarily hit the domain wall, since there would be neither a center at which to rest, nor a horizon behind which to escape.[In the double-Wick rotated context of Simidzija and Van Raamsdonk the [E2,E2] geometries give traversable wormholes <cit.>.] The argument excluding such solutions is based on a simple observation: what distinguishes the centerless slices E2 and E2^' from those with an AdS center (E1) or a black hole (H1) is the sign of x_j^' at the turning point,
sign(x_j^'|_σ≈σ_+) = + for E2, E2^' ; - for E1, H1 .
This is not a deep result, rather simply a consequence of the conventions that we picked. Since we keep the slice of spacetime lying to the left of the string[More precisely, we keep the slice that lies in x<x(σ), where x(σ) is the string in the (x,σ) plane.], the sign at the turning point instructs us whether to insert the ℤ_2 symmetric part on the left or the right of our first wall, in order for them to join properly. Then (<ref>) follows from this discussion. Now from eqs.(<ref>) one has
(σ + M_1 ℓ_1^2) x_1'/ℓ_1+ (σ + M_2 ℓ_2^2) x_2'/ℓ_2<0 ,
so both x_j^' cannot be simultaneously positive. This holds for all σ, and hence also near the turning point. This is our second no-go lemma:
'Centerless' static spacetimes, in which all inertial observers would inevitably hit the domain wall, are ruled out.
We can actually exploit this argument further. As is clear from (<ref>), if M_1>M_2 then x^'_1 is manifestly negative, i.e. the green slice is of type E1 or H1. The pairs [E2^',E1] and [E2^',E2], for which the above inequality is automatic, are thus ruled out. One can also show that x^'_2|_σ≈σ_+ is negative if M_2 > 0 > M_1. This is obvious from (<ref>) in the range λ > λ_0, and less obvious but also true for λ < λ_0, as can be checked by explicit calculation.[The tedious algebra is straightforward and not particularly instructive, so we chose not to present it here. The inequalities were proven to hold by simplification using Mathematica.] The pairs [E1,E2^'] and [E2,E2^'], for which the above mass inequality is automatic, are thus also excluded. Recall that the energy density of the jth CFT reads ⟨ T_tt⟩ = 1/2ℓ_j M_j. Ruling out all pairs of E2^' with E1 or E2 implies therefore that in the ground state the energy density must be everywhere negative. When one L_j is much smaller than the other, the Casimir energy scales like E_0 ∼#/L_j. The fact that the coefficient # is negative means that the Casimir force is attractive, in agreement with general theorems <cit.>. This is the third no-go lemma:
A slice of global AdS_3 cannot be paired with a horizonless BTZ slice. This implies that in the ground state of the putative dual ICFT the energy density is everywhere negative.
We have collected for convenience all these conclusions in fig.<ref>. The table shows the eligible slice pairs, or the allowed topologies of static-domain-wall spacetimes. It also defines a color code for phase diagrams. The light yellow phases that feature a wall between the black hole and an AdS rest point are the gravitational avatars of the Faraday cage. Indeed, an inertial observer lying in the thermal AdS slice can be "shielded" from the horizon, and rest happily in that slice. In the full BTZ spacetime, an inertial observer will inevitably fall into the black hole. Such solutions are easier to construct for larger λ. Domain walls lighter than λ_0, in particular, can never shield from a black hole on the `true-vacuum' side.
Indeed, as follows easily from (<ref>), for λ <λ_0 and M_1>0> M_2 the sign of x_2^'|_σ≈σ_+ is positive, so geometries of type [H1,E1] are excluded.
§ EQUATIONS OF STATE
The different colors in fig.<ref> describe different phases of the system, since the corresponding geometries are topologically distinct. They differ in how the wall, the horizon (if one exists) and inertial observers intersect or avoid each other. Let us now turn to thermodynamics. For fixed Lagrangian parameters λ, ℓ_j, the canonical variables that determine the state of the system are the temperature T and the volumes L_1, L_2. As previously stated, because of scale invariance only two dimensionless ratios matter:
τ_1 := TL_1 , τ_2:= TL_2 or γ:= L_1/L_2 = τ_1/τ_2 .
On the other hand, natural variables for the interior geometry are the mass parameters M_j, and the size of the horizon Δ x_j|_Hor, if any exists. When several phases coexist, the dominant one is the one with the lowest free energy F = ∑_j (E_j - TS_j). The free energy F is determined from the renormalized on-shell gravitational action, as in sec. <ref>. The detailed computation is relegated to the appendix, app.<ref>. From this formula, and since the energy density is given by ⟨ T_tt⟩, we find :
E_j/L_j = (ℓ_j/2) M_j and S_j = r^H_j Δ x_j|_Hor/4G = 2πℓ_j √(M_j) Δ x_j|_Hor ,
where E_j denotes the total energy of slice j, while S_j denotes the total entropy. Note that in deriving the entropy we recover the Bekenstein-Hawking formula, as r^H_j Δ x_j|_Hor is the "area" of the horizon. Thus the microcanonical parameters (energy, entropy) are those naturally related to the interior geometry. The entropies are scale invariant. The other key dimensionless variable is the mass ratio, viz. the ratio of energy densities per degree of freedom in the two CFTs,
μ:= M_2/M_1 .
The Dirichlet conditions, (<ref>-<ref>), give for each type of geometry two relations among the above variables that play the role of equations of state.[In homogeneous systems there is a single equation of state. Here we have one equation for each subsystem.] They relate the natural interior parameters S_j and μ to the variables τ_j and γ of the boundary torus. Note that in each phase of the system there will remain one free interior parameter per slice, since for horizonless slices S_j=0 and for slices with horizon M_j = (2π T)^2. In computing the phase diagram we will have to invert these equations of state.
§.§ High-T phase
For fixed L_j and very high temperature, we expect the black hole to grow so large that it eats away a piece of the domain wall and the AdS rest points. The dominant solution is thus of type [H2,H2] and regularity fixes the mass parameters in both slices, M_1=M_2= (2π T)^2. The boundary conditions (<ref>) reduce in this case to simple equations for the opening horizon arcs Δ x_j|_Hor. Performing explicitly the integrals (see app. <ref>) gives :
L_1 - Δ x_1|_Hor = - (1/π T) tanh^-1( ℓ_1(λ^2 + λ^2_0)/2λ) ,
L_2 - Δ x_2|_Hor = - (1/π T) tanh^-1( ℓ_2 (λ^2 -λ^2_0)/2λ) .
For consistency we must have Δ x_j|_Hor>0, which is automatic if λ > λ_0. If λ < λ_0, on the other hand, positivity of Δ x_2|_Hor puts a lower bound on τ_2,
τ_2 ≥ (1/π) tanh^-1( ℓ_2 (λ_0^2 -λ^2)/2λ) := τ_2^* .
We see here a first interpretation of the critical tension λ_0 encountered in section <ref>. For walls lighter than λ_0 there is a region of parameter space where the hot solution ceases to exist, even as a metastable phase. It is worth noting, however, that this consistency condition is only valid if we ignore wall fusion.
If we were to repeat the analysis including those more general spacetimes, τ_2^* could become a critical temperature signifying the transition to another phase. The total energy and entropy in the high-T phase read
E_[hot] = 1/2(ℓ_1 L_1 M_1 + ℓ_2 L_2 M_2) = 2π^2T^2(ℓ_1 L_1 +ℓ_2 L_2) ,
S_[hot] = 4π^2T (ℓ_1 Δ x_1|_Hor +ℓ_2 Δ x_2|_Hor) = 4π^2T (ℓ_1 L_1 +ℓ_2 L_2)+ 2 log g_I ,
where log g_I is given by (<ref>) and the rightmost expression of the entropy follows from (<ref>) and a straightforward reshuffling of the arctangent functions. This is a satisfying result. Indeed, the first term on the right-hand side of (<ref>) is the thermal entropy of the two CFTs (being extensive, these entropies cannot depend on the ratio L_1/L_2), while the second term is the entropy of the two interfaces on the circle (this justifies a posteriori why we called log(g) an entropy in (<ref>)). The Bekenstein-Hawking formula captures nicely both contributions. Eqs.(<ref>) and (<ref>) show that shifting the L_j at fixed T does not change the entropy if and only if ℓ_1 δ L_1 = -ℓ_2 δ L_2. Moving in particular a defect (for which ℓ_1=ℓ_2) without changing the volume L_1+L_2 is an adiabatic process, while moving a more general interface generates/absorbs entropy by modifying the density of degrees of freedom.
§.§ Low-T phase(s)
Consider next the ground state of the system, at T=0. The only geometries which exist at zero temperature are of the horizonless types: the double-center geometry [E1, E1], or the single-center ones [E1, E2] and [E2, E1] (see fig.<ref>). Here the entropies S_j=0, and the only relevant dimensionless variables are the volume and energy-density ratios, γ and μ. Note that they are both positive since L_j>0 and M_j<0 for both j. The Dirichlet conditions for horizonless geometries read
√(|M_1|) L_1 = 2πδ_𝕊_1,E1 - f_1(μ) , √(|M_1|) L_2 = (2π/√(μ)) δ_𝕊_2,E1 - f_2(μ) ,
where δ_𝕊_j,E1 = 1 if the jth slice is of type E1 and δ_𝕊_j,E1=0 otherwise, and
f_1(μ)= (ℓ_1/√(A)) ∫_s_+^∞ ds [s(λ^2+λ_0^2)- 1 + μ] / [(s-ℓ_1^2) √(s(s-s_+)(s-s_-))] ,
f_2(μ)= (ℓ_2/√(A)) ∫_s_+^∞ ds [s(λ^2 - λ_0^2)+ 1-μ] / [(s -μℓ_2^2) √(s(s-s_+)(s-s_-))] ,
with
A s_± = λ^2 (1+μ) - λ_0^2 (1-μ) ± 2λ√((1-μ)/ℓ_2^2 + (μ^2 -μ)/ℓ_1^2+ μλ^2) .
The dummy integration variable s is the appropriately rescaled blueshift factor of the string worldsheet, s= σ/|M_1|. Dividing the two sides of eqs. (<ref>) gives γ as a function of μ for each of the three possible topologies.[The functions f_j(μ) are combinations of complete elliptic integrals of the first, second and third kind, see app. <ref>. The value μ=1 gives γ = 1, corresponding to the scale-invariant AdS_2 string worldsheet. The known supersymmetric top-down solutions live at this special point in phase space.] If the ground state of the putative dual quantum-mechanical system were unique, we should find a single slice-pair type and value of μ for each value of γ. Numerical plots show that this is indeed the case. Specifically, we found that γ(μ) is a monotonically-increasing function of μ for any given slice pair, and that it changes continuously from one type of pair to another. We will return to these branch-changing `sweeping transitions' in section <ref>. Let us stress that the uniqueness of the cold solution did not have to be automatic in classical gravity, nor in the dual large-N quantum mechanics. For most of the (ℓ_j, λ) parameter space, as γ ranges in (0, ∞) the mass ratio μ covers also the entire range (0, ∞). However, we found that if ℓ_1 <ℓ_2 (strict inequality), and for sufficiently light domain walls, γ vanishes at some positive μ= μ_0(λ, ℓ_j).
Below this critical value γ becomes negative, signaling that the wall self-intersects and the solution must be discarded. This leads to a striking phenomenon that we discuss in section <ref>.
§.§ Warm phases
The last set of solutions of the model are the yellow- or orange-colored ones in fig.<ref>. Here the string avoids the horizon, so the slice pair is of type [H1,X] or [X,H1], with X one of the horizonless types: E1, E2 or E2^'. Assume first that the black hole is on the green side of the wall, so that M_1 = (2π T)^2. In terms of μ the Dirichlet conditions (<ref>, <ref>) read:
2π T Δ x_1|_Hor - 2πτ_1 = f̃_1(μ) , 2πτ_2 = (2π/√(-μ)) δ_𝕊_2,E1 - f̃_2(μ) ,
where
f̃_1(μ)= (ℓ_1/√(A)) ∫_s̃_+^∞ ds [s(λ^2+λ_0^2) +1-μ] / [(s+ℓ_1^2) √(s(s- s̃_+)(s- s̃_-))] ,
f̃_2(μ)= (ℓ_2/√(A)) ∫_s̃_+^∞ ds [s(λ^2 - λ_0^2)- 1+μ] / [(s+μℓ_2^2) √(s(s-s̃_+)(s-s̃_-))] ,
and the roots s̃_± = σ_±/M_1 inside the square root are given by
A s̃_± = - λ^2 (1+μ) + λ_0^2 (1-μ) ± 2λ√((1-μ)/ℓ_2^2 + (μ^2 -μ)/ℓ_1^2+ μλ^2) .
In the first condition (<ref>) we have used the fact that the period of the green slice that contains the horizon is P_1 =Δ x_1|_Hor. This comes simply from the fact that, if the full black hole is included in the slice, we can compute its area if we know the x-periodicity. If the black hole is on the pink side of the wall, the conditions take a similar form in terms of the inverse mass ratio μ̂= μ^-1 = M_1/M_2,
2π T Δ x_2|_Hor - 2πτ_2 = f̂_2(μ̂) , 2πτ_1 = (2π/√(-μ̂)) δ_𝕊_1,E1 - f̂_1(μ̂) ,
where here
f̂_1(μ̂)= (ℓ_1/√(A)) ∫_ŝ_+^∞ ds [s(λ^2+λ_0^2) +μ̂- 1] / [(s+μ̂ℓ_1^2) √(s(s- ŝ_+)(s- ŝ_-))] ,
f̂_2(μ̂)= (ℓ_2/√(A)) ∫_ŝ_+^∞ ds [s(λ^2 - λ_0^2)- μ̂+1] / [(s+ℓ_2^2) √(s(s-ŝ_+)(s-ŝ_-))] ,
and the roots ŝ_± = σ_±/M_2 inside the square root are given by
A ŝ_± = - λ^2 (μ̂+ 1) + λ_0^2 (μ̂- 1) ± 2λ√((μ̂^2- μ̂)/ℓ_2^2 + (1 -μ̂)/ℓ_1^2+ μ̂λ^2) .
The functions f̃_j and f̂_j, as well as the f_j of the cold phase, derive from the same basic formulae (<ref>) and differ only by a few signs. We chose to write them out separately because these signs are important. Note also that, while in cold solutions μ is always positive, here μ and its inverse μ̂ can have either sign. Not all values of μ and μ̂, however, correspond to admissible solutions. For a pair of type [H1,X] we must demand (i) that the right-hand sides in (<ref>) be positive (the non-intersection requirement), and (ii) that x_1^'|_σ≈σ_+ be negative (the turning-point condition (<ref>)). Likewise, for solutions of type [X, H1] we must demand that the right-hand sides in (<ref>) be positive and that x_2^'|_σ≈σ_+ be negative. The turning-point requirement is easy to implement. In the [H1,X] case, x_1^'|_σ≈σ_+ is negative when the numerator of the integrand in (<ref>), evaluated at s=s̃_+, is positive. Likewise, for the [X,H1] pairs, x_2^'|_σ≈σ_+ is negative when the numerator of the integrand in (<ref>), evaluated at s=ŝ_+, is positive. After a little algebra these conditions take a simple form:
for [H1,X] : μ∈ (-∞, 1] ; for [X,H1] : μ̂= μ^-1∈ (-∞, 1] .
Recalling that μ = μ̂^-1 = M_2/M_1, we conclude that in all the cases the energy density per degree of freedom in the horizonless slice is lower than the corresponding density in the black hole slice. This agrees with physical intuition: the energy density per degree of freedom in the cooler CFT is less than the thermal density πT^2/6; the interfaces did not let the theory thermalize. When μ→ 1 or μ̂→1, the wall enters the horizon and the energy is equipartitioned. This completes our discussion of the equations of state.
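To illustrate how these relations are used in practice, here is a minimal numerical sketch of the simplest case, the hot [H2,H2] equations of state (Python/NumPy; the parameter values are illustrative and not taken from the text). It evaluates the horizon arcs, the energy and the Bekenstein-Hawking entropy, and checks that the piece of the entropy left over after subtracting the extensive thermal term is temperature independent, consistent with identifying it with twice the interface entropy log g_I.

import numpy as np

l1, l2, lam = 1.0, 2.0, 0.9            # illustrative; here lam > lam_0, so dx_j > 0 automatically
lam0sq = 1/l1**2 - 1/l2**2             # lambda_0^2 = lambda_max * lambda_min

def hot_phase(T, L1, L2):
    # hot [H2,H2] solution: both slices carry the horizon, M1 = M2 = (2 pi T)^2
    dx1 = L1 + np.arctanh(l1*(lam**2 + lam0sq)/(2*lam)) / (np.pi*T)
    dx2 = L2 + np.arctanh(l2*(lam**2 - lam0sq)/(2*lam)) / (np.pi*T)
    E = 2*np.pi**2 * T**2 * (l1*L1 + l2*L2)
    S = 4*np.pi**2 * T * (l1*dx1 + l2*dx2)          # Bekenstein-Hawking entropy
    return dx1, dx2, E, S

L1, L2 = 1.0, 1.5
for T in (1.0, 2.0):
    dx1, dx2, E, S = hot_phase(T, L1, L2)
    leftover = S - 4*np.pi**2 * T * (l1*L1 + l2*L2)  # subtract the extensive thermal piece
    print(T, dx1 > 0 and dx2 > 0, E, S, leftover)
# the leftover is the same at both temperatures: it is the (doubled) interface contribution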
To summarize, these equations relate the parameters of the interior geometry (μ, S_j) to those of the conformal boundary(γ , τ_j).The relation involveselementary functionsinthe hot phase, and was reduced to a single function γ(μ),that can be readily plotted, inthe cold phases. Furthermoreatany given point in parameter spacethe hot and cold solutions, when they exist,are unique. The excluded regions are τ_2< τ_2^*(λ, ℓ_j)for the hot solutions,andμ > μ_0(λ, ℓ_j) for the cold solution withμ_0 the point where γ =0.Inwarm phasesthe storyis richersince more than onesolutions typically coexist at any given value of (γ, τ_j).Some solutionshavenegative specific heat, as we will discuss later. To find the parameter regions where different solutions exist requires inverting the relation between (γ, τ_j) and (μ, S_j). We will do this analytically in some limiting cases, and numericallytocompute the full phase diagram in section <ref>. § PHASE TRANSITIONS In the previous section we have outlined three phases classified by the presence of a horizon, and according to whether the strings enter the horizon. Among these, we must differentiate phases that have a different number of "centers". We can then identify three types of transitions between the different phases : * Hawking-Page transitions describingthe formation of ablack hole. Thesetransitionsfrom the cold to the hot or warm phases of fig.<ref> are always first order;* Warm-to-hot transitionsduring whichpart of the wallis captured by the horizon. We will show that these transitions are alsofirst-order;* Sweeping transitions where the wall sweeps away acenterof global AdS, i.e.a rest point forinertial observers. These are continuous transitionsbetween the one and two-center phases of fig.<ref>.It isinstructiveto picture thesetransitionsby plotting the metric factor g_ttwhiletraversing space alongthe axis of reflection symmetry, see fig.<ref>.Before embarking in numerical plots, we will first dothe following things:(i) Comment on the ICFT interpretation of thesetransitions; (ii) Compute thesweeping transitions analytically;and (iii) Prove that the warm-to-hot transitions arefirst order, i.e. that one cannot lower the wall to the horizon continuously by varying the boundary data. §.§ ICFTinterpretationWhen a holographicdual exists,Witten has argued that the appearance of ablack holeat the Hawking-Page (HP) transition signalsdeconfinement inthe gauge theory <cit.>. Within this interpretation [There is an extensive literature on the subject including<cit.>, studies specific to two dimensions <cit.>, and recent discussions in relation withthe superconformal indexinN=4 super Yang Mills <cit.>. For an introductoryreview see <cit.>] leads to the conclusion that in warm phases a confined theory coexists with a deconfined one.We will see below that such coexistence is easier when the confined theory is CFT_2, i.e. the theorywith thelarger central charge.[Even though for homogeneous 2-dimensional CFTsthe critical temperature,τ_ HP= 1,does not depend onthe central charge by virtue of modular invariance.] This is natural from the gravitational perspective. Solutions of type[H1, X] are more likely than solutions of type[X, H1] because ablack hole forms more readily on the 'true-vacuum' side of the wall. 
We will actually provide some evidence later that if c_2 > 3c_1 there are no equilibrium phases at all in which CFT_2 is deconfined while CFT_1 stays confined. The question that jumps to one's mind is what happens for thick walls, where one expects a warm-to-hot crossover rather than a sharp transition. One possibility is that the coexistence of confined and deconfined phases is impossible in microscopic holographic models. Alternatively, an appropriately defined Polyakov loop <cit.> could provide a sharp order parameter for this transition. For sweeping transitions the puzzle is the other way around. Here a sharp order parameter exists in classical gravity: it is the number of rest points for inertial observers. This can be defined both for thin- and for thick-wall geometries. The interpretation on the field theory side is however unclear. The transitions could be related to properties of the low-lying spectrum at infinite N, or to the entanglement structure of the ground state. More particularly, one hypothesis was that the entanglement wedge of the subsystem containing CFT_2 would include part of the side-1 bulk only for λ<λ_0. This hypothesis was motivated by looking at the near-boundary behavior of the RT surfaces. In that case RT surfaces are semi-circles anchored at the boundary, and thus they will necessarily intersect the membrane when λ<λ_0, and will avoid it when λ>λ_0. As promising as this initial lead was, it turned out not to be correct, as one can find crossing RT surfaces even for λ>λ_0. So the meaning of this sweeping transition in the field theory remains to be found. We will make a deeper dive into RT surfaces for ICFT in chap.<ref>.
§.§ Sweeping transitions
Sweeping transitions are continuous transitions that happen at fixed values of the mass ratio μ. We will prove these statements here. Assume for now continuity, and let the jth slice go from type E1 to type E2. The transition occurs when the string turning point and the center of the jth AdS slice coincide, i.e. when
r_j(σ_+)= √(σ_+ + M_jℓ_j^2)= 0 .
Clearly, this has a solution only if M_j<0. Inserting in (<ref>) the expressions (<ref>),(<ref>) for σ_+ gives two equations for the critical values of μ, with the following solutions
μ_1^*= (1 - ℓ_2^2λ^2) ℓ_1^2/ℓ_2^2 , μ_2^*= (ℓ_1^2/ℓ_2^2)/(1 - ℓ_1^2λ^2) .
In the low-T phases both M_j are negative and μ is positive. Furthermore, a little algebra shows that for all λ∈ (λ_min, λ_max) the following is true
x_1^'|_σ≈σ_+ <0 at μ≫ 1 , x_2^'|_σ≈σ_+ <0 at μ≪ 1 .
This means that for μ≫ 1 the green slice is of type E1, and for μ≪ 1 the pink slice is of type E1. A sweeping transition can occur if the critical mass ratios (<ref>) are in the allowed range. We distinguish three regimes of λ (a short numerical evaluation of the critical ratios is sketched after this list):
*Heavy (λ > 1/ℓ_1): None of the μ_j^* is positive, so the solution is of type [E1,E1] for all μ, i.e. cold solutions are always double-center;
*Intermediate (1/ℓ_1> λ > 1/ℓ_2): Only μ_2^* is positive. If this is inside the range of non-intersecting walls, the solution goes from [E1,E2] at large μ to [E1,E1] at small μ. Otherwise the geometry is always of the single-center type [E1,E2];
*Light (λ < 1/ℓ_2): Both μ_1^* and μ_2^* are positive, so there is the possibility of two sweeping transitions: from [E2,E1] at small μ to [E1,E2] at large μ, passing through the double-center type [E1,E1].
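The following minimal sketch (Python/NumPy; the radii and tensions are illustrative, with ℓ_2 < 2ℓ_1 chosen so that the light regime is non-empty) reproduces the sign pattern of μ_1^*, μ_2^* in the three regimes, and cross-checks μ_2^* against the form 1/(b^2-κ^2) used later in the phase-diagram section.

import numpy as np

def sweeping_ratios(l1, l2, lam):
    # critical mass ratios at which the turning point reaches r_j(sigma_+) = 0
    mu1 = (1 - (l2*lam)**2) * l1**2 / l2**2
    mu2 = (l1**2 / l2**2) / (1 - (l1*lam)**2)
    return mu1, mu2

l1, l2 = 1.0, 1.5                                   # illustrative radii with l2 < 2*l1
for lam, regime in [(1.2, "heavy"), (0.8, "intermediate"), (0.5, "light")]:
    mu1, mu2 = sweeping_ratios(l1, l2, lam)
    print(regime, mu1 > 0, mu2 > 0)                 # (False, False), (False, True), (True, True)

# cross-check against the form used in the phase-diagram section: mu_2^* = 1/(b^2 - kappa^2)
lam = 0.8
b, kappa = l2/l1, lam*l2
assert np.isclose(sweeping_ratios(l1, l2, lam)[1], 1/(b**2 - kappa**2))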
Note that since λ_min = 1/ℓ_1 - 1/ℓ_2, this range of λ only exists if ℓ_2< 2ℓ_1, i.e. when CFT_2 has no more than twice the number of degrees of freedom of the more depleted CFT_1. We can now confirm that sweeping transitions are continuous, not only in terms of the mass ratio μ but also in terms of the ratio of volumes γ. To this end, we expand the relations (<ref>) around the above critical points and show that the L_j indeed vary continuously across the transition. The calculations can be found in the appendix <ref>. In fact, numerically we can see that the transition is not only continuous but completely smooth. Thus, the phase transition in question is not to be understood in the thermodynamical sense, which is what we mean by labeling it as "continuous". For the warm phases, we proceed along similar lines. One of the two M_j is now equal to (2π T)^2 >0, so sweeping transitions may only occur for negative μ. Consider first warm solutions of type [H1,X] with the black hole on the `true vacuum' side. A little calculation shows that x_2^'|_σ≈σ_+ is negative, i.e. X=E1, if and only if
λ > 1/ℓ_1 and μ <μ_2^*<0 .
Recall that when X=E1 some inertial observers can be shielded from the black hole by taking refuge at the rest point of the pink slice. We see that this is only possible for heavy walls (λ >1/ℓ_1) and for μ <μ_2^*. A sweeping transition [H1,E1]→ [H1,E2] takes place at μ= μ_2^*. Consider finally a black hole on the `false vacuum' side, namely warm solutions of type [X,H1]. Here x_1^'|_σ≈σ_+ is negative, i.e. X has a rest point, if and only if the following conditions are satisfied
λ > 1/ℓ_2 and μ̂:= μ^-1<(μ_1^*)^-1 <0 .
Shielding from the black hole looks easier here: both heavy and intermediate-tension walls can do it. In reality, however, we have found that solutions with the black hole on the `false vacuum' side are rare, and that the above inequality pushes μ̂ outside the admissible range. Even rarer are the cases where such a solution is dominant in terms of free energy. The general trend emerging from the analysis is that the heavier the wall, the more likely are the two-center geometries. A suggestive calculation actually shows that ∂σ_+/∂λ|_{M_j fixed} is positive for two-center solutions and negative for single-center solutions, where the word `center' here includes both an AdS rest point and a black hole. At fixed energy densities, a single center is therefore pulled closer to a heavier wall, while two centers are instead pushed away. It might be interesting to also compute ∂σ_+/∂λ and ∂ V/∂λ at fixed L_j, where V is the regularized volume of the interior space. In the special case of the vacuum solution with an AdS_2 wall, the volume (and the associated complexity <cit.>) can be seen to grow with the tension λ.
§.§ Warm-to-hot transitions
In warm-to-hot transitions the thin domain wall enters the black-hole horizon. One may have expected this to happen continuously, i.e. to be able to lower the wall to the horizon smoothly, by slowly varying the boundary data L_j, T. We will now show that, if the tension λ is fixed, the transition is actually always first order. Note first that in warm solutions the slice that contains the black hole has M_j= (2π T)^2. If the string turning point approaches the horizon continuously, then σ_+ → 0. From eqs. (<ref>, <ref>) we see that this can happen if and only if (M_1-M_2)→ 0, which implies in passing that the solution must necessarily be of type [H1,E2^'] or [E2^',H1].
Expanding around this putative point where the wall touches the horizon, we set
(M_1 - M_2)/(M_1+M_2) := δ with |δ|≪ 1 ⟹ σ_+ ≈ (2π T/λ)^2 δ^2 .
Recalling that the horizonless slice has the smaller M_j, we see that for positive δ the black hole must be in the green slice and μ = 1 - 2δ +O(δ^2), while for negative δ the black hole is in the pink slice and μ̂= 1+2δ +O(δ^2). The second option can be immediately ruled out, since it is impossible to satisfy the boundary conditions (<ref>). Indeed, f̂_1(μ̂≈ 1) is manifestly positive, as is clear from (<ref>), and we have assumed that 𝕊_1 is of type E2^'. Thus the second condition (<ref>) cannot be obeyed. By the same reasoning we see that for δ positive, and since now 𝕊_2 is of type E2^', we need f̃_2(μ≈ 1) to be negative. As is clear from the expression (<ref>), this implies that λ <λ_0. The upshot of the discussion is that a warm solution arbitrarily close to the hot solution may exist only if λ <λ_0 and if the black hole is on the true-vacuum side. It is easy to see that under these conditions the two branches of solution indeed meet at μ =1, Δ x_2|_Hor=0 and hence, from (<ref>),
τ_2 = (1/π) tanh^-1( ℓ_2 (λ_0^2 -λ^2)/2λ) := τ_2^* .
Recall from section <ref> that this is the limiting value for the existence of the hot solution; the solution ceases to exist at τ_2<τ_2^*. The nearby warm solution could in principle take over in this forbidden range, provided that τ_2(δ) decreases as δ moves away from zero. However, it actually turns out that τ_2(δ) initially increases for small δ, so this last possibility for a continuous warm-to-hot transition is also ruled out. To see why this is so, expand (<ref>) and (<ref>) around μ=1,
s̃_+ = δ^2/λ^2 +O(δ^3) , s̃_- = - (4λ^2/A)(1-δ(1+ λ_0^2/λ^2)) +O(δ^2) ,
and shift the integration variable s := y+s̃_+ so that (<ref>) reads
2πτ_2(δ)= (ℓ_2/√(A)) ∫_0^∞ dy [ (y(λ_0^2 - λ^2) + 2δ) / ((y+ μℓ_2^2) √(y(y+s̃_+)(y - s̃_-))) +O(δ^2) ] .
We neglected in the integrand all contributions of O(δ^2), except for the s̃_+ in the denominator that regulates the logarithmic divergence of the O(δ logδ) correction. Now use the inequalities (<ref>):
(y(λ_0^2 - λ^2) + 2δ) / √((y+s̃_+)(y - s̃_-)) > (y(λ_0^2 - λ^2) + 2δ) / √((y+δ^2/λ^2)(y +4λ^2/A)) > √(y)(λ_0^2 - λ^2) / √(y +4λ^2/A) ,
where the second one is equivalent to 2δ > (λ_0^2/λ^2 - 1)δ^2, which is true for small enough δ. Plugging in (<ref>) shows that τ_2(δ)> τ_2(0) at leading order in δ, proving our claim. A typical τ_2(μ) in the [H1,E2] and [H1,E2^'] branch of solutions, and for λ <λ_0, is plotted in figure <ref>. The function grows initially as μ moves away from 1, reaches a maximum value and then turns around and goes to zero as μ→ -∞. The red line indicates the limiting value τ_2^* below which there is no hot solution. For τ_2 slightly above τ_2^* we see that there are three coexisting black holes, the hot one and two warm ones. For τ_2 < τ_2^*, on the other hand, only one warm solution survives, but it describes a wall at a finite distance from the horizon. Whether this is the dominant solution or not, the transition is therefore necessarily first order.
§ EXOTIC FUSION AND BUBBLES
Before proceeding to the phase diagram, we pause here to discuss the peculiar phenomenon announced earlier, in section <ref>. This arises in the limits γ= L_1/L_2→ 0 or γ→∞, with L_1+L_2 and T kept fixed. In these limits, the conformal boundary of one slice shrinks to a point. Consider for definiteness the limit L_1 → 0.
In the language of the dual field theory the interface and anti-interface fuse in this limitinto a defect of CFT_2.The naive expectation, based on free-field calculations<cit.>, is that this is the trivial (or identity) defect. Accordingly, the greeninteriorslice shouldrecede to the conformal boundary, leaving as the only remnant a (divergent)Casimir energy.We have found that this expectation is not always borne out as we will now explain. Suppose first that the surviving CFT_2 is in its ground state, and that the result of the interface-antiinterfacefusion is the expected trivial defect. The geometry should in this caseapproachglobal AdS_3 of radius ℓ_2, withM_2tending to-(2π/L_2)^2. Furthermore,σ_+should go to infinity in order for the green slice toshrink towards the ultraviolet region.As seenfrom eqs. (<ref>, <ref>)this requires M_1 → -∞, so thatμ should vanish together withγ. This is indeedwhat happens inmuch of the (λ, ℓ_1, ℓ_2) parameter space. One finds μ∼γ^2→ 0, a scaling compatible withthe expected Casimir energy ∼#/L_1. Nevertheless,sometimes γ vanishes at finite μ_0. In such cases, as μ→μ_0the green slice does not disappear even though its conformal boundary has shrunk to a point.This is illustrated by the leftfigure <ref>, which showsastatic bubble of `true vacuum' suspended froma point on theboundary of the `false vacuum'.[These are static solutions,not to be confused with `bags of gold' which arecosmologiesglued onto the backside of a Schwarzschild-AdS spacetime, see e.g.<cit.>. The phenomenon is reminiscent ofspacetimesthat realize `wedge'or codimension-2 holography, like those in <cit.>. ] To convince ourselves that the peculiar phenomenon is real, we give an analytic proof in appendix <ref> of the existence of such suspendedbubbles in at least oneregion of parameters(ℓ_2 > ℓ_1 and λ≈λ_ min>0). Furthermore, since the vacuum solution for a given γ is unique, there is no other competing solution. In the example of app. <ref>, in particular,γ is finite and negative at μ=0.In the languageof field theory, this is a striking phenomenon. It implies that interface and anti-interface do not annihilate, but fuse into an exotic defectgeneratingspontaneously a new scale in the process. This is the blueshift at the tip of the bubble, σ_+(μ_0, L_2), or better the corresponding frequency scale r_2(σ_+)inthe D(efect)CFT.The phenomenon is not symmetric under the exchange 1↔ 2. Static bubbles of thefalse vacuum (pink)spacetime insidethetrue (green) vacuum do not seem to exist. Weproved this analytically for λ < λ_0, andnumerically for all othervalues of the tension. We have also found that the suspendedgreen bubblecan be of typeE1, i.e. have a center. The redshift factor g_tt inside the bubble can even be lower than in the surroundingspace,so that the bubble hosts the excitations of lowest energy. We did not showthisanalytically, but the numerical evidence is compelling. Dosuspendedbubblesalso exist when the surroundingspacetime containsa black hole ? The answer is affirmative as one can show semi-analytically by focussing on the region λ≈λ_0. We have seen in the previoussection that near this critical tension there exist warm solutions of type [H1,E2^'] with the wall arbitrarily close the horizon. Let us consider the function τ_2(μ, λ) given in this branch of solutions by (<ref>) and (<ref>)(with 𝕊_2 ≠ E1). This is a continuous function in both arguments, so as λ increases past λ_0, τ_2(1)goes from positive to negative with the overall shape of the function varying smoothly. 
This is illustrated in figure <ref>, where we plot τ_2(μ) for λ slightly below and slightly above λ_0. It should be clear from these plots that for λ > λ_0 (the plot on the right) τ_2 vanishes at a finite μ≈ 1. This is a warm bubble solution, as advertised. We have found more generally that warm bubbles can also be of type E1, thus acting as a suspended Faraday cage that protects inertial observers from falling towards the horizon of the black hole. Contrary, however, to what happened for the ground state, warm bubble solutions are not unique. There is always a competing solution at μ→ -∞, and it is the dominant one by virtue of its divergent negative Casimir energy. A stability analysis would show if warm bubble solutions can be metastable and long-lived, but this is beyond our present scope. As for warm bubbles of type [X, H1], that is with the black hole in the false-vacuum slice, these also exist but only if ℓ_2<3ℓ_1. Indeed, as we will see in a moment, when ℓ_2> 3ℓ_1 the wall cannot avoid a horizon located on the false-vacuum side. Finally, simple inspection of fig.<ref> shows that, by varying the tension, the bubble solutions for λ > λ_0 go over smoothly to the hot solution at λ=λ_0. At this critical tension the bubble is inscribed between the horizon and the conformal boundary, as in figure <ref>. This gives another possible meaning to λ_0: only walls with this tension may touch the horizon without falling inside.
§ PHASE DIAGRAMS
In this last section of the paper we present numerical plots of the phase diagram of the model. We work in the canonical ensemble, so the variables are the temperature and volumes, or by scale invariance two of the dimensionless ratios defined in (<ref>). We choose these to be τ= τ_1+τ_2 = T(L_1+L_2) and γ = L_1/L_2. The color code is as in fig.<ref>. We plot the phase diagram for different values of the action parameters ℓ_1, ℓ_2, λ. Since our analysis is classical in gravity, Newton's constant G plays no role. Only two dimensionless ratios matter,[Dimensionless in gravity, not in the dual ICFT.] for instance
b := ℓ_2/ℓ_1 = c_2/c_1≥ 1 and κ := λℓ_2 ∈ (b-1, b+1) .
The value b=1 corresponds to a defect CFT, while b≫ 1 is the opposite "near void" limit in which the degrees of freedom of CFT_2 overwhelm those of CFT_1. The true vacuum approaches in this limit the infinite-radius AdS, and/or the false vacuum approaches flat spacetime. The critical tension λ_0 corresponds to κ_0 = √(b^2-1).
§.§ Inversion algorithm
As mentioned before, to be able to plot the phase diagrams one needs to invert the equations (<ref>-<ref>), to find which pairs of M_1, M_2 and Δ x|_Hor yield the desired canonical parameters τ and γ. These equations are not invertible in general, since the integrals on the RHS of (<ref>-<ref>) yield elliptic functions, as shown in the appendix <ref>. We must thus resort to numerical methods. Given τ and γ, we want to find all bulk parameters which satisfy the conditions (<ref>-<ref>). To do so, we need to compartmentalize the search, treating the possible bulks [H,H], [H,E], [E,H] and [E,E] separately. This is because, according to the nature of the slices (hot, warm, cold), the equations that one needs to solve are different, see (<ref>-<ref>). Let us begin with the simpler case [H,H]. For this one, all we have to do is determine Δ x_i|_Hor on both sides, which is immediately given by (<ref>). One removes the solution if the necessary condition (<ref>) is not satisfied, and we are done. The cases [H1, E] and [E, H1] are symmetric; let us specialize to the case where the black hole is on side 1.
One gets equation (<ref>) for side 1 and either of (<ref>,<ref>) for side two, depending on whether it is E1 or E2. The free parameters are P_1 and M_2, M_1 being set by the temperature. The tricky part here is the possible jump in the relevant equation as we move M_2 through a sweeping transition. By implementing the condition (<ref>), we can immediately know which equation to consider, and as the two are connected continuously due to the smoothness of the sweeping transition, it will not throw off the numerical algorithm. What we do then is to solve the equation for side 2, which depends only on M_2. We do this numerically, using a secant algorithm. From our analysis of the warm solution (see fig. <ref>), we expect to find at most two solutions of this type. The main difficulty here is that the numerical algorithm will find a single solution, given an initial value. In addition, it might leave the allowed range, which is M_2<M_1=(2π T)^2, necessary for the validity of the warm solution. To remedy this problem, we repeat the application of the algorithm for several different starting points, and stop it as soon as it leaves the allowed range. We find that this is sufficient to yield both solutions consistently, as long as τ and γ remain reasonable. The case [E, E] is comparatively simpler. Again, we know from our analysis of sec.<ref> that we should expect exactly one solution. We have two coupled equations (<ref>,<ref>), for the two free parameters M_i<0. By taking their ratio, we find an equation which depends only on μ=M_2/M_1, reducing the problem to a single equation. Again, we solve it by means of a secant algorithm, stopping it when it finds a solution or exits the allowed range M_i<0. We again apply the algorithm for evenly distributed initial conditions, to ensure that we find the solution. Similarly to the warm case, we have found that we always find the solution provided the inputs τ and γ are not extreme. Once the solution is found, we should verify that 2π/√(-M_j)>L_j, which is the "no self-intersection" condition. This provides us with a set of candidate solutions in the form of quadruplets (M_1,M_2,Δ x_1|_Hor,Δ x_2|_Hor). All that is left to do is to find the dominant solution, by plugging the quadruplets into (<ref>) and selecting the one with the lowest free energy. To plot the phase diagram, we discretize a chosen (τ,γ) region, and for each point perform the above computation. This leads to the "pixelization" that can be noticed in the phase diagrams of the next section.
§.§ Phase diagram examples
As explained in the introduction, although the interpretation is different, our diagrams are related to the ones of Simidzija and Van Raamsdonk <cit.> by double Wick rotation (special to 2+1 dimensions). Since time in this reference is non-compact, only the boundaries of our phase diagrams, at γ = 0 or γ= ∞, can be compared. The roles of thermal AdS and BTZ are also exchanged.
§.§.§ Defect CFT
Consider first b=1. By symmetry, we may restrict in this case to γ≥ 1. Figure <ref> presents the phase diagram in the (γ, τ) plane for a very light (κ = 0.03) and a very heavy (κ = 1.8) domain wall. For the light, nearly tensionless wall, the phase diagram approaches that of a homogeneous CFT. The low-T solution is single-center, and the Hawking-Page (HP) transition occurs at τ≈1. Light domain walls follow closely geodesic curves, and avoid the horizon in a large region of parameter space.
[One cancompute this phasediagram analytically by expanding inpowers ofλ.]Comparingthe left with the right figure <ref> shows that heavy walls facilitate the formation of the black hole and have a harder time staying outside.Indeed,in the right figure the HPtransition occurs atlower T, and thewarm phase recedes to L_1≫ L_2. Furthermore, both the cold and the warm solutionshavenow an additionalAdS restpoint. This confirmsthe intuition that heavier walls repelprobe massesmore strongly, and can shield them from falling insidethe black hole.Thetransitionthat sweepsaway this AdS restpoint is shown explicitlyin the phase diagramsof figure<ref>.Recall from the analysis of section <ref> that in the low-Tphase suchtransitionshappenfor λ < 1/ℓ_1 ⟹κ < b = 1.Furthermore, the transitionstakeplace at the critical mass ratiosμ_j^*,given by (<ref>). Since in coldsolutions the relation between μ and γ is one-to-one, the dark-light blue critical lines arelinesof constant γ. Thesestatements are in perfect agreement with the findings of fig.<ref>. Warmsolutions of type[H1,E1], respectively[E1,H1], exist fortensions λ >1/ℓ_1 ⟹κ > b, respectively λ >1/ℓ_2 ⟹κ > 1. In the case ofa defect,these two ranges coincide. The stable black hole forms in the larger of the two slices, i.e.for γ >1 in the j=1 slice. The sweeping transition occurs atthe critical mass ratioμ_2^* = (b^2-κ^2)^-1,which through (<ref>) and (<ref>) corresponds to a fixed value of τ_2. Since τ = τ_2(1+γ), the criticalorange-yellow line isa straight line in the (γ, τ) plane, in accordance again with the findingsoffig.<ref>. A noteworthy "empirical" fact is the rapidity of thesetransitions as a function of κ. For κ a little below or above the critical value the single-center cold, respectively warm phases almost disappear. Note also the cold-to-warm transitions are always nearτ≈ 1. This is the critical value for Hawking-Page transitions in the homogeneous case,as expected at large γ whenthej=1 slicecovers most of space. The critical curvesfor the cold-to-hot and warm-to-hot transitions also look linear in the above figures, but this is anillusion. Since the transitions are first order we must compare free energies. Equatingfor examplethe hot and cold free energies gives after some rearrangements (and with ℓ_1=ℓ_2:= ℓ) 2π^2 τ +2/ℓlog g_I = 1/2τ_1| M_1| L_1^2 (1+ μ/γ) . Now | M_1| L_1^2 can be expressed in terms of μ through (<ref>, <ref>),and μ in the cold phase is a function of γ. Furthermore log g_I/ℓ= 4π tanh^-1 (κ/2)isconstant, see (<ref>), and τ_1 = τ/(1+γ^-1). Thus (<ref>) can be written asa relation τ= τ_ hc(γ), and we have verified with a careful fit thatτ_ hc is not a linear function of γ. §.§ Non-degenerate vacuaFigure <ref> presents the phase diagramin thecase of non-degenerate AdS vacua,b=ℓ_2/ℓ_1 = c_2/c_1 = 3,andfordifferent values of the tension in the allowed range,κ∈ (2,4). Since there is no γ→γ^-1symmetry, γ here variesbetween0 to ∞. To avoid squeezing the γ∈ (0,1) region, weuse forhorizontal axisα:= γ - γ^-1.This is almost linear in the larger of γ or γ^-1, when eitherof theseis large,butthe region γ≈1 isdistorted compared to <ref> and <ref> of the previous section. The most notablenew feature inthese phase diagrams is the absence of a warm phase in the regionγ <1. This shows that it is impossible to keep the wall outside the black hole when thelatter forms on the false-vacuum side. 
From the perspective ofthe dualICFT, see section <ref>, the absence of [X,H1]-typesolutionsmeans that no interfaces, however heavy,cankeepCFT_1 in the confined phase ifCFT_2 (the theory with larger central charge)has already deconfined. We suspect that this is a feature ofthe thin-brane model which does not allow interfacesto be perfectly-reflecting <cit.>.Warm solutionswith the horizon in the pink slice appear to altogether disappearabove thecritical ratio of central charges b_c=3.[This critical value was also noticed inref. <cit.>, who also notethat multiple branes can evade the bound confirming the intuition that it is a feature specific tothin branes. As a matter of fact, although [X,H1] solutions do exist forb <3 as we show below, they have very large γ, outside the range of ournumerical plots, unless b is very close to 1.] The boundary conditions corresponding totopologiesof type[X,H1] are given by (<ref>). We plotted the right-hand side of the second condition (<ref>) for different values of λ andμ in their allowed range, and found no solution with positive τ_1for b>3. Analytic evidence for the existence of a strict b_c=3 bound can be found by considering the limit of a maximally isolating wall, λ≈λ_ max, andof a shrinking green slice μ̂→ -∞.In this limit, the right-hand side of (<ref>) can be computed in closed form with the result τ_1(μ̂)=π/√(-μ̂)(2 - √(1+ ℓ_2/ℓ_1))+subleading . We took X=E1 as dictated by theanalysis of sweeping transitions, see section <ref> and in particular eq. (<ref>). This limiting τ_1(μ̂) is negative for b>3,and positive for b<3 wherewarm [E1,H1] solutions do exist, as claimed.An interesting corollary is that end-of-the-world branes cannotavoid the horizon of a black hole, sincethe near-void limit,ℓ_1 ≪ℓ_2, is in the range that has no[X,H1]solutions. Indeed, in the models of doubly holographic evaporating black holes mentioned in sec.<ref>, the EOW brane does indeed always falls in the horizon. §.§ Unstable black holes The phase diagramsin figs.<ref>, <ref>, <ref> show the solution with thelowest free energy in variousregions of parameter space. Typically, this dominant phasecoexists withsolutions that describe unstable or metastable black holes which are ubiquitous in thethin-wall model.[For a similar discussion of deformed JT gravity see ref.<cit.>. Note that intheabsence of a domain wall,the only static black hole solutionof pure Einstein gravity in2+1 dimensionsisthe non-spinning BTZ black hole.]Figure<ref> shows the number ofblack hole solutions in the degenerate case,b=1, forsmall, intermediate and large wall tension, and in different regionsof the (τ, γ) parameter space. The axes are the same as in figs.<ref> and <ref> butthe range of γ is halved.At sufficiently high temperature the growing horizon captures the wall, and the only solution is the hot solution.We see however that in a large region of intermediate temperaturesthe hot solution coexists with two warm solutions.Finally, atverylow temperature the hot solution coexists withfourother black-hole solutions, two on either side of the wall. The dominant phasein this regionisvacuum, so theblack holesplay no role in the canonical ensemble. The hot solution exists almost everywhere, except when λ <λ_0 and τ = τ_2(1+γ) < τ_2^*(1+γ)with τ_2^* given by (<ref>). It has positive specific heat even when it is not the dominant phase. For warm black holes, on the other hand, thespecific heat can haveeither sign. 
One can seethissemi-analytically by focusing once againonour favorite near-critical region λ≈λ_0.A simple inspection of fig.<ref> shows thatin some range τ_2^* < τ_2 < τ_2^ max, so the hot solution coexists with twonearby warm solutions. At the maximum τ_2^ max,where dτ_2/dμ=0, the warmsolutions merge and then disappear. Since the black hole is in the j=1 slice, M_1= (2π T)^2 and their energy reads E_ [warm] =1/2(ℓ_1 M_1 L_1 + ℓ_2 M_2 L_2)=2π^2 T^2 L_2(ℓ_1γ+ ℓ_2 μ). Taking a derivative with respect to T withL_1, L_2kept fixed weobtaind/dT E_ [warm] = 2/TE_ [warm]+ 2π^2 T^2 L_2^ 2ℓ_2 d μ/dτ_2 . Near τ_2^ max the dominant contribution to this expression comes from the derivative d μ/dτ_2 which jumpsfrom -∞ to +∞. It follows thatthe warm black hole with the higher mass has negative specific heat, and should decay to its companion black hole either classically or in the quantum theory.[We have verified in several numerical examples that the black holes with negative specific heat are never the ones withlowest free energy, a conclusion similar to the one reached in deformed JT gravity in <cit.>.] It would be very interesting to calculate this decay process, but we leave this for future work. One last commentconcernstransitions from the double-center vacuum geometries of type[E1,E1], to warm solutions where the wall avoids the horizon.One can ask what side of the wall the black hole chooses. A natural guess is that itforms in the deepest of the two AdSwells. The relative depth is the ratio of blueshift factors at the two rest points,ℜ := √(g_tt|_r_1=0/g_tt|_r_2=0) =ℓ_2/ℓ_1 √(μ(γ)).One expects the black hole to form in the j=1 (green) slice ifℜ<1 and in the j=2 (red) slice ifℜ > 1. Our numerical plots confirmed in all cases this expectation. § CLOSING REMARKSWhile the thin brane model lends itself to an analytical analysis thanks to its simplicity, an important question is how much of this analysis will survive in top-down interface models, where the domain walls are typically thick. For instance, the order parameters of the Hawking-Page and sweeping transitions (the area of the horizon and the number of inertial observer rest-points), do not depend on the assumption of a thin wall, and should go through unscathed. The warm-to-hot transition, on the other hand, may be replaced by a smooth crossover, since there is no sharp criterion to decide whenever a thick wall enters the horizon or avoids it. From the field theory however, since this transition is related to the deconfinement transition of one of the CFTs, we have a well-defined order parameter in the shape of Polyakov loops. For this reason, we also expect such a phase transition to survive the upscale to the UV, although its interpretation from the bulk might be different.Nonetheless, generating and examining a full UV complete example of this model would certainly be very interesting. Supergravity solutions dual to ICFT exist <cit.>, and they usually involve a fibration of an AdS_D-1 space on internal manifold. As we move from one side of the internal manifold to the other, the geometry interpolates between two different AdS_D vacua. It would be very interesting to study such solutions at finite temperature, but the main obstacle lies in extending those solutions to finite temperature in a way that emulates our minimal model. There is a way to do so by promoting the fibered AdS geometry to a black hole geometry<cit.>; which remains a solution. 
However, this description is quite different from our minimal model: it does not describe a situation where a black hole can be located on one side of the wall; rather, the horizon is "fibered" along the internal manifold.

The question of computing the entanglement structure of the system in the various states is also of obvious interest, both for its possible connection to the sweeping transition and, more generally, to understand the applicability of the RT prescription in the presence of membranes, as was done in <cit.> for BCFT. We return to these questions in the last chapter. Finally, another direction would be the extension of the minimal model by the addition of matter, either in the bulk <cit.> or on the membrane. The choice of matter should be motivated by a specific question one would like to study, this being particularly relevant in the context of strongly coupled condensed-matter systems <cit.>.

CHAPTER: STEADY STATES OF HOLOGRAPHIC INTERFACES

This chapter is based on 2107.00965

In this chapter we consider again the same minimal ICFT model, but we attempt to study states which are driven out of equilibrium. Indeed, the holographic duality has mainly been exploited at, or near, thermal equilibrium, where a hydrodynamic description applies. For far-from-equilibrium processes our understanding is poorer (see <cit.> for a review). Although the semiclassical gravity description seems more tractable, highly distorted horizons raise a host of unsolved technical and conceptual issues. To make progress, simple analytically accessible models such as the minimal one can prove valuable.

We restrict here to the study of Non-Equilibrium Steady States (NESS), which are stationary states characterized by persistent currents. They are the simplest example of a far-from-equilibrium state, and they allow for an analytical treatment. In (1+1)-dimensional critical systems, where energy transport is ballistic, they are particularly simple thanks to the stringent constraints imposed by conformal symmetry <cit.>. In a pure CFT these stationary states are equilibrium states in disguise: they can be obtained from thermal states by boosting them. By including a minimal interface in the system we break this symmetry, and that allows us to obtain "true" stationary states.

The salient feature of the gravity dual of such a state is a highly deformed, non-Killing event horizon, which we determine analytically. It lies behind the apparent horizon, seemingly in contradiction with well-known theorems <cit.> stating that the opposite should be true. Other theorems also state that stationary non-Killing black holes should be excluded <cit.>. The way these solutions manage to evade all these restrictions is non-compactness: both at the contact point with the interface, and through the infinite extent of the system, which is necessary to sustain the stationary nature of the state.

Another important insight brought about by the shape of this horizon concerns the entropy production at the interface. Due to the scattering of incoming excitations, entanglement naturally forms between the reflected and transmitted fluids. As we will argue, the gravity dual suggests that the interface is a perfect scrambler, namely that the quantum fluids exit the scattering thermalized. Although the picture is suggestive, a definitive proof of this fact should go through the computation of entanglement entropies, with the (H)RT prescription.
This is surprisingly difficult in such spacetimes, and it is the topic of the next chapter.

This scrambling behaviour is reminiscent of flowing black funnels <cit.>, where a non-dynamical black hole acts as a source or sink of heat in the CFT. There are, however, important differences between the two setups. The non-back-reacting (1+1)-dimensional black hole is a spacetime boundary that can absorb or emit arbitrary amounts of energy and entropy. Conformal interfaces, on the other hand, conserve energy and have a finite-dimensional Hilbert space. So even though one could mimic their energy and entropy flows by a two-sided boundary black hole whose (disconnected) horizon consists of two points with appropriately tuned temperatures, the rationale, if any, behind such tuning is unclear. We will briefly comment on the gravity dual, which shows important differences, outlining that the two models are qualitatively different.

Throughout this chapter we use units in which 8π G =1.

§ THE BOOSTED BLACK STRING

Let us begin with the holographic dual of a simple stationary thermal state. As mentioned above, this state can be obtained simply by boosting an equilibrium thermal state, which introduces a steady current into the model. From the gravity point of view, this translates into a boost of the static BTZ geometry, introducing a spin J. The geometry of interest here is then the uncompactified version of (<ref>):

ds^2 = ℓ^2 dr^2/(r^2 - Mℓ^2 + J^2ℓ^2/4r^2) - (r^2 - Mℓ^2) dt^2 + r^2 dx^2 - Jℓ dx dt.

We denote by x ∈ℝ the uncompactified angle of (<ref>). This metric has a planar horizon which is "spinning" from left to right, namely it carries a current of energy flowing in the direction of J. It has an inner and an outer horizon r_± (see (<ref>)), and as already explained we need Mℓ≥ |J| to avoid naked singularities. We will occasionally use the shorthand already introduced in (<ref>):

h(r) = (r^2 - Mℓ^2 + J^2ℓ^2/4r^2) = (r^2 - r_+^2)(r^2 - r_-^2)/r^2.

Besides r_±, another special radius is r_ergo = √(M)ℓ ≥ r_+. It delimits the ergoregion, inside which no observer (powered by any engine) can stay at a fixed position x. To see this, consider any timelike vector field v_t^μ = (ṫ, ṙ, ẋ) describing an observer's trajectory. Then

v_t^μ v_tμ < 0 ⟺ Jℓ ẋ ṫ > r^2 ẋ^2 - (r^2 - Mℓ^2) ṫ^2 + ℓ^2 ṙ^2/(r^2 - Mℓ^2 + J^2ℓ^2/4r^2) > 0.

Crucially, we used r^2 - Mℓ^2 < 0 and h(r) > 0, which hold only in the ergoregion and outside the black hole. Together with ṫ > 0, which holds for future-directed timelike vectors outside the horizon, we find that Jẋ > 0, so the observer is indeed dragged along with the black hole. This is reminiscent of the Kerr black hole, with which the metric (<ref>) shares several of its properties (see <cit.> for a review). The outer horizon is a Killing horizon, while the inner one is a Cauchy horizon. The frame-dragging forces will force ingoing matter to cross the outer horizon at infinity, x ≈ Jℓ t/2r_+^2 →∞. This is of course a pathology of the coordinates, which are ill-adapted to the horizon. More appropriate are the Eddington-Finkelstein coordinates, which are essentially infalling with the observers and are defined through

dv = dt + ℓ dr/h(r) and dy = dx + Jℓ^2 dr/2r^2 h(r).

In these coordinates the metric is non-singular at the (future) horizon:

ds^2 = -h(r) dv^2 + 2ℓ dv dr + r^2 (dy - Jℓ/2r^2 dv)^2.

Notice that by the change (x,t) → (-x,-t) in (<ref>) we can obtain the metric which is regular at the past horizon. However, since we consider geometries that are presumably formed by some physical process, we will regard the past horizon as unphysical.
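As a concrete numerical illustration of these formulae, the short script below (a minimal sketch of our own; the parameter values and helper names are arbitrary choices, not taken from the text) computes the two horizon radii and the ergoradius, and checks the factorised form of h(r).

\begin{verbatim}
import numpy as np

# Illustrative parameters (arbitrary, only required to satisfy M*ell >= |J|):
ell, M, J = 1.0, 2.0, 1.5

# From h(r) = (r^2 - r_+^2)(r^2 - r_-^2)/r^2 one reads off
# r_pm^2 = ( M ell^2 +- sqrt(M^2 ell^4 - J^2 ell^2) ) / 2 .
disc    = np.sqrt(M**2 * ell**4 - J**2 * ell**2)
r_plus  = np.sqrt(0.5 * (M * ell**2 + disc))
r_minus = np.sqrt(0.5 * (M * ell**2 - disc))
r_ergo  = np.sqrt(M) * ell                    # ergoplane, where g_tt = 0

def h(r):
    """Blackening factor of the boosted black string."""
    return r**2 - M * ell**2 + J**2 * ell**2 / (4 * r**2)

# Check the factorised form of h(r) on a few sample radii.
r = np.linspace(1.1 * r_plus, 3.0 * r_plus, 7)
assert np.allclose(h(r), (r**2 - r_plus**2) * (r**2 - r_minus**2) / r**2)

print(f"r_- = {r_minus:.4f}, r_+ = {r_plus:.4f}, r_ergo = {r_ergo:.4f}")
# One finds r_- <= r_+ <= r_ergo, as stated in the text.
\end{verbatim}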
§.§ Dual CFT state

In the context of holography, (<ref>) describes a NESS of the dual CFT. This has been discussed in many places, see <cit.>. It can be confirmed explicitly by going to the Fefferman-Graham gauge (<ref>) through the following change of coordinates:

x^± = x ± t, r^2 = ℓ^2/z^2 (1 + z^2 ⟨T_–⟩/ℓ)(1 + z^2 ⟨T_++⟩/ℓ),

which brings (<ref>) to the form (<ref>). From it we read off the dual CFT state:

1/2 J = ⟨T_–⟩ - ⟨T_++⟩ and 1/2 Mℓ = ⟨T_–⟩ + ⟨T_++⟩.

It follows that the dual state has constant fluxes of energy in both directions, with a net flow ⟨T^tx⟩ = J/2. To abide by the standard notation for heat flow we will sometimes write J/2 = dQ/dt. Generic NESS are characterized by operators other than T_αβ, for instance by persistent U(1) currents. To describe them one must switch on non-trivial matter fields, and the above simple analysis must be modified. The vacuum solutions (<ref>) describe, nevertheless, a universal minimal class of NESS that exist in all holographic conformal theories.

There are many ways of preparing these universal NESS. For instance, one can couple the two endpoints of the system to heat baths so that left- and right-moving excitations thermalize at different temperatures Θ_±.[We use Θ for temperature to avoid confusion with the energy-momentum tensor. In gravity the heat baths can be replaced by non-dynamical boundary black holes, see below.] An alternative protocol (which avoids the complications of reservoirs and leads) is the partitioning protocol. Here one prepares two semi-infinite systems at temperatures Θ_±, and joins them at some initial time t=0.[To implement the partitioning protocol on the gravity side one should replace the constant ⟨T_++⟩ in (<ref>) by θ(x^-) Θ_-^2 + θ(-x^-) Θ_+^2, where θ(x) is the step function, and similarly for ⟨T_–⟩. This only reproduces the flow of energy for t>0, while for the discontinuity at t=0 one would most likely need external sources. Analyzing such non-stationary geometries is beyond our scope here.] The steady state will then form inside a linearly-expanding interval in the middle <cit.> (see also <cit.> for an explicit construction in 4D SYM). In both cases, after transients have died out one expects

⟨T_±±⟩ = π c/12 Θ_±^2 = π^2 ℓ Θ_±^2 ⟹ ⟨T^tx⟩ = π c/12 (Θ_-^2 - Θ_+^2),

where c = 12πℓ is the central charge of the CFT by the Brown-Henneaux formula (<ref>). Equation (<ref>) for the flow of heat is a (generalized) Stefan-Boltzmann law with Stefan-Boltzmann constant π c/12. Comparing (<ref>) to (<ref>) relates the temperatures Θ_± to the parameters M and J of the black string. This idealized CFT calculation is, of course, only relevant for systems in which the transport of energy is predominantly ballistic. Eq. (<ref>) implies in particular the existence of a quantum of thermal conductance, see the review <cit.> and references therein.

It is interesting to also consider the flow of entropy. This is illustrated in figure <ref>, which shows the entropy density s ≡ s^t in the three spacetime regions of the partitioning protocol. Inside the NESS region there is a constant flow of entropy from the hotter toward the colder side, s_± = ±π c/6 Θ_±. Here s_± are the entropy densities of the chiral fluids, defined through the first law δ⟨T_±±⟩ = Θ_± δ s_±. The passage of the right-moving shockwave increases the local entropy at a rate π c(Θ_- - Θ_+)/6, while the left-moving wave reduces it at an equal rate. Total entropy is therefore conserved, not surprisingly since there are no interactions in this simple conformal 2D fluid.
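The dictionary between the bath temperatures and the black-string parameters can be packaged in a few lines. The sketch below (our own illustration; the variable names and numerical values are arbitrary) converts a pair (Θ_-, Θ_+) into (M, J) and checks that the net heat flow equals J/2.

\begin{verbatim}
import numpy as np

# Bath temperatures of the left-/right-movers (illustrative values):
theta_m, theta_p = 0.30, 0.20        # Theta_-, Theta_+
ell = 1.0
c   = 12 * np.pi * ell               # Brown-Henneaux central charge (8 pi G = 1)

# Chiral fluxes  <T_pm pm> = (pi c / 12) Theta_pm^2 = pi^2 ell Theta_pm^2
T_mm = np.pi**2 * ell * theta_m**2   # <T_-->
T_pp = np.pi**2 * ell * theta_p**2   # <T_++>

# Black-string parameters of the dual geometry
J = 2 * (T_mm - T_pp)                # J/2     = <T_--> - <T_++>
M = 2 * (T_mm + T_pp) / ell          # M ell/2 = <T_--> + <T_++>

# Net heat flow (generalized Stefan-Boltzmann law) and chiral entropy densities
dQdt    = np.pi * c / 12 * (theta_m**2 - theta_p**2)
s_plus  =  np.pi * c / 6 * theta_p
s_minus = -np.pi * c / 6 * theta_m

assert np.isclose(dQdt, J / 2)       # <T^{tx}> = J/2
print(f"M = {M:.4f}, J = {J:.4f}, dQ/dt = {dQdt:.4f}, s_+ = {s_plus:.3f}, s_- = {s_minus:.3f}")
\end{verbatim}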
One can compute the entropy holographically with the help of the Hubeny-Rangamani-Ryu-Takayanagi formula <cit.>. For a boundary region of size Δ x the entanglement entropy reads (see sec. <ref> for a derivation)

S_q ent = c/6 log[ β_+β_-/π^2 ϵ^2 sinh(πΔ x/β_+) sinh(πΔ x/β_-) ],

where β_± = Θ_±^-1 and ϵ is a UV cutoff. From this one computes the entropy density in the steady state,

s_NESS = lim_Δ x→∞ S_q ent/Δ x = π c/6 (Θ_- + Θ_+) = 2π r_+.

The last equality, obtained with the help of (<ref>), (<ref>) and (<ref>), recasts s_NESS as the Bekenstein-Hawking entropy of the boosted black string (recall that our units are 8π G = ħ = 1). This agreement was one of the earliest tests <cit.> of the AdS/CFT correspondence.

§ NESS OF INTERFACES

Although formally out of equilibrium, the state of the previous section is a rather trivial example of a NESS. It can be obtained from the thermal state by a Lorentz boost, and is therefore a Gibbs state with a chemical potential for the (conserved) momentum in the x direction. More interesting steady states can be found when left- and right-moving excitations interact, for instance at impurities <cit.> or when the CFT lives in a non-trivial background metric <cit.>. Such interactions lead to long-range entanglement and decoherence, giving NESS that are not just thermal states in disguise.[Chiral separation also fails when the CFT is deformed by (ir)relevant interactions. The special case of the TT̅ deformation was studied, using both integrability and holography, in <cit.>. Interestingly, the persistent energy current takes again the form (<ref>) with a deformation-dependent Stefan-Boltzmann constant.] The case of a conformal defect, in particular, has been analyzed in ref.<cit.>. As explained in this reference, the heat current is still given by eq.(<ref>), but the Stefan-Boltzmann constant is multiplied by T, the energy-transmission coefficient of the defect (<ref>). The relevant setup is shown in figure <ref>. The fluids entering the NESS region from opposite directions are thermal at different temperatures Θ_1 ≠Θ_2. The difference, compared to the discussion of the previous section, is that the two half wires (j=1,2) need not be identical, or (even when they are) their junction is a scattering impurity where interactions between the currents can take place.

§.§ Energy currents

Consider R_j and T_j, the reflection and transmission coefficients for energy incident on the interface from the jth side (see (<ref>)). Then the energy currents in the NESS read[The currents are given in the folded picture, in which the interface is a boundary of the tensor-product theory CFT_1⊗CFT_2, and both incoming waves depend on x^-.]

⟨T_–^(1)⟩ = π c_1/12 Θ_1^2, ⟨T_++^(1)⟩ = R_1 π c_1/12 Θ_1^2 + T_2 π c_2/12 Θ_2^2,
⟨T_–^(2)⟩ = π c_2/12 Θ_2^2, ⟨T_++^(2)⟩ = T_1 π c_1/12 Θ_1^2 + R_2 π c_2/12 Θ_2^2,

where we simply took the incident excitations to be thermalized (by the reservoirs at infinity), while the outgoing currents result from both transmission and reflection of the incoming ones. Crucially, we used the fact that the energy-transport coefficients across a conformal interface in 2d are universal, i.e. independent of the nature of the incident excitations. As we have shown in sec.<ref>, this assumes that the Virasoro symmetry is not extended by extra spin-2 generators, which is true in our holographic model.
We have also used that the incoming and outgoing excitations do not interact away from the interface. Conservation of energy and the detailed-balance condition (which ensures that the heat flow stops when Θ_1=Θ_2) imply the following relations among the reflection and transmission coefficients:

R_j + T_j = 1 and c_1 T_1 = c_2 T_2.

Hence, only one of the four transport coefficients is independent. Without loss of generality we assume that c_2 ≥ c_1, i.e. that CFT_2 is the theory with more degrees of freedom. The average-null-energy condition requires 0 ≤ R_j, T_j ≤ 1, so from (<ref>) we conclude

0 ≤ T_2 ≤ c_1/c_2 or equivalently 1 ≥ R_2 ≥ 1 - c_1/c_2.

As noticed in <cit.>, reflection positivity of the Euclidean theory gives a weaker bound <cit.> than this Lorentzian bound. Note also that in the asymmetric case (c_2 strictly bigger than c_1) part of the energy incident from side 2 is necessarily reflected.

Let dQ/dt = ⟨T^(1) tx⟩ = -⟨T^(2) tx⟩ be the heat current across the interface. From eqs. (<ref>) and (<ref>) we find

dQ/dt = π/12 c_1 T_1 (Θ_1^2 - Θ_2^2).

Since in a unitary theory c_1 T_1 is non-negative, heat flows, as expected, from the hotter to the colder side. The heat flow only stops for perfectly-reflecting interfaces (T_1 = T_2 = 0), or when the two baths are at equal temperatures. For small temperature differences the heat conductance reads dQ/dt = πΘ/6 c_j T_j δΘ. The conductance per degree of freedom, πΘ/6, is thus multiplied by the transmission coefficient of the defect <cit.>. Note finally that the interface is subject to a radiation force given by the discontinuity of pressure,

F_rad = ⟨T^(1) xx⟩ - ⟨T^(2) xx⟩ = π/6 (c_1 R_1 Θ_1^2 - c_2 R_2 Θ_2^2),

where we used (<ref>) and (<ref>). The force is proportional to the reflection coefficients, as expected.

§.§ Entropy production

There is a crucial difference between the NESS of section <ref> and the NESS in the presence of the interface. In both cases the incoming fluids are in a thermal state. But while for a homogeneous wire they exit the system intact, in the presence of an interface they interact and become entangled. The state of the outgoing excitations therefore depends on the nature of these interface interactions.

Let us consider the entropy density of the outgoing fluids, defined as the von Neumann entropy density for an interval [x, x+Δ x]. We parametrize it by effective temperatures, so that the entropy currents read

s_-^(1) = -π c_1/6 Θ_1, s_+^(1) = π c_1/6 Θ_1^eff, s_-^(2) = -π c_2/6 Θ_2, s_+^(2) = π c_2/6 Θ_2^eff.

We stress that (<ref>) is just a parametrization: the outgoing fluids need not be in a thermal state. In principle the Θ_j^eff may vary as a function of x, but we expect them to approach constant values in the limit t ≫ |x| ≫Δ x →∞. Figure <ref> is a cartoon of the entropy-density profile s^t in the various spacetime regions of the partitioning protocol. Entanglement at the interface produces thermodynamic entropy that is carried away by the two shock waves. The total thermodynamic entropy on a full constant-time slice obeys

dS_tot/dt = π c_1/6 (Θ_1^eff - Θ_1) + π c_2/6 (Θ_2^eff - Θ_2) + dS_def/dt,

where S_def denotes the entropy of the interface. Since this is bounded by the logarithm of the g-factor, S_def cannot grow indefinitely, and the last term of (<ref>) can be neglected in a steady state.[Defects with an infinite-dimensional Hilbert space may evade this argument.
But in the holographic model studied in this paper, log g ∼ O(c_j) <cit.>, and the last term in (<ref>) can be safely neglected at leading semiclassical order.] The entanglement between outgoing excitations is encoded in a scattering matrix, which we may write schematically as

𝕊(ψ_1^in, ψ_2^in, ψ_def^in | ψ_1^out, ψ_2^out, ψ_def^out).

Here ψ_j^in/out are the incoming and outgoing excitations, and ψ_def^in/out is the state of the defect before/after the scattering. Strictly speaking, there is no genuine S-matrix in conformal field theory. What describes the conformal interface is a formal operator I, obtained by unfolding the associated boundary state <cit.>. The above 𝕊 is an appropriate Wick rotation of I, as explained in ref.<cit.>. The density matrix of the outgoing fluids depends a priori on the entire S-matrix, not just on the transport coefficients T_j and R_j. In the large-N limit we consider, however, these differences amount to quantum corrections, and our state is indeed characterized by ⟨T_ij⟩.

The second law of thermodynamics bounds the effective temperatures from below, since the entropy production (<ref>) cannot be negative. The Θ_j^eff are also bounded from above because the entropy density cannot exceed the microcanonical one, s = (π c u/3)^1/2, with u the energy density of the chiral fluid. Using (<ref>) and the detailed-balance condition this gives

Θ_1^eff ≤ √(R_1 Θ_1^2 + T_1 Θ_2^2) and Θ_2^eff ≤ √(R_2 Θ_2^2 + T_2 Θ_1^2).

The bounds are saturated by perfectly-reflecting or perfectly-transmitting interfaces, i.e. when either R_j=1 or T_j=1. This is trivial: in such cases there is no entanglement between the outgoing fluids. Partially reflecting/transmitting interfaces that saturate the bounds (<ref>) act as perfect scramblers. Their existence at weak coupling seems unlikely, but strongly-coupled holographic interfaces could be of this kind. We will later argue that the thin-brane holographic interfaces are perfect scramblers. This is supported by the fact (shown in section <ref>) that far from the brane the event horizon approaches the equilibrium BTZ horizons, and hence the outgoing chiral fluids are thermalized.

Any domain-wall solution interpolating between two BTZ geometries, with no other non-trivial asymptotic backgrounds, should likewise be dual to a NESS of a perfectly-scrambling interface. We suspect that many top-down solutions of this kind exist, but they are hard to find. Indeed, although many BPS domain walls are known in the supergravity literature, their finite-temperature counterparts are rare. The one example that we are aware of is the Janus AdS_3 black brane <cit.>. But even for this computationally-friendly example, the far-from-equilibrium stationary solutions are not known.

§ STATIONARY BRANES

To simplify the problem we will here resort to the more tractable thin-brane approximation, hoping that it captures some of the essential physics of the stationary states. This model is the one described in sec. <ref>, as well as the one studied in the related papers <cit.>.

§.§ General setup

Consider two BTZ metrics (<ref>) glued along a thin brane whose worldvolume is parametrized by τ and σ. Much like in the previous chapter, its embedding in the two coordinate patches (j=1,2) is given by six functions { r_j(τ,σ), t_j(τ,σ), x_j(τ,σ) }. Here we must allow a little more generality than in the static case, because of the loss of time-inversion symmetry in the stationary case. The most general ansatz such that the induced metric is τ-independent is of the form

x_j = x_j(σ), r_j = r_j(σ), t_j = τ + f_j(σ).
We will denote by h_ab the metric induced on the worldsheet of the membrane. Compared to the static case, in (<ref>) we allow for the additional functions f_j(σ) in the time coordinates of the membrane. We will see that this extension of the ansatz is necessary in order to find solutions. In principle, one could also multiply τ on the right-hand side by constants a_j^-1. But the metric (<ref>) is invariant under a rescaling of the coordinates r → a r, (t,x) → a^-1 (t,x), and of the parameters (M,J) → a^2 (M,J), so we may absorb the a_j into a redefinition of the parameters M_j, J_j. Hence, without loss of generality, we set a_j = 1.

Following the same convention as in chap.<ref>, we choose the parameter σ to be the redshift factor squared[This is a slight misnomer, since σ becomes negative in the ergoregion. Nonetheless, we will adopt this jargon introduced in chap.<ref>.] for a stationary observer,

σ = r_1^2 - M_1ℓ_1^2 = r_2^2 - M_2ℓ_2^2.

Eq. (<ref>) is one of the metric matching equations (<ref>). Of the remaining embedding functions, the sum f_1+f_2 is pure gauge (it can be absorbed by a reparametrization of τ), whereas the time delay across the wall, Δ t(σ) ≡ f_2(σ) - f_1(σ), is a physical quantity. This and the two functions x_j(σ) should be determined by solving the three remaining equations: (i) the rest of the metric matching conditions, equating h_τσ and h_σσ, and (ii) one of the (trace-reversed) Israel-Lanczos conditions (<ref>), which we recall here:

K^1_ab + K^2_ab = λ h_ab <ref>.

The conventions used are the same as in the previous chapter, namely the normal vector to the wall is outward pointing and λ is the brane tension.

§.§ Solution of the equations

In this section we solve the Israel-Lanczos equations. According to the convention chosen above, the solution is given in the `folded setup' where the interface is a conformal boundary for the product theory CFT_1⊗CFT_2. Unfolding side j amounts to sending x_j → -x_j and J_j → -J_j.

§ SOLVING THE THIN-BRANE EQUATIONS

From the form (<ref>) of the bulk metric and the embedding ansatz (<ref>) of a stationary brane, we derive the following continuity equations for the induced metric. In this section we write ĝ_ab for the induced metric instead of h_ab, to prevent any confusion with the function h(r):

ĝ_ττ = M_1ℓ_1^2 - r_1^2 = M_2ℓ_2^2 - r_2^2,
ĝ_τσ = (M_1ℓ_1^2 - r_1^2) f_1' - J_1ℓ_1/2 x_1' = (M_2ℓ_2^2 - r_2^2) f_2' - J_2ℓ_2/2 x_2',
ĝ_σσ = ℓ_1^2 r_1'^2/h_1(r_1) + r_1^2 x_1'^2 - J_1ℓ_1 x_1' f_1' + (M_1ℓ_1^2 - r_1^2) f_1'^2
     = ℓ_2^2 r_2'^2/h_2(r_2) + r_2^2 x_2'^2 - J_2ℓ_2 x_2' f_2' + (M_2ℓ_2^2 - r_2^2) f_2'^2.

The primes denote derivatives with respect to σ, and the function h(r) has been defined in (<ref>),

h(r) = r^2 - Mℓ^2 + J^2ℓ^2/4r^2 = 1/r^2 (r^2 - r_+^2)(r^2 - r_-^2). <ref>

Following sec.<ref> we choose the convenient parametrization σ = -ĝ_ττ, so that r_j^2 = σ + M_jℓ_j^2 and r_j' = 1/(2r_j).
This parametrization need not be one-to-one; it actually covers only half of the wall when the latter has a turning point. With this choice the ergoplane is located at r_j^2 = M_jℓ_j^2 ⟹ σ = 0, and the functions h_j can be written as

h_j(σ) = (σ^2 + σ M_jℓ_j^2 + J_j^2ℓ_j^2/4)/(σ + M_jℓ_j^2) = (σ - σ_+^Hj)(σ - σ_-^Hj)/(σ + M_jℓ_j^2),

where σ_±^Hj = -M_jℓ_j^2/2 ± 1/2 √(M_j^2ℓ_j^4 - J_j^2ℓ_j^2) are the locations of the horizons in the jth chart. From (<ref>)-(<ref>) one computes the determinant of the induced metric,

-det(ĝ) = σℓ_j^2/(4 r_j^2 h_j) + h_j r_j^2 x_j'^2.

Note that it does not depend on the time-delay functions f_j(σ), because these can be absorbed by the unit-Jacobian reparametrization τ̃ = τ + f_j(σ), σ̃ = σ. Eq.(<ref>) can be used to express the x_j' (up to a sign) in terms of det ĝ. A combination of (<ref>) and (<ref>) expresses, in turn, the time delay across the wall in terms of the x_j',

σ (f_2' - f_1') = 1/2 (J_1ℓ_1 x_1' - J_2ℓ_2 x_2').

To complete the calculation we therefore need to find det ĝ and then solve the equations (<ref>) for the x_j'. This is done with the help of the Israel-Lanczos conditions <cit.> (see sec.<ref> for more details), which express the discontinuity of the extrinsic curvature across the wall, (<ref>). After Mathematica-aided computations we arrive at the following expressions for the extrinsic curvature (see app. <ref>):

K_ττ = h r^2 x'/(ℓ√(|ĝ|)) and K_τσ = h r^2 x'/(σℓ√(|ĝ|)) ĝ_τσ + J√(|ĝ|)/(2σ),

where |ĝ| is a shorthand for |det(ĝ)|. The Israel-Lanczos equations (<ref>) thus read

1/√(|ĝ|) ( h_1 r_1^2 x_1'/ℓ_1 + h_2 r_2^2 x_2'/ℓ_2 ) = -λσ,
1/√(|ĝ|) ( h_1 r_1^2 x_1'/ℓ_1 + h_2 r_2^2 x_2'/ℓ_2 ) ĝ_τσ + √(|ĝ|)/2 (J_1+J_2) = -λσ ĝ_τσ.

These are compatible if and only if J_1 + J_2 = 0, which translates to energy conservation in the boundary CFT. We have checked that the third equation, [K_σσ] = -λ ĝ_σσ, is automatically obeyed and thus redundant.

§.§ The general solution

Squaring (<ref>) twice and using (<ref>) to eliminate the x_j'^2 leads to a quadratic equation for the determinant. This has a singular solution det(ĝ) = 0, and a non-pathological one,

-det(ĝ) = λ^2 σ^3 [ 4 h_1 h_2 r_1^2 r_2^2/ℓ_1^2ℓ_2^2 - ( h_1 r_1^2/ℓ_1^2 + h_2 r_2^2/ℓ_2^2 - λ^2σ^2 )^2 ]^-1.

Inserting the expressions for r_j(σ) and h_j(σ) leads, after some algebra, to

-det(ĝ) = λ^2 σ/(Aσ^2 + 2Bσ + C),

with coefficients

A = (λ_max^2 - λ^2)(λ^2 - λ_min^2), B = λ^2 (M_1+M_2) - λ_0^2 (M_1-M_2), C = -(M_1-M_2)^2 + λ^2 J_1^2.

The critical tensions in these expressions are

λ_min = |1/ℓ_1 - 1/ℓ_2|, λ_max = 1/ℓ_1 + 1/ℓ_2, λ_0 = √(λ_maxλ_min).

For a static wall, i.e. when J_1=J_2=0, the above formulae reduce, as they should, to the ones obtained in ref.<cit.>.[When comparing with this reference beware that it uses the (slightly confusing) notation ĝ_σσ ≡ g(σ), so that, since the metric is diagonal in the static case, det ĝ = -σ g(σ).] The only effect of the non-zero J_j is actually to shift the coefficient C in (<ref>). The roots of the quadratic polynomial in the denominator of (<ref>),

σ_± = (-B ± √(B^2 - AC))/A,

determine the behaviour of the solution. If σ_+ is either complex or negative, (part of) the brane worldvolume has det ĝ > 0 in the ergoregion, so it is spacelike and physically unacceptable. Acceptable solutions have σ_+ > 0 or σ_+ = 0, and describe walls that respectively avoid or enter the ergoregion, as explained in the main text, see section <ref>. The actual shape of the wall is found by inserting (<ref>) in (<ref>) and solving for x_j'^2.
After some rearrangements this gives

ϵ_1 x_1'/ℓ_1 = [ (λ^2+λ_0^2)σ + (M_1-M_2) ] / [ 2(σ + M_1ℓ_1^2 + J^2ℓ_1^2/4σ) √(Aσ(σ-σ_+)(σ-σ_-)) ],
ϵ_2 x_2'/ℓ_2 = [ (λ^2-λ_0^2)σ - (M_1-M_2) ] / [ 2(σ + M_2ℓ_2^2 + J^2ℓ_2^2/4σ) √(Aσ(σ-σ_+)(σ-σ_-)) ],

where ϵ_j = ± are signs. They are fixed by the linear equation (<ref>), with the result ϵ_j(σ) = -σ/|σ|. These signs agree with the known universal solution <cit.> near the AdS boundary, at σ→∞, and they ensure that walls entering the ergoregion have no kink. Expressing the denominators in terms of the horizon locations (<ref>) gives the equations (<ref>) and (<ref>) of the main text. It is worth noting that the tensionless (λ→0) limit of our solution is singular. On the one hand, extremising the brane action while ignoring its back-reaction gives a geodesic worldvolume; on the other hand, for λ=0 fluctuations of the string are unsuppressed. In fact, when λ is small the wall starts out as a geodesic near the AdS boundary but always departs significantly from it in the interior.

The results of sec. <ref> can be summarised as follows. First, from (<ref>),

r_j(σ) = √(σ + M_jℓ_j^2).

Secondly, we find a necessary and sufficient condition on the parameters for the existence of solutions:

J_1 = -J_2.

This ensures conservation of energy in the CFT, as seen from the holographic dictionary (<ref>). Thirdly, matching h_τσ from the two sides determines the time delay in terms of the embedding functions x_j,

Δ t' ≡ f_2' - f_1' = J_1/2σ ( ℓ_1 x_1' + ℓ_2 x_2' ),

where primes denote derivatives with respect to σ. What remains is thus to find the functions x_j(σ). To this end, we use the continuity of h_σσ and the ττ component of (<ref>). It is useful and convenient to first solve these two equations for the determinant of the induced metric, with the result

-det ĝ = λ^2 σ/(Aσ^2 + 2Bσ + C) = λ^2 σ/(A(σ-σ_+)(σ-σ_-)), where σ_± = (-B ± √(B^2 - AC))/A,

and the coefficients A, B, C read

A = (λ_max^2 - λ^2)(λ^2 - λ_min^2), B = λ^2(M_1+M_2) - λ_0^2(M_1-M_2), C = -(M_1-M_2)^2 + λ^2 J_1^2.

The three critical tensions entering the above coefficients have been defined in the previous chapter, sec. (<ref>),

<ref> λ_min = |1/ℓ_1 - 1/ℓ_2|, λ_max = 1/ℓ_1 + 1/ℓ_2, λ_0 = √(λ_maxλ_min).

Without loss of generality we assume, as earlier, that ℓ_1 ≤ℓ_2, so the absolute value in λ_min is superfluous. Note that the expressions (<ref>) to (<ref>) are the same as the ones for static branes, see (<ref>), except for the extra term λ^2 J_1^2 in the coefficient C. The determinant of the induced metric can be expressed in terms of x_j and σ in each chart, j=1 and j=2. It does not depend on the time-shift functions f_j, which can be absorbed by a reparametrization with unit Jacobian. Having already extracted det ĝ, one can now invert these relations to find the x_j',

x_1'/ℓ_1 = -sgn(σ) [ (λ^2+λ_0^2)σ^2 + (M_1-M_2)σ ] / [ 2(σ-σ_+^H1)(σ-σ_-^H1) √(Aσ(σ-σ_+)(σ-σ_-)) ],
x_2'/ℓ_2 = -sgn(σ) [ (λ^2-λ_0^2)σ^2 - (M_1-M_2)σ ] / [ 2(σ-σ_+^H2)(σ-σ_-^H2) √(Aσ(σ-σ_+)(σ-σ_-)) ],

where we denoted the horizon locations in the blueshift parametrization by

σ_±^Hj = -M_jℓ_j^2/2 ± 1/2 √(M_j^2ℓ_j^4 - J_j^2ℓ_j^2).

These points are where the outer and inner horizons of the jth BTZ metric intersect the domain wall. Eqs. (<ref>) to (<ref>) give the general stationary solution of the thin-brane equations for any Lagrangian parameters ℓ_j and λ, and geometric parameters M_j and J_1=-J_2. The Lagrangian parameters are part of the basic data of the interface CFT, while the geometric parameters determine the CFT state.
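As a quick numerical illustration of the root structure (a sketch of our own; the helper function and parameter values are not taken from the text, and whether a given choice of M_j, J_1 arises from an actual boundary state is a separate question), the snippet below evaluates A, B, C and the roots σ_± for sample parameters, distinguishing a wall with a turning point (σ_+ > 0, which requires (M_1-M_2)^2 > λ^2 J_1^2 so that C < 0) from one that can enter the ergoregion (σ_+ = 0, i.e. C = 0).

\begin{verbatim}
import numpy as np

def sigma_roots(ell1, ell2, lam, M1, M2, J1):
    """Coefficients A, B, C and the roots sigma_+/- of the stationary
    thin-brane solution (illustrative helper, conventions of the text)."""
    lam_min = abs(1.0/ell1 - 1.0/ell2)
    lam_max = 1.0/ell1 + 1.0/ell2
    lam0_sq = lam_min * lam_max
    assert lam_min < lam < lam_max          # allowed range of the tension
    A = (lam_max**2 - lam**2) * (lam**2 - lam_min**2)
    B = lam**2 * (M1 + M2) - lam0_sq * (M1 - M2)
    C = -(M1 - M2)**2 + lam**2 * J1**2
    disc = np.sqrt(B**2 - A * C)
    return (-B + disc)/A, (-B - disc)/A     # sigma_+, sigma_-

# Wall with a turning point (C < 0 => sigma_+ > 0): avoids the ergoregion.
print(sigma_roots(1.0, 1.2, 1.0, M1=1.5, M2=1.0, J1=0.2))

# Wall entering the ergoregion: C = 0, i.e. M1 - M2 = lam*J1, gives sigma_+ = 0.
lam, M2, J1 = 1.0, 1.0, 0.3
print(sigma_roots(1.0, 1.2, lam, M1=M2 + lam*J1, M2=M2, J1=J1))
\end{verbatim}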
When J_1=J_2=0, all these expressions reduce to the static solutions described in chap.<ref>.

§ INSIDE THE ERGOREGION

The qualitative behaviour of the domain wall is governed by the singularities of (<ref>, <ref>) as one moves inward from the AdS boundary at σ∼∞. In addition to the BTZ horizons at σ_±^Hj, other potential singularities arise at σ_± and at the entrance of the ergoregion, σ=0. From (<ref>) we see that the brane worldvolume would become spacelike beyond σ=0 if σ_± are both either negative or complex. To avoid such pathological behaviour one of the following two conditions must be met:

* σ_+ > 0: The singularity at σ_+ is in this case a turning point, and the wall does not extend to lower values of σ. Indeed, as seen from (<ref>) and (<ref>), the singularity at σ_+ is integrable.
* 0 = σ_+ > σ_-: In this case the worldvolume remains timelike as the wall enters the ergoregion. The reader can verify from eqs. (<ref>), (<ref>) and (<ref>) that the embedding near σ=0 is smooth.

These two possibilities are illustrated in figure <ref>. Branes entering the ergoregion are dual, as will become clear, to steady states of an isolated interface, while those that avoid the ergoregion are dual to steady states of an interface anti-interface pair. We will return to the second case in section <ref>; here we focus on the isolated interface.

The condition σ_+=0 implies C=0 and B≥0. Using (<ref>) and (<ref>), and the fact that the coefficient A is positive for tensions in the allowed range (λ_min < λ < λ_max), we obtain

M_1 - M_2 = ±λ J_1 = ∓λ J_2 and λ^2 (M_1+M_2) ≥ λ_0^2 (M_1-M_2).

Furthermore, cosmic censorship requires that ℓ_j M_j > |J_j| unless the bulk singularity at r_j=0 is excised (this is the case in the pink region of the left fig. <ref>). If none of the singularities is excised, the inequality in (<ref>) is automatically satisfied and hence redundant. With the help of the holographic dictionary (<ref>) one can translate the expression (<ref>) for M_1-M_2 into the language of ICFT. Since the incoming fluxes are thermal, T_–^(j) = π^2 ℓ_j Θ_j^2, and (<ref>) gives

M_j = 4π^2 Θ_j^2 - J_j/ℓ_j ⟹ M_1 - M_2 = 4π^2 (Θ_1^2 - Θ_2^2) - J_1 (1/ℓ_1 + 1/ℓ_2).

Combining with (<ref>) gives the heat-flow rate

J_1 = 2⟨T^(1) tx⟩ = 4π^2 [1/ℓ_1 + 1/ℓ_2 ±λ]^-1 (Θ_1^2 - Θ_2^2).

This agrees with the ICFT expression (<ref>) if we identify the transmission coefficients as follows (recall that c_j = 12πℓ_j):

T_j = 2/ℓ_j [1/ℓ_1 + 1/ℓ_2 ±λ]^-1.

It is gratifying to find that, for the choice of plus sign, (<ref>) are precisely the coefficients T_j computed in the linearized approximation in ref.<cit.>. In essence, this is a non-perturbative derivation of the transmission coefficients of the interface in the minimal model. The correct choice of sign will be justified in a minute.

Let us pause here to take stock of the situation. We found (i) that the dual of an isolated interface must correspond to a brane that enters the ergoregion (otherwise it turns back, and the dual contains two interfaces), and (ii) that the brane equations determine in this case the flow of heat in accordance with the CFT result of <cit.> and the transmission coefficients found in <cit.>.[The fact that our non-linear analysis agrees with the linearized-wave treatment of <cit.> is an indirect confirmation of the fact that the transport coefficients are universal.] To complete the story, we must make sure that once inside the ergoregion the brane does not come out again.
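The chain of identifications above can be checked numerically in a few lines. The sketch below (our own consistency check, with arbitrary parameter values) computes the transmission coefficients on the plus-sign branch, verifies detailed balance and the ANEC bound, and checks that the resulting BTZ masses satisfy M_1 - M_2 = λ J_1, i.e. the condition C = 0 for the brane to enter the ergoregion.

\begin{verbatim}
import numpy as np

# Interface data (illustrative values, with lam_min < lam < lam_max):
ell1, ell2, lam = 1.0, 1.5, 0.9
c1, c2 = 12*np.pi*ell1, 12*np.pi*ell2

# Transmission coefficients on the plus-sign branch
S  = 1/ell1 + 1/ell2 + lam
T1, T2 = (2/ell1)/S, (2/ell2)/S
assert np.isclose(c1*T1, c2*T2)              # detailed balance c_1 T_1 = c_2 T_2
assert 0.0 <= T2 <= c1/c2                    # ANEC bound on T_2

# Heat flow and BTZ masses for bath temperatures Theta_1 > Theta_2
th1, th2 = 0.40, 0.25
J1 = 4*np.pi**2*(th1**2 - th2**2)/S          # J_1 = 2 <T^(1)tx>
M1 = 4*np.pi**2*th1**2 - J1/ell1
M2 = 4*np.pi**2*th2**2 + J1/ell2             # using J_2 = -J_1

assert np.isclose(M1 - M2, lam*J1)           # C = 0: 'ticket of entry'
assert np.isclose(J1/2, np.pi*c1*T1/12*(th1**2 - th2**2))   # ICFT heat current
print(f"T1 = {T1:.3f}, T2 = {T2:.3f}, J1 = {J1:.4f}")
\end{verbatim}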
If it did, it would intersect the AdS boundary at a second point, so the solution would not be dual to an isolated interface as claimed. Inserting σ_+=0 in the embedding functions (<ref>, <ref>) we find

x_1'/ℓ_1 = -[ (λ^2+λ_0^2)σ + (M_1-M_2) ] / [ 2(σ-σ_+^H1)(σ-σ_-^H1) √(A(σ-σ_-)) ],
x_2'/ℓ_2 = -[ (λ^2-λ_0^2)σ - (M_1-M_2) ] / [ 2(σ-σ_+^H2)(σ-σ_-^H2) √(A(σ-σ_-)) ],

where the σ_±^Hj are given by (<ref>) and

σ_- = -2λ/A [ λ(M_1+M_2) ± 2λ_0^2 J_1 ].

As already said, the embedding is regular at σ=0, i.e. the brane enters the ergoregion smoothly. What it does next depends on which singularity it encounters first. If this were the square-root singularity at σ_-, the wall would turn around (just like it does for positive σ_+), exit the ergoregion and intersect the AdS boundary at another anchor point. This is the possibility that we want to exclude.

Consider for starters the simpler case ℓ_1 = ℓ_2 ≡ℓ. In this case λ_0=0 and A = λ^2 (4/ℓ^2 - λ^2), so (<ref>) reduces to

σ_- = -2ℓ^2 (M_1+M_2)/(4 - λ^2ℓ^2) ≤ -min(M_j) ℓ^2.

In the last step we used the fact that both M_j are positive, since otherwise the conical singularity at r_j=0 ⟺ σ = -M_jℓ^2 would be naked. What (<ref>) shows is that the putative turning point σ_- lies behind the bulk singularity in at least one of the two BTZ regions, where our solution cannot be extended. Thus this turning point is never reached. For general ℓ_1 ≠ℓ_2 a weaker statement is true, namely that σ_- is shielded by an inner horizon for at least one j. For a proof, we maximize σ_- with respect to the brane tension λ. We have performed this calculation with Mathematica, but do not find it useful to reproduce the details here. The key point for our purposes is that there are no solutions in which the brane enters the ergoregion, turns around before an inner horizon, and exits towards the AdS boundary. Since, as argued by Penrose <cit.>, Cauchy (inner) horizons are classically unstable,[For recent discussions of strong cosmic censorship in the BTZ black hole see <cit.>.] solutions in which the turning point lies behind one of them cannot be trusted. As such, the Cauchy horizon should effectively be viewed as the singularity of the black hole.

One last remark is in order concerning the induced brane metric h_ab. By redefining the worldvolume time, τ̃ = τ + Jℓ_1/2 ∫ x_1'(ς) dς/ς, we can bring this metric to the diagonal form

dŝ^2 = -σ dτ̃^2 + |det ĝ| dσ^2/σ, with det ĝ = λ^2/(A(σ_- - σ)).

The worldvolume is timelike for all σ > σ_-, as already advertised. More interestingly, the metric (felt by signals that propagate on the brane) is that of a two-dimensional black hole with horizon at the ergoplane σ=0. This lies outside the bulk horizons σ_+^Hj, in agreement with arguments showing that the causal structure is always set by the Einstein metric <cit.>. Similar remarks in a closely-related context were made before in ref.<cit.>. The brane-horizon (bH) temperature,

4πΘ_bH = (-det ĝ|_σ=0)^-1/2,

is intermediate between Θ_1 and Θ_2, as can be easily checked. For ℓ_1=ℓ_2, for example, one finds 2Θ_bH^2 = Θ_1^2 + Θ_2^2.
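This last relation is easy to verify numerically. The following sketch (our own check, for the case ℓ_1=ℓ_2 with arbitrary values of λ and of the bath temperatures) assembles σ_- and the induced determinant at the ergoplane and recovers 2Θ_bH^2 = Θ_1^2 + Θ_2^2.

\begin{verbatim}
import numpy as np

# Equal AdS radii on the two sides (illustrative values; 0 < lam < 2/ell):
ell, lam = 1.0, 0.7
th1, th2 = 0.50, 0.30                       # bath temperatures Theta_1, Theta_2

# For ell_1 = ell_2 the J-terms cancel, so M_1 + M_2 = 4 pi^2 (th1^2 + th2^2):
M_sum = 4*np.pi**2*(th1**2 + th2**2)

# lambda_0 = 0 here, A = lam^2 (4/ell^2 - lam^2), and
# sigma_- = -2 ell^2 (M_1 + M_2) / (4 - lam^2 ell^2).
A       = lam**2*(4/ell**2 - lam**2)
sigma_m = -2*ell**2*M_sum/(4 - lam**2*ell**2)

# -det(g) at the ergoplane sigma = 0 on the sigma_+ = 0 branch,
# and the brane-horizon temperature 4 pi Theta_bH = (-det g|_0)^(-1/2):
minus_detg_0 = lam**2/(A*(-sigma_m))
theta_bH = 1.0/(4*np.pi*np.sqrt(minus_detg_0))

assert np.isclose(2*theta_bH**2, th1**2 + th2**2)
print(f"Theta_bH = {theta_bH:.4f}")
\end{verbatim}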
§ THE NON-KILLING HORIZON

Since σ_- lies behind an inner horizon, the first singularities of the embedding functions (<ref>, <ref>) are at σ_+^Hj. A key feature of the non-static solutions is that these outer BTZ horizons, which are apparent horizons as will become clear, do not meet at the same point on the brane. For J_j ≠ 0 the following strict inequalities indeed hold:

σ_+^H1 > σ_+^H2 if M_1 > M_2, σ_+^H1 < σ_+^H2 if M_1 < M_2.

For small J_j these inequalities are manifest by Taylor expanding (<ref>),

σ_+^Hj = -J_j^2/M_j + O(J_j^4).

We show that they hold for all J_j in appendix <ref>. The meaning of these inequalities becomes clear if we use the holographic dictionary (<ref>), the energy currents (<ref>) and the detailed-balance condition (<ref>) to write the M_j as follows:

M_1 = 2π^2 [ Θ_1^2 (1+R_1) + Θ_2^2 (1-R_1) ], M_2 = 2π^2 [ Θ_1^2 (1-R_2) + Θ_2^2 (1+R_2) ].

Assuming 0 ≤ R_j ≤ 1, we see that the hotter side of the interface has the larger M_j. What (<ref>) therefore says is that the brane hits the BTZ horizon of the hotter side first.

§.§ The arrow of time

Assume for concreteness M_1 > M_2, the case M_2 > M_1 being similar.[Strictly speaking, we also require that the brane hits both outer horizons before the inner (Cauchy) horizons, since we cannot trust our classical solutions beyond the latter. As explained in appendix <ref>, this condition is automatic when M_2>M_1, but not when M_1>M_2, where it is possible for some range of parameters to have σ_+^H2 < σ_-^H1. Those specific solutions should be discarded, although we do not think that this difference has a deeper physical meaning.] From eq.(<ref>) we have M_1 = M_2 + λ|J_1|. We do not yet commit to the sign of J_1, nor to the sign in (<ref>), but the product of the two should be positive. From the holographic interpretation we expect J_1 to be positive (heat flows from the hot to the cold side), so the correct sign in (<ref>) is plus (to match the correct reflection/transmission coefficients), but we would like to understand how this condition arises on the gravity side.

Figure <ref> shows a sketch of the behaviour of the brane past the ergoplane. The vertical axis is parameterised by σ (increasing downwards), and the horizontal axes by the ingoing Eddington-Finkelstein coordinates y_j defined in (<ref>). These coordinates are regular at the future horizons, and reduce to the flat ICFT coordinates x_j at the AdS boundary. Therefore, they do not affect the CFT state; in other words, they are pure gauge and act trivially on the asymptotic boundary. Let us take a closer look at the wall embedding in Eddington-Finkelstein (EF) coordinates. From (<ref>, <ref>) and the identities r_j' = 1/(2r_j) we get

y_1' = ℓ_1/[2(σ-σ_+^H1)(σ-σ_-^H1)] [ J_1ℓ_1/(2√(σ+M_1ℓ_1^2)) - ((λ^2+λ_0^2)σ + λ|J_1|)/√(A(σ-σ_-)) ],
y_2' = ℓ_2/[2(σ-σ_+^H2)(σ-σ_-^H2)] [ J_2ℓ_2/(2√(σ+M_2ℓ_2^2)) - ((λ^2-λ_0^2)σ - λ|J_2|)/√(A(σ-σ_-)) ].

A little algebra shows that the square brackets in the above expressions vanish at the corresponding horizons σ=σ_+^Hj if J_1=-J_2>0. The functions y_j then present no singularity at the horizon. By contrast, if J_1=-J_2<0 these functions are singular: y_1 → +∞ at σ_+^H1, and y_2 → -∞ at σ_+^H2.

Consider first the case J_1>0. At first sight it seems remarkable that in Eddington-Finkelstein coordinates the singularity at the horizon σ=σ_+^Hj disappears and the brane enters the black hole smoothly. After all, the membrane depends on λ, J_1 and both M_j, while the Eddington-Finkelstein coordinate change depends only on one side's parameters, M_j, J_j. Indeed, there are seemingly "miraculous" cancellations at play at σ_+^Hj, so that the behaviour of the membrane at this location is independent of the tension.
This can be explained by remembering that the Eddington-Finkelstein coordinates are adapted to infalling observers. It appears that, close to the horizon, the shape of the membrane is independent of its tension and simply follows an infalling observer's trajectory. In the case J_1<0 this cancellation of course does not happen, and the membrane does not enter the future horizon. However, if we go to outgoing EF coordinates, we find that the membrane exits smoothly through the past horizon of the white hole.

Knowing this, we now understand why we find a pair of solutions dual to the stationary ICFT system. The first one, with J_1>0, generates a heat flow in the correct direction in the field theory, from hot to cold. In this case M_1=M_2+λ J_1, and hence the sign in the expression (<ref>) for the transmission coefficients is plus, in agreement with the result of ref.<cit.>. The other solution, with J_1<0, can be obtained by time reversal, which leaves the M_j unchanged. This provides the field-theory interpretation of the companion solution: it is the same stationary state, but time-reversed. Indeed, if one chooses the minus sign in the expressions (<ref>) (i.e. the "incorrect" transmission coefficients) and plugs them into (<ref>), while at the same time exchanging T_– ↔ T_++, one recovers the equations (<ref>), but now with the "correct" transmission and reflection coefficients. In the time-reversed stationary states the "incident" rays are the left-movers. Thus the minus sign of (<ref>) appears when one chooses the incorrect time direction for the stationary states and, as a result, misinterprets the scattering experiment. In the end, both solutions describe exactly the same situation. Choosing the time direction to be forward, we have to discard the time-reversed solution. In gravity, much like a white hole, which solves Einstein's equations but cannot be produced by gravitational collapse, we expect that no physical protocol can prepare the J_1<0 solution.

§.§ Event versus apparent horizon

Denote by H_1 and H_2 the horizons of the two BTZ regions of the stationary geometry, and by E_1 and E_2 their intersections with the brane worldvolume. We can foliate spacetime by Cauchy slices v_j = v̄ + ϵ_j(r_j,x_j), where v̄ is a uniform foliation parameter.[The non-trivial radial dependence in the definition of the Cauchy slice is necessary because constant-v_j curves are lightlike behind the jth horizon.] We use the same symbols for the projections of H_j and E_j on a Cauchy slice. Since simultaneous translations of the v_j are Killing isometries, the projections do not depend on v̄. Both H_1 and H_2 are local (or apparent) horizons, i.e. future-directed light rays can only traverse them in one direction. But it is clear from figure <ref> that H_1 cannot be part of the event horizon of the global spacetime. Indeed, after entering H_1, an observer moving to the right can traverse the [E_1,E_2] part of the wall, emerge outside H_2 in region 2, and from there continue her journey to the boundary. Such journeys are only forbidden if E_1=E_2, i.e. for the static equilibrium solutions.

Before proceeding, let us briefly describe an alternative route that was considered before arriving at the correct resolution, since we believe it might be interesting to some readers. In our initial attempt, we searched for a way to prevent an observer from making the trip depicted in white in (<ref>), keeping the apparent horizons as event horizons. A way to do that is to clip the membrane at E_1.
This is of course already not very satisfying on side 1, as the membrane terminates on the horizon, but we hoped there might be a way to justify it. On side 2 this is even harder to justify, as we would have a "dangling" membrane, and it seems that an observer could go around the membrane and cross it from the "wrong side", which gives a completely different picture of the bulk. This new problem can be resolved thanks to the ergosphere: no observer could make the trip behind E_1 from the pink side, because the dragging forces would prevent it. There would thus be an artificial "end of the world", as no observer could pass behind the membrane. Incidentally, this also worked as an explanation for the correct sign choice of the currents J_j, as inverting them spoils the setup. Ultimately, this explanation was discarded: there was no way to explain the abrupt ending of the membranes without introducing additional matter fields, and while the geometry was consistent for observers, the full spacetime geometry was quite strange, as one could go "around" the membrane along a spacelike trajectory. As we explain now, the real resolution is that the event horizon is deformed by the gluing of the spacetimes.

In order to analyze the problem systematically, we define an everywhere-timelike unit vector field that sets the arrow of time in the full spacetime,

t^μ∂_μ = ∂/∂ v_j + (h_j(r_j) - 1)/(2ℓ_j) ∂/∂ r_j + J_jℓ_j/(2r_j^2) ∂/∂ y_j in the jth region.

Using the metric (<ref>), the reader can check that t^μ t_μ = -1. To avoid cluttering the formulae we temporarily drop the index j. A future-directed null curve has a tangent vector ẋ^μ = (v̇, ṙ, ẏ), where

ẋ^μẋ_μ = 0 and ẋ^μ t_μ < 0.

The dots denote derivatives with respect to a parameter on the curve. Solving the conditions (<ref>) gives

ṙ = h/2ℓ v̇ - r^2/2ℓv̇ (ẏ - Jℓ/2r^2 v̇)^2 and v̇ > 0.

We see that the arrow of time is defined by increasing v, and that behind the horizon, where h(r) is negative, r is monotonically decreasing with time, meaning that a causal curve necessarily advances deeper into the black hole. This suffices to show that H_2 is part of the event horizon: an observer crossing it will never make it out to the boundary again, and will plummet towards the Cauchy horizon. Crossing the membrane does not help, as the observer then finds themselves behind H_1, where they continue to plummet. As already noted, the story differs in region 1, where an observer may follow a curve similar to the one depicted in (<ref>). Here the event horizon consists of a lightlike surface H̃_1 such that no future-directed causal curve starting from a point behind it can reach the [E_1,E_2] part of the wall. Indeed, once a causal curve enters behind σ^H1_+, it must necessarily sink in further because of (<ref>). Its only chance of escape is reaching the wall at [E_1,E_2], and if this is not possible it is indeed located behind the event horizon.

Clearly, the full event horizon H_event = H̃_1 ∪ H_2 must be continuous and lie behind the apparent horizon H_1 in region 1. This is illustrated in fig.<ref>. General theorems <cit.> actually show that a local horizon which is part of a trapped compact surface cannot lie outside the event horizon. But there is no clash with these theorems here, because H_1 fails to be compact, both at infinity and at E_1. To compute the projection of H̃_1 on a Cauchy slice, note first that it is a curve that passes through the point E_2. When a causal curve enters the apparent horizon σ^H1_+, it will inexorably sink deeper into it.
Its best chance of escaping the singularity is thus to maximize its speed toward the wall. The constraints on the tangent vector are least stringent when the curve is lightlike. Time-reversing this thought process, it is easy to convince oneself that the projection of the horizon is the curve that is everywhere tangent to the projection of the local light cone, as shown in fig.<ref>. Put differently, at every point on the curve we must minimize the angle between (the projection of) light-like vectors and the positive-y_1 axis. This guarantees that an observer starting behind H̃_1 will not be able to move fast enough towards the right in order to hit the wall before the point E_2. Parametrising the curve by y_1, using (<ref>) and dropping again for simplicity the j=1 index, we find

-dy/dr|_H̃_1 = max_{v_y>0} [ r^2/(2ℓ v_y) (1 - Jℓ/(2r^2) v_y)^2 - h/(2ℓ) v_y ]^-1,

where v_y ≡ dv/dy. The extrema of this expression are v_y = ± r/√(Mℓ^2 - r^2). Recall that we are interested in the region behind the BTZ horizon and in future-directed light rays, for which v is monotonically increasing (whereas r is monotonically decreasing). For null rays moving to the right we should thus pick the positive v_y extremum. Inserting it in (<ref>) gives the differential equation obeyed by H̃_1,

dy/dr|_H̃_1 = 2ℓ/( Jℓ - 2r√(Mℓ^2 - r^2) ).

The (projected) event horizon in region j=1 is the integral of (<ref>), with the constant of integration fixed so that the curve passes through E_2. Here now comes the important point. The reader can check that near the BTZ horizon, r = r_+^H1(1+ϵ) with ϵ≪1, the denominator in (<ref>) vanishes like ϵ. This is a non-integrable singularity, so y(r) diverges at r_+^H1, and hence H̃_1 approaches H_1 asymptotically, as announced in section <ref>. Presumably, the holographic entropy will therefore asymptote to that of the equilibrium BTZ horizon, given by (<ref>). This suggests that the chiral outgoing fluid is thermal, not only in the cold region 2 but also in the hotter region 1, at least far from the interface. But since the state of the outgoing fluid is the same on the full slice, we conclude that it is thermal everywhere. Many questions remain about this deformed event horizon, in particular whether it is indeed related to the entropy of the black hole. Near the interface, the HRT surface measuring the entanglement entropy will certainly be deformed, but it is unclear whether it will enter the apparent horizon or not. These questions will be explored in the next chapter.
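The logarithmic approach of H̃_1 to the apparent horizon can be exhibited with a short numerical integration of (<ref>). The sketch below (our own illustration: the parameter values are arbitrary, and the additive integration constant, fixed in the text by the point E_2, is simply set to zero) shows that the coordinate span of the projected event horizon grows without bound as r → r_+.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative side-1 parameters with J > 0 (not taken from the text):
ell, M, J = 1.0, 1.0, 0.6
disc    = np.sqrt(M**2*ell**4 - J**2*ell**2)
r_plus  = np.sqrt(0.5*(M*ell**2 + disc))
r_minus = np.sqrt(0.5*(M*ell**2 - disc))

def dydr(r):
    """Right-hand side of dy/dr = 2 ell / (J ell - 2 r sqrt(M ell^2 - r^2))."""
    return 2.0*ell/(J*ell - 2.0*r*np.sqrt(M*ell**2 - r**2))

# Integrate from a radius strictly between r_- and r_+ up to r_+(1 - eps);
# the integration constant (the point E_2) is set to zero here.
r0 = 0.5*(r_minus + r_plus)
for eps in (1e-2, 1e-3, 1e-4, 1e-5):
    y, _ = quad(dydr, r0, r_plus*(1.0 - eps), limit=400)
    print(f"eps = {eps:.0e}:  y(r_+(1-eps)) = {y:+.3f}")
# |y| grows like log(1/eps): the projected event horizon only approaches the
# apparent horizon r = r_+ asymptotically, as stated in the text.
\end{verbatim}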
§.§ Remark on flowing funnels

The fact that the outgoing fluxes are thermalized means that, as far as the entropy and energy flows are concerned, the interface behaves like a black cavity. The latter can be modeled by a non-dynamical, two-sided boundary black hole whose (disconnected) horizon consists of two points. To mimic the behaviour of the interface, the two horizon temperatures should be equal to the Θ_j^eff that saturate the bounds (<ref>). This is illustrated in figure <ref>. The precise shape of the flowing horizon(s) depends on the boundary black hole(s) and is not important for our purposes here. For completeness, following ref.<cit.>, we outline how to derive it in appendix <ref>. Like the thin-brane horizon of figure <ref>, it approaches the BTZ horizons at infinity but differs in the central region (notably with a delta-function peak in the entropy density at x=0, see app.<ref>). The key difference is, however, elsewhere. The two halves of the flowing funnel of figure <ref> are a priori separate solutions, with the temperatures Θ_j and Θ_j^eff chosen at will. To mimic the conformal interface one must impose continuity of the heat flow,

dQ_1/dt = π c_1/12 (Θ_1^2 - (Θ_1^eff)^2) = π c_2/12 ((Θ_2^eff)^2 - Θ_2^2) = dQ_2/dt.

This relates the horizon temperatures to each other and to those of the distant heat baths. It is, however, unclear whether any local condition behind the event horizons can impose the condition (<ref>).

§ PAIR OF INTERFACES

In this last section we consider a pair of identical interfaces between two theories, CFT_1 and CFT_2.[Our branes are not oriented, so there is no difference between an interface and an anti-interface. More general setups could include several different CFTs and triple junctions of branes, but such systems are beyond the scope of the present work.] The interface separation is Δ x. Let the theory that lives in the finite interval be CFT_2 and the theory outside be CFT_1 (recall that we are assuming ℓ_2 ≥ℓ_1). At thermal equilibrium the system undergoes a first-order phase transition at a critical temperature Θ_cr = b/Δ x, where b depends on the classical Lagrangian parameters λℓ_j <cit.>. Below Θ_cr the brane avoids the horizon and is connected, while above Θ_cr it breaks into two disjoint pieces that hit the singularity of the black hole separately. This is a variant of the Hawking-Page phase transition <cit.> that can be interpreted <cit.> as a deconfinement transition of CFT_2.

We would like to understand what happens when this system is coupled to reservoirs with slightly different temperatures Θ_± = Θ± dΘ at x=±∞. Because of the temperature gradient the branes are now stationary, but they retain the topology of their static ancestors. In the low-Θ phase the brane avoids the ergoregion (which is displaced infinitesimally from the horizon) and stays connected, while in the high-Θ phase it splits into two disjoint branes that enter the ergoregion and hit separately a Cauchy horizon or a bulk singularity. The two phases are illustrated in figure <ref>.

Consider the high-Θ phase first. The isolated-brane solution of sections <ref> and <ref> is here juxtaposed to a solution in which the roles of CFT_1 and CFT_2 are inverted. The mass parameter of the three BTZ regions decreases in the direction of heat flow, jumping by λ J across each brane. This is indeed the `ticket of entry' to the ergoregion, as explained in eq. (<ref>) and section <ref>. The total change of BTZ mass across the pair is the same as if the two branes had merged into a single one with twice the tension. Using the holographic dictionary (<ref>) and the fact that the incoming fluxes at x=±∞ are thermal with temperatures Θ_±, one indeed computes

high Θ: dQ/dt = π^2 ℓ_1/(1+λℓ_1) (Θ_-^2 - Θ_+^2) ≡ π^2 ℓ_1 T_pair (Θ_-^2 - Θ_+^2),

where the effective transmission coefficient T_pair is that of a CFT_1 defect whose dual brane has tension 2λ. Note in passing that this effective brane tension can exceed the upper bound (<ref>) above which an individual brane inflates, and that an array of widely-spaced branes can make the transmission coefficient arbitrarily small. The heat flow (<ref>) is what one would obtain from classical scatterers.[The argument grew out of a conversation with Giuseppe Policastro, who noticed that the tensions of two juxtaposed branes effectively add up in the calculation of ref.<cit.>.] To understand why, think of T_j and R_j as classical transmission and reflection probabilities for quasi-particles incident on the interface from side j.
The probability of passing through both interfaces is the sum of the probabilities of trajectories with any number of double reflections in between,

T_pair = T_1 (1 + R_2^2 + R_2^4 + ⋯) T_2 = T_1 T_2/(1 - R_2^2) = 1/(1 + ℓ_1λ),

where in the last step we used the holographic relations (<ref>). This gives precisely the result (<ref>), as advertised.

The low-Θ case is drastically different. The solution is now obtained by gluing a brane with a turning point (i.e. σ_+>0, see section <ref>) to its mirror image, so that the brane has reflection symmetry. The bulk metric, however, is not ℤ_2 symmetric, because in the mirror image we do not flip the sign of the BTZ `spin' J. This is required for the continuity of the dx dt component of the bulk metric. While it may seem that we are gluing the brane to its time-reversed counterpart, this is not the case here. The arguments that led to this conclusion in sec.<ref> relied on regularity at the horizon, which is absent here. Similarly, the argument based on the field-theory transmission coefficients is not applicable here, since there may be interference effects between the energy fluxes in the region between the two membranes. The BTZ mass is thus the same at x=±∞, while its value in the CFT_2 region depends on the interface separation Δ x. It follows from the holographic dictionary (<ref>) that the heat flow is in this case unobstructed,

low Θ: dQ/dt = π^2 ℓ_1 (Θ_-^2 - Θ_+^2),

i.e. the effective transmission coefficient is T_pair = 1. Superficially, it looks as if two branes with equal and opposite tensions have merged into a tensionless one. In reality, however, the above phenomenon is deeply quantum. What the calculation says is that when a characteristic thermal wavelength becomes larger than the interface separation, coherent scattering results in all incident energy being transmitted. This is all the more surprising since CFT_2 is in the confined phase, and one could have expected that fewer degrees of freedom are available to conduct heat. The microscopic mechanism behind this surprising phenomenon deserves to be studied further.

The above discussion stays valid for a finite temperature difference Θ_+ - Θ_-, but the dominant phase cannot in this case be found by comparing free energies. Nevertheless, as Δ x → 0 we expect from the dual ICFT that the interface anti-interface pair fuses into the trivial (identity) defect[In some range of parameters (λ≥λ_0, ℓ_2<3ℓ_1) the pair fuses into a non-trivial defect <cit.>, although this configuration is at best metastable.] <cit.>, whereas at very large Δ x the connected solution ceases to exist. A transition is therefore bound to occur between these extreme separations.

Let us comment finally on what happens if the interval theory is CFT_1, the theory with fewer degrees of freedom, and the outside theory is CFT_2. Here the low-temperature phase only exists for sufficiently large tension if c_1 < c_2 < 3c_1, and does not exist at all if c_2 > 3c_1 <cit.>. The (sparse) degrees of freedom of the interval theory in this latter case are always in the high-temperature phase, and there can be no quantum-coherent conduction of heat. Reassuringly, this includes the limit c_1/c_2 → 0, in which the CFT_1 interval is effectively void. Note also that in the low-temperature phase the wire can be compactified to a circle and the heat current can be sustained without external reservoirs. This is not possible in the high-temperature phase.
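The geometric-series resummation can be checked against the `double tension' result in a couple of lines; the sketch below (our own verification, with arbitrary parameter values) uses the single-interface coefficients of section <ref>.

\begin{verbatim}
import numpy as np

# Illustrative interface data (lam within the allowed range):
ell1, ell2, lam = 1.0, 1.4, 0.8
S  = 1/ell1 + 1/ell2 + lam
T1, T2 = (2/ell1)/S, (2/ell2)/S
R2 = 1.0 - T2

# Sum over double reflections vs. the effective 'tension 2*lam' defect:
T_pair_series = T1*T2/(1.0 - R2**2)
T_pair_brane  = 1.0/(1.0 + ell1*lam)
assert np.isclose(T_pair_series, T_pair_brane)
print(f"T_pair = {T_pair_series:.4f}")
\end{verbatim}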
Holography is a bridge between these two areas of research and has led to many new insights. Much remains however to be understood, and simple tractable models can help as testing grounds for new ideas. The holographic NESS of this paper are tractable thanks to several simplifying factors: 2d conformal symmetry, isolated impurities and the assumption of a thin brane. While the first two can be justified in (very) pure ballistic systems, the thin-brane approximation is an ad-hoc assumption of convenience. Extending these results to top-down dual pairs is an important step to validate them. Another obvious question concerns the structure of entanglement and the Hubeny-Rangamani-Ryu-Takayanagi curves <cit.> in the above steady states. While it is known that geodesics cannot probe the region behind equilibrium horizons <cit.>, they can reach behind both apparent and event horizons in time-dependent backgrounds, see e.g. <cit.>. In the framework of the fluid/gravity correspondence, the entropy current associated with the event horizon is a local functional of the boundary data <cit.>. It would be interesting to examine this question in the present far-from-equilibrium context. In the next chapter, we provide some partial results in this direction. Another interesting question is how the deconfinement transition of the interval CFT in section <ref> relates to the sudden jump in the thermal conductivity of the system. Last but not least, it would be nice to relate the production of entropy to the scattering matrix of microscopic interfaces, e.g. for the simplest free-field interfaces of <cit.>. Presumably, this is also a computation that could be completed holographically, with a slightly more complicated model.

CHAPTER: ENTANGLEMENT ENTROPY AND HOLOGRAPHIC INTERFACES

Based on unpublished work.

This final chapter is dedicated to the study of the entanglement entropy structure in the models that were constructed in the previous two chapters. There are several motivations for why this is an interesting quantity to study. First and foremost, it is in direct connection with the doubly holographic black hole evaporation models briefly described in sec.<ref>. Thus, if one wants to use this holographic model to test the Island formula, it is necessary to be able to compute RT surfaces. Indeed, this is what was studied in <cit.>. On a related note, the study of RT surfaces in such spacetimes is interesting in and of itself, because there generally are several competing RT surfaces. As we change the boundary interval, there will be discontinuous jumps in the RT surfaces, a sort of phase transition in the entanglement structure. In the context of the doubly-holographic models and the Page curve, this is related to the Page time, as is explained in sec.<ref>. From the point of view of the ICFT, such transitions might be related to some other property of the underlying state. For instance, one of our conjectures was that the presence (or not) of a center in one side of the ICFT dual (see <ref>) might be related to the crossing (or not) of the RT surface computing the von Neumann entropy of the associated CFT. This turned out to be incorrect, but the connection of RT surfaces with bulk reconstruction <cit.> suggests there should be a way to relate the sweeping transition to the entanglement structure of the ICFT. Another interesting direction was alluded to in chap.<ref>: the computation of RT surfaces in non-equilibrium solutions with deformed horizons.
Indeed, it is still an open question how the Bekenstein-Hawking formula is modified when we consider out-of-equilibrium horizons, which are non-Killing. Is it the event horizon or the apparent horizon that must appear in the formula <cit.>? In equilibrium situations, those two coincide, so the distinction cannot be made. Additionally, this may also allow us to understand and confirm the entropy production at the interface of the NESS state described in the previous chapter.

Finally, entanglement entropy computations allow one to compute the Quantum Null Energy Condition (QNEC) <cit.>. As we will explain in the main text, the QNEC is an inequality (which should hold in any QFT) relating derivatives of the entanglement entropy and the one-point function of the stress-energy tensor. As such, it is a powerful tool to determine whether simple holographic bottom-up models are at least consistent. For instance, the QNEC has been used to restrict quenches of CFTs <cit.>, by using the RT prescription in the dual bulk. Such a quench geometry is obtained in much the same way as the models of chap.<ref>, the difference being that the membrane is now spacelike instead of timelike. Whenever the QNEC is not obeyed in such a quench, it signals that the quench is probably unphysical. In the same vein, it would be interesting to compute the QNEC in the various geometries that were obtained in the previous chapters to check for consistency.

Since this chapter is composed of partial, incomplete and sometimes inconclusive results, it will necessarily seem more scattered and less polished than the previous ones. While there isn't a single clear-cut goal or thread but rather several different directions, we believe that the work we present here is valuable, and hope that it will be completed and "packaged" in the more standard form of a paper at some point. Throughout this chapter, we use units in which 8π G = 1.

§ GEODESICS IN ASYMPTOTICALLY ANTI-DE-SITTER SPACES

The main objects of study in this chapter are RT and HRT surfaces in 3-dimensional asymptotically AdS spacetimes. In fact, we can restrict to the study of spacetimes which are locally AdS, since the only matter content of our models is the membrane, and everywhere else the vacuum equations of motion hold. In three dimensions, finding RT curves amounts to computing spatial geodesics anchored at the asymptotic boundary. In this section, we compute and outline some properties of such geodesics, mainly in Poincaré space. Indeed, once the equations are solved in one coordinate system, we can obtain the geodesics in any locally AdS spacetime by exploiting diffeomorphisms between the different geometries.

Let us begin by recalling the Poincaré metric:

ds^2 = ℓ^2/z^2 (dw_+ dw_- + dz^2),

where w_- = x - t, w_+ = x + t are the lightcone coordinates. Consider the most generic spacelike geodesic, with initial point p_0 = (w_+^0, w_-^0, z_0) and tangent vector ṗ_0 = (ẇ_+^0, ẇ_-^0, ż_0).
We will consider WLOG affinely parametrized geodesics, namely : ṗ^μṗ_μ= ℓ^2/z^2(ẇ_+ ẇ_- +ż^2)=1 ,where this condition will hold not only at the initial point, but everywhere along the geodesic, so we omit the "0" indices.Denoting the affine parameter , the action that we need to minimize is simply :L= ∫ℓ^2/z^2(ẇ_+ẇ_-+ż^2)d .Using cyclicity of w_+, w_- we get two integration constants :K_± =ẇ_±ℓ/z^2 .Translations of the affine parameter give us the conservation of the tangent vector norm (<ref>) :K_-K_+z^2+ℓ ^2 ż^2/z^2=1 .Note that other symmetries like scale invariance (z→ω z, w_±→ω w_±) or boosts (w_+→ w_+, w_-→^-1w_-) do not provide additional integration constants.We can solve (<ref>) for z(). There are three cases which we must differentiate, K_- K_+>0, K_+K_-<0 and K_+K_-=0. These signs are simply determined by the initial vectors, since sign(K_- K_+)= sign(ẇ_-ẇ_+). Thus, in Poincaré coordinates, we will have three types of qualitatively different geodesics.§.§ Case K_-K_+>0We begin by the K_-K_+>0 case. Upon integrating (<ref>) : ±(/ℓ+C)= Arctanh(√(1-K_+K_- z^2))⇔ z()=√(1-tanh^2(/ℓ+C)/K_+K_-) .The ± signs denotes two branches of the solution which join at z=1/K_+K_-, for = -Cℓ. The C integration constant can be absorbed into a translation of the affine parameter, we set C=0 so that the "turning point" of the geodesic is always at =0. Then in (<ref>) the plus sign must be chosen for >0 and minus sign for <0. For this choice, the initial _0 is given as : _0=-ℓsign(ż_0)Arctanh(√(1-ℓ^2/z_0^2ẇ_+^0ẇ_-^0)) .Plugging (<ref>) into (<ref>) and performing the integration yields :w_± = 1/K_∓tanh(/ℓ)+G_± ,where G_± are two integration constants. They are easily fixed using the initial conditions, and the full solution is given by :z() =z_0^2/ℓ√(1-tanh^2(/ℓ)/ẇ_+^0ẇ_-^0)⇔ = ±ℓ√(1-z()^2/z_0^2ℓ^2/z_0^2ẇ_+^0ẇ_-^0) ,w_±() =tanh(/ℓ)-tanh(_0/ℓ)/ℓẇ_∓^0z_0^2+w_±^0 , _0= -ℓsign(ż_0)Arctanh(√(1-ℓ^2/z_0^2ẇ_+^0ẇ_-^0)) . These are the geodesics one usually thinks of in AdS space. At =±∞, they are anchored on the boundary of AdS. In the case where ẇ^0_+ =ẇ^0_-, they are semi-circles centered on the asymptotic boundary. By boosting them one can obtain the general shape of the geodesic.Note that while it would be nice to parametrize the geodesic by its initial point on the asymptotic boundary, this can't be done as they lie formally at infinity. Indeed, as we send z_0=→ 0, to satisfy the affine condition we need ẇ^0_±∝^2 and ż_0∝, so on the boundary the tangent vector is degenerate. §.§ Case K_-K_+<0We treat the case K_-K_+<0. The resolution of the equations is essentially the same, modulo the change tanh→ coth. Skipping the details, we have :z() =z_0^2/ℓ√(1-^2(/ℓ)/ẇ^0_+ẇ^0_-) ,w_±() =(/ℓ)-(_0/ℓ)/ℓẇ^0_∓z_0^2+w^0_± , _0= -ℓsign(ż_0)Arccoth(√(1-ℓ^2/z_0^2ẇ^0_+ẇ^0_-)). Note that now, the point =0 is not really a "turning point" anymore since as we approach it all the coordinates diverge as 1/. For the two signs of , we get two disconnected solutions, with one end in the Poincaré horizon z=∞ and the other on the boundary z=0. Naturally, the change → - is equivalent to a reversal of the initial tangent vector ẇ_±^0→ -ẇ_±^0,along with a translation of the initial point of -2(_0/ℓ)z_0^2/ℓẇ_∓^0. See fig.<ref> for a sketch of the geodesic projected to the t=cst plane (note that there are no geodesics of this type that stay in this plane, because ṫ_0≠0 necessarily). These geodesic seem very strange, but this is due to the coordinate system we chose. 
Indeed, the Poincaré horizon is a coordinate singularity, and if we were to map these geodesics to global coordinates, we would see that they are a portion of a geodesic that is doubly anchored at the boundary. Nonetheless, for the purposes of the RT prescription, the coordinate system in which we look for the geodesics will of course matter, since it will depend on the state of the CFT.§.§ Case K_-K_+=0The last case to consider is of "measure zero" in the set of initial conditions, the case K_- K_+=0. It includes in particular geodesics that are shot "straight" toward the bulk interior.Solving (<ref>) in this case gives us z=e^ signż_0/ℓ ,where we absorbed the integration constant into a translation of .The equation (<ref>) is again readily solved, regrouping all the equations we obtain :z=exp( sign(ż_0)/ℓ) ,w_± = ẇ^0_±ℓ/2z_0^2(e^2 sign(ż_0)/ℓ-e^2_0/ℓ)+w^0_± , _0= ℓsign(ż_0)ln(z_0) .Of course, in (<ref>), one of the ẇ^0_± must be equal to zero if the equations are to be satisfied.§ SIMPLE EXAMPLESLet us apply the formulas we have derived above to some standard examples. Consider an interval of length a on the boundary of Poincaré space. The dual CFT state is simply the vacuum on ℝ^2, so computing the entanglement entropy of the interval should reproduce the classic result by Calabrese and Cardy <cit.>.Consider the initial point to be (t=0, x=0, z=) on the boundary, where we introduced the IR cutoff . Since the geodesics exhibited in the previous section are parametrized in terms of the initial tangent vector ṗ^0 and not the final point, we must invert some relations to find the correct values for which the geodesic ends on (t=0,x=a,z=0)[In poincaré space, it is possible to invert the relation generally and to give the geodesics as parametrized by initial and final points. We don't find it useful to reproduce it here, as it is more cumbersome than the parametrization we exhibited and, while it can be convenient, we won't really need it for our purposes.].We already know ẇ_+ẇ_->0 in order to have a geodesic connecting two points on the boundary. The endpoint is located at z=, at =-_0.From the equation of w_±(-_0) (<ref>), we find : ẇ^0_+=ẇ^0_-≡ẇ_0=/ℓ√(a^2/4^2+1) .What remains is to compute the length of the geodesic. Thankfully, in the affine parametrization this is simply given by the difference in initial and final affine parameters:L = -2_0 = 2ℓ Arctanh(ẇ_0ℓ a/2^2)=2ℓ Arctanh(a/√(a^2+4^2)) .To obtain the entanglement entropy, all we have to do is divide by 4G=1/2π and take the limit → 0. We obtain at leading order :S=2π L = 4πℓln(a/+O(^2))=c/3ln(a/)+O(^2) ,which correctly reproduces the expected result<cit.>. Let us illustrate a slightly more complicated example, which will be crucial for the methods of computation we will use in the ICFT setups. We consider now the spinning string geometry, the metric (<ref>). In this case, spacetime is only stationary, but not static. The problem thus becomes immediately more complex; instead of studying curves restricted to a 2-dimensional Cauchy slice, we must allow for generic spatial geodesic in the full 3D geometry. Furthermore, one can try to solve the geodesic equations for the metric (<ref>), and it is feasible but a much harder problem.Instead, we exploit the change of coordinate to bring us back to the Poincaré geometry. To find the diffeomorphism relating the two coordinate systems one can for instance equate the two parametrizations of the AdS hyperboloid described in sec.<ref>, and solve the resulting equations. 
We find (=sign(J)):w_+= √(r^2-r_+^2/r^2-r_-^2)exp((r_+- r_-)(x+t)/ℓ) ,w_-= √(r^2-r_+^2/r^2-r_-^2)exp((r_++ r_-)(x-t)/ℓ) ,z= √(r_+^2-r_-^2/r^2-r_-^2)exp( x r_+-tr_-/ℓ) . Of course, (<ref>) is far from unique, as we can apply any isometry of Poincaré and still obtain a valid change of coordinates. Note in passing that this coordinate change covers only a portion of Poincaré space, since w_+>0∩ w_->0, and only a portion of the spinning string, as the coordinate change becomes singular at the horizon r=r_+[The outer horizon is mapped to w_-=0∪ w_+=0. A more careful analysis would show that this surface is composed in fact of two horizons, the future, and past ones. If one extends to the rest of the Poincaré spacetime, we would obtain the two interior regions, as well as another exterior.]. There are other similar changes of coordinates that map the region r_-<r<r_+ and 0<r<r_-, but we won't be needing them here.Consider now the boundary region in stationary CFT delimited by two points (t=0,r=1/,x=0) and (t=0,r=1/,x=a). WLOG we take =1, namely J>0. Under the change of coordinates (<ref>), these are mapped to p_1= (w_+=1,w_-=1, z= √(r_+^2-r_-^2)) ,p_2= (w_+=exp(r_+-r_-/ℓa),w_-=exp(r_++r_-/ℓa), z= √(r_+^2-r_-^2)exp(xr_+/ℓ)) ,where we kept only the leading order in . We will write p_i^b for the corresponding points on the 2D-boundary, with the z-coordinate excised.Note that we do not care about the specific shape of the boundary interval we select between the two points. Indeed, any spacelike curve between p_1 and p_2 will have the same entanglement entropy, because they all lie in the same "causal diamond"[The causal diamond of a portion of a Cauchy slice A is the region that is in causal contact uniquely with A. For instance, if A is the interval between two points x, y, the causal diamond is determined by shooting lightrays from x and y, and selecting the enclosed region. Because points in this region can only be affected by A, they can be expressed as a unitary transformation applied to A. See fig.<ref>. ]. Any two of these curve houses states which are related by a unitary transformation, and thus have the same entanglement entropy, see sec.<ref>.We can exploit this fact to simplify the problem even further. Indeed using the points (<ref>), we would still need to apply the HRT prescription, as they do not lie on a preferred Cauchy slice. Then, we use the isometries of Poincaré to bring these two points on a constant time slice. In this case, we can begin by a x-translation to bring p_1 to the origin. Then, we perform a boost to bring p_2 on the same t=0 Cauchy slice. Since these are isometries of Minkowski on the boundary, we do not need to perform them explicitly, as we now that the norm |p^b_1-p^b_2| will be conserved. We find :|p_2^b-p_1^b|^2 = 4exp(r_+ a/ℓ)sinh(r_+-r_-/2ℓa) sinh(r_++r_-/2ℓa)≡a . In this way, we successfully reduced the problem to the same RT computation as previously with an interval of size a. There is one difference, and that is in the UV cutoffs which are now different at the two endpoints. Recycling (<ref>) and computing the correct _f given the different cutoffs, we obtain :S=2π L=2π(_f-_0)=c/6log( 4/^2 (r_+^2-r_-^2) sinh(r_+-r_-/2ℓa) sinh(r_++r_-/2ℓa)) ,which correctly recovers the expected entanglement entropy, see (<ref>). With this method, inverting (<ref>), it is also possible to obtain the equation of the HRT surface in the original coordinate system. 
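As a quick sanity check of this procedure, the two routes can be compared numerically: the closed formula above versus mapping the endpoints to Poincaré coordinates with (<ref>) and applying the vacuum result with point-dependent cutoffs. A minimal sketch in Python follows; the parameter values are purely illustrative, and the central charge c is treated as an independent input rather than derived from ℓ.

import numpy as np

# Illustrative parameters (not from the text): AdS radius, central charge,
# horizon radii r_+ > r_- > 0, interval length a, UV cutoff eps (r = 1/eps).
ell, c = 1.0, 12.0
rp, rm = 1.3, 0.4
a, eps = 2.0, 1e-6

# Route 1: closed formula for the interval entropy in the spinning-string state.
S_direct = (c/6)*np.log(4/(eps**2*(rp**2 - rm**2))
                        * np.sinh((rp - rm)*a/(2*ell))
                        * np.sinh((rp + rm)*a/(2*ell)))

# Route 2: map the endpoints to Poincare lightcone coordinates (leading order
# in eps), then use the vacuum formula S = (c/6) log(d^2/(z1*z2)) with the
# boost-invariant boundary separation d and the point-dependent cutoffs z_i.
wp1, wm1, z1 = 1.0, 1.0, eps*np.sqrt(rp**2 - rm**2)
wp2, wm2 = np.exp((rp - rm)*a/ell), np.exp((rp + rm)*a/ell)
z2 = eps*np.sqrt(rp**2 - rm**2)*np.exp(rp*a/ell)
d2 = (wp2 - wp1)*(wm2 - wm1)          # Minkowski norm of the separation
S_mapped = (c/6)*np.log(d2/(z1*z2))

print(S_direct, S_mapped)             # the two values agree

The agreement is exact at leading order in the cutoff, as it must be, since the second route is just the coordinate-transformed version of the first.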
The spacelike geodesic obtained in this way does not, in particular, remain in a constant-time plane, as we should expect for a non-static geometry. A sketch of the HRT surface, projected on the t=0 plane, can be found in fig. <ref>. Notice that the HRT surface cannot breach the horizon, and as a→∞ it approaches it more and more, which gives the Bekenstein-Hawking formula for the coarse-grained entropy (see (<ref>)).

§ ENTANGLEMENT STRUCTURE OF STATIC ICFT STATES

In this section, we apply the lessons we have just learned to compute the entanglement structure of static ICFT states on ℝ^2. We will also mention possible applications to the compact geometries of chap.<ref>.

§.§ RT surfaces with interfaces

In this section, we wish to apply the RT prescription to spacetimes containing a gravitating domain wall. As the wall connects two separate bulks, we need a prescription to continue the RT surface across the wall. For instance, in the case of end-of-the-world branes which appear in Island computations <cit.>, the prescription states that the RT surface should end perpendicularly on the End of the World brane[This is only the case when the EOW brane is described by the simplest Lagrangian, involving only its tension. If additional fields live on it, the prescription will change.]. We will show that this can be recovered as a special limit of the more general interface prescription. Consider again the action determining a spacelike geodesic, but on a spacetime which is composed of two pieces. Each piece contains the domain wall, which we parametrise as x^μ_mi(ξ^a), where i denotes its embedding on side i. The geodesic is parametrised by , and crosses the membrane at ^*:

L = ∫__0^^* √(g^1_μν ẋ^μ ẋ^ν) d + ∫_^*^_f √(g^2_μν ẋ^μ ẋ^ν) d = ∫__0^^* L_1(x,ẋ) d + ∫_^*^_f L_2(x,ẋ) d.

In what follows, we omit the second half of the action as it is treated similarly. In the variation of (<ref>), the initial and final points are fixed, δx(_0) = δx(_f) = 0. On the other hand, the variation at ^* receives a softer constraint; it must simply remain on the membrane. Using the parametrization of the membrane, we can write it as

δx^ρ(^*) = ∂x^ρ_m/∂ξ^a δξ^a,

where the δξ^a are arbitrary variations. The variation of (<ref>) then reads

δL = ∫__0^^* δL_1/δx^() δx^() + ∂L_1/∂ẋ^ρ δx^ρ(^*) + 1↔2,

where we used the shorthand δL_1/δx^() = ∂L_1/∂x^ - d/d ∂L_1/∂ẋ^, which is the term that gives the Euler-Lagrange equations. The second term in (<ref>) appears from an integration by parts. Thus, by the arbitrariness of the variation δx^(), we obtain the geodesic equations, which must be obeyed on both sides. The "crossing constraints" are obtained by requiring that the boundary term vanishes. They are

∂L_1/∂ẋ^ρ ∂x^ρ_m/∂ξ^a = ∂L_2/∂ẋ^ρ ∂x^ρ_m/∂ξ^a ⇔ t^1 ρ_a ẋ^_1 g_1 ρ/√(g_1μν ẋ_1^μ ẋ_1^ν) = t^2 ρ_a ẋ^_2 g_2 ρ/√(g_2μν ẋ_2^μ ẋ_2^ν),

where we labeled by i the curves lying on side i, and introduced the notation t^i ρ_a = ∂x^ρ_m/∂ξ^a for the tangent vectors to the membrane. The constraints (<ref>) tell us that the projection of the membrane tangent vectors on the geodesic should be conserved while crossing. One can get a bit more intuition on the meaning of this constraint by considering a membrane with only spacelike directions. In this case, the tangent vectors t^i μ_a are all spacelike, and using the metric matching condition (<ref>), we have that the scalar products t^i μ_a t^i ν_b g_iμν = h_ab are equal on both sides, since they are the membrane's induced metric.
This allows us to divide both sides of (<ref>) by the norm of the tangent vectors. The resulting equations then equate the angles of the geodesic with the membrane:

∢(t^1_a, ẋ_1) = ∢(t^2_a, ẋ_2).

Note that the condition (<ref>) can only be interpreted as (<ref>) in the case where the metric matching is satisfied. Indeed, there is an intuitive way of understanding why this is the case. The metric matching condition is essentially a continuity equation for the metric. This should imply that the Christoffel symbols are at most step-wise discontinuous. Then, by the geodesic equation,

ẍ^μ = -Γ^μ_νρ ẋ^ν ẋ^ρ,

integrating we find that ẋ^μ should be continuous across the interface. The precise way this is realized is simply (<ref>). In the case where some directions of the membrane are timelike, the norm squared of the tangent vectors will be negative. We could still divide both sides by the absolute value of the norm, but the expression that we obtain in this way is difficult to visualize as an "angle".

§.§ Vacuum ICFT

We are now ready to compute RT surfaces in setups containing a gravitating wall. We consider in this section the joining of two CFTs in their vacuum state, on the infinite line. The solution is described in sec.<ref>; we rewrite here the crucial formulas to set slightly different conventions. The dual metrics can be given in Poincaré coordinates,

ds^2_j = ℓ_j^2/z_j^2 (-dt^2 + dx_j^2 + dz_j^2),

and the membrane equation is very simple in these coordinates:

x_j(z_j) = -tan(ψ_j) z_j,

where the solution is given in the folded picture, and the convention is to keep the part of spacetime that lies on the "increasing x" side of the membrane (for convenience we inverted here the convention with respect to previous chapters). The identification is done by setting ℓ_1/z_1 = ℓ_2/z_2 ⇔ cos(ψ_1)/z_1 = cos(ψ_2)/z_2 along the membrane. Remember that in the minimal model, the only parameters available are the AdS radii ℓ_i and the tension λ of the membrane, as here the state is fixed to the vacuum. From the gluing equations (<ref>), we can trade ℓ_2/ℓ_1 and λ for the angles ψ_i. The WLOG assumption ℓ_1 ≤ ℓ_2 as well as the Israel equations then impose

0 < ψ_1 ≤ π/2, -ψ_1 < ψ_2 < ψ_1.

So, for any two angles in the range (<ref>), we have an ICFT dual for definite c_2/c_1 and membrane tension λ. The simplicity of the solution in this case allows us to explicitly construct a coordinate system unifying the two spacetime pieces <cit.>. Then, the full solution can be expressed in a single coordinate chart, see fig.<ref>. We will see that in this picture, crossing RT surfaces can be obtained by a beautiful geometric construction, which is due to <cit.>. In the following section, we reproduce the results already obtained there, and we push the analysis of the RT surfaces into slightly more detail than the aforementioned paper.

§.§ Intervals containing the interface

We consider first the application of the RT prescription to boundary intervals containing the interface. Consider two points on the boundary at equal times, located on opposite sides of the interface, p_1 = (τ_1=0, x_1=σ_1) and p_2 = (τ_2=0, x_2=σ_2) (in the folded picture). Because the geometry is static, we can restrict ourselves to a constant-time Cauchy slice to search for the RT surface. As mentioned in sec.<ref>, spacelike geodesics confined to the Cauchy slice are simply semi-circles whose centers lie on the AdS boundary. The sought-out RT surface will be composed of circle arcs meeting on the interface.
To satisfy the crossing constraints (<ref>), we need to appropriately choose the radii of the two geodesics so that the circles are tangent on the membrane (such that their incident angle is continuous). As first noticed in <cit.>, this problem can be solved by a simple Euclidean geometry construction, outlined in fig.<ref> The only caveat of the construction of fig.<ref> is that the RT surface is parametrized in terms of O_1=(,0) and φ, instead of the boundary points _1 and _2. By repeated use of the law of sines, we obtain the following formulas (see app.<ref> for the detailed derivation) : _2= (-)(cos(ψ_2)/cos()-1) , _1= (-)sin(+ψ_2)/sin(-ψ_1)(1+cos(ψ_1)/cos()) , μ = _2/_1=sin(-ψ_1)(cos(ψ_2)/cos()-1)/sin(+ψ_2)(cos(ψ_1)/cos()+1) ,where we defined α=φ-ψ_2 simply because it simplifies the look of the expressions. There are some constraints onand α to generate a well-defined RT surface (for instance, to avoid values for which the construction yields an RT surface traversing on the wrong side of the asymptotic boundary). The conditions read : <0 →ψ_1≤≤π/2⇔ 0<_2/_1<1 , >0 →π/2 ≤≤π-ψ_1 ⇔_2/_1>1 . Thus, for ψ_1<<π-ψ_1 we cover all possible boundary intervals. Notice that the absolute value ofdoes not influence the ratio μ=_2/_1, which is a consequence of scale invariance. By a lengthy computation, one can show that μ(α) is a monotonically increasing function, which shows that there is a unique RT surface given _1 and _2.What remains is to compute the length of the geodesic to obtain the entanglement entropy. Parametrizing the semicircle as (x=Rcos(þ), z=R sin(þ)) the length of the geodesic depends only on the initial and final opening angles :L=ℓln(sinþ_f(1+cosþ_0)/sinþ_0(1+cosþ_f)) .Using (<ref>), and denoting the z_i-cutoffs _i, we find the length of the full geodesic (see app.<ref>):L= ℓ_1 ln(_1/_1) + ℓ_2ln(_2/_2)+ℓ_1 log(2/tan(-ψ_1/2)(1+cos()/cos(ψ_1)))+ℓ_2 log(2tan(+ψ_2/2)/1-cos()/cos(ψ_2))_g(ξ) . The quantity on the second line is independent of , and thus depends only on the ratio μ. The fact that (<ref>) takes the form L=ℓ_1 ln(_1/_1) + ℓ_2ln(_2/)+g(μ) is no coincidence. To understand why, consider the more general case in which p_1 =(τ_1=0,x_1=1), p_2=(τ_2≠ 0, x_2=σ_2) (where we used a scale transformation and τ-translation to choose the p_1 coordinates). As we have shown in <ref>, there is one Virasoro that survives the introduction of the interface. Constraining ourselves to the global conformal transformations to remain in the vacuum state, we can use those to bring τ_2 to zero (more specifically, we use a special conformal transformation centered on p_1). These CFT transformations translate to isometries of the bulk so it follows that the geodesic length between the two points is also unchanged. Using these isometries, it is possible to restrict the general form of the geodesic distance between two points.However, we can identify several quantities which are invariant under all the isometries such as ξ=-(t_1-t_2)^2+(x_1-x_2)^2/4x_1x_2, x_i/z_i or -(t_1-t_2)^2+(x_1-x_2)^2+(z_1-z_2)^2/4z_1z_2. Thus, this isn't sufficient to constrain it to the form identified above. We must take into account the additional fact that the geodesics are anchored to the boundary (or more precisely on the IR cutoff surface). A more elegant way to restrict the generic form, is to exploit the conformal symmetry of the dual theory. 
Indeed, as described in <cit.>(see also <cit.>) another interpretation of boundary-anchored geodesic is that they compute two-point functions of operators of large scaling dimension. Then, we can transfer the form of the generic two-point function into the form of the geodesic length between two points p_i = (t=τ_i,x=_i,z=_i):L= ℓ_1 ln(_1/_1) + ℓ_2ln(_2/_2) +g(ξ) , ξ =-(τ_1-τ_2)^2+(x_1-x_2)^2/4x_1x_2 .This formula embodies the procedure we used to obtain (<ref>), by a succession of coordinate changes. We can use (<ref>) together with (<ref>) to compute the entanglement entropy of any boundary interval for which the two points lie on either side of the interface.The only missing piece is that we must invert an equation to extract the function α(μ). This can in fact obtained analytically, although the expression is not very elegant. The defining equation is obtained with (<ref>), and after some massaging : μ = c_2√(1-c_^2)+c_ s_2/c_1√(1-c_^2)-c_ s_1×c_+c_1/c_2-c_ ,where we use the shorthands c_i = cos(ψ_i), c_ = cos(). Eq. (<ref>) can be reduced to a 4th-order polynomial equation after squaring. Solving yields 4 candidate solutions for c_(μ), but after plugging them back in (<ref>), only two remain. Imposing the further condition |c_|<1, only one remains, as we expected from the monotonicity of μ() : cos() = (-1+μ^2)(c_1+c_2)-√(2)(μ-1)sin(ψ_1+ψ_2/2)√((1+μ^2)-(μ-1)^2cos(ψ_1-ψ_2)+4μcos(ψ_1+ψ_2))/2(1+μ^2+2μcos(ψ_1+ψ_2)) . The equation is quite unpalatable, but by plugging it into (<ref>), together with the RT prescription we obtain an analytic expression of the entanglement entropy of any interval in the boundary CFT, which is remarkable.Let us conclude by considering the "End of the World" membrane limit, where side 1 disappears. This is realized by taking the limit ℓ_1/ℓ_2→ 0, which is ψ_1→π/2 in terms of the angles. In this limit, since side 1 effectively disappears, the geodesic in fig.<ref> becomes anchored at the membrane. Because the green boundary and the membrane are co-linear in this limit, O_1 moves to the origin, and the geodesic becomes perpendicular to the wall. We recover in this way the BCFT prescription <cit.> for RT surfaces.§.§ Same-side intervalsWe consider now the case where the two endpoints of the boundary interval are located on the same side. Let us begin with the case where the two points are located on the true vacuum, i.e. the CFT with lower central charge. In this case, it is easy to see that the RT surface will be the same as in the trivial case with no boundary. From now on, we will call "trivial" the geodesics that do not cross the membrane.From the constraints (<ref>) it is clear that a trivial geodesic will always be allowed, as the semicircle will never intersect the membrane. Let us consider a putative non-trivial geodesic, that crosses into the false vacuum. This curve will be necessarily longer than the trivial geodesic. Indeed, from (<ref>), the portion of the geodesic in the false vacuum will obtain a multiplicative factor of ℓ_2/ℓ_1>1, with respect to the length it would have had in the true vacuum. 
Adding this to the fact we are deviating from the trivial geodesic path for this excursion into the other side, it is clear that its length must be higher than the trivial one.Although this argument alone does not exclude the existence of non-minimal extremal curves of this type, we will confirm they do not exist through the geometric construction that will follow.The entanglement structure of intervals lying on the true vacuum is thus unchanged from the "pure" case given by (<ref>).Things get more interesting as we move the boundary segment to the false vacuum. Both arguments given in the previous paragraphs fail here. For once, ψ_2 can be negative, and so given big enough intervals the trivial geodesic will intersect the membrane, such that the true geodesic must necessarily be non-trivial. On the other hand, even when ψ_2>0, taking some extra distance to cross onto the other side might be worth it. Indeed, this can be seen as taking a "shortcut" through the false vacuum, where distances are shrunk by a factor of ℓ_1/ℓ_2.The typical non-trivial geodesic is depicted in fig.<ref>.The Euclidean construction is again parametrized by the angle φ and center O_1. The full geodesic is composed of three circle arcs that are tangent to each other on the membrane. Again, with repeated applications of the sine law one can obtain expressions for the final and initial points (see app. <ref> for details) : _1= -(cos(ψ_2)/cos(α)-1) , _2= -sin(+ψ_2)/sin(ψ_1-)sin(+ψ_1)/sin(-ψ_2)(1+cos(ψ_2)/cos()) , μ = _2/_1 = sin(+ψ_2)/sin(ψ_1-)sin(+ψ_1)/sin(-ψ_2)(cos(α)+cos(ψ_2))/(cos(ψ_2)-cos(α)) . While the formulas seem only slightly more complicated than (<ref>), the situation is quite a bit more involved in this case. Let us first look at the allowed range for α=φ-ψ_2. To find it, we enforce _1>0 and _2>0, as well as that the intersection on the membrane takes place at z>0.By symmetry we will later restrict further to _2>_1. After tedious but straightforward computations, we find : <0 → |ψ_2|≤≤ψ_1⇒_2/_1>1 , >0 →π-ψ_1 ≤≤π- Max(ψ_2,0) ⇒_2/_1<1 .From (<ref>), →π- sends _1/_2→_2/_1. Thus, it should suffice to consider only one of the two cases of (<ref>). But as we can see, the ranges of the two options for sign() don't seem to coincide under this mapping. In fact, this is simply an artifact of our Euclidean construction that introduced a dissymmetry between _1 and _2, because we considered the intersections x_1 to lie in the "positive" side of the half-line O_1x_1. By considering intersections on the other half-line, the range Max(ψ_2,0)<α<0 (with >0) is indeed allowed, restoring the symmetry _2↔_1.For what follows we will usually consider Max(ψ_2,0)<α<ψ_1 and thus μ>1, since it is the situation depicted in fig.<ref>. Unlike in the previous section, the function μ(α) is not monotonic. Indeed, it has a unique minimum at α_ crit : sin(α_ crit) = 1/4(-sin(2ψ_1)/cos(ψ_2)+√(sin^2(2ψ_1)/cos^2(ψ_2)+8(1-cos(2ψ_1+ψ_2)/cos(ψ_2)))),which we verify to always lie in the allowedrange. This implies that at least in some range, there are two distinct non-trivial geodesics. Looking at the boundary limits, we have to distinguish the two cases ψ_2<0 and ψ_2>0 : lim_→ψ_1μ()= ∞ , lim_→ Max(ψ_2,0)μ()=∞ if ψ_2≥01-cos(ψ_2)/1+cos(ψ_2) if ψ_2< 0 . In fig. <ref> we plot the curve μ() for both signs of ψ_2. An interesting remark arising from this analysis is yet again the appearance of the mysterious "critical tension" _0 (see <ref>), which is the tension corresponding to ψ_2=0. 
For ψ_2>0 or overcritical tensions, there are always two candidate non-trivial geodesics, while for ψ_2<0 or subcritical tensions, this is only the case for small enough μ. In this case, this will have no bearing on the RT surface as we will see that the geodesics for <_crit are never dominant. However, in the interpretation of <cit.>, where the geodesics compute two-point functions, the subdominant ones do contribute to corrections, so in this case, the crossing of the critical tension does produce a measurable field theory effect. In spite of all these hints, a deeper understanding of the meaning of this critical tension is still lacking.A last remark concerns the disappearance of trivial geodesics. It is easy to see that when ψ_2<0 and μ>1-cos(ψ_2)/1+cos(ψ_2)=μ(=0)≡μ_min, they cross the membrane and thus cease to exist. As we approach this limit from below, the crossing geodesics with <_crit approaches the trivial one, matching at μ=μ_min. Perhaps there is a deeper explanation for the simultaneous disappearance of these two geodesic paths from the field theory POV, but we have not been able to elucidate it.As in the previous case, we can obtain the geodesic length analytically (see <ref> for more details) :L = ℓ_2 ln(_1/_1)+ℓ_2ln(_2/_2)+ℓ_2ln(4 tan(+ψ_2/2)/tan(-ψ_2/2)(1-cos^2()/cos^2(ψ_2)))+ℓ_1 ln(tan(+ψ_1/2)/tan(ψ_1-/2))_g(ξ) .As expected, the length is invariant under →π- which is the inversion _2↔_1, and we have arranged it in the form (<ref>). This length is to be compared to the trivial one which can be written as:L_ triv =ℓ_2 ln(4 R^2/_1_2) R=_2-_1/2 . To compare the two lengths we should use (<ref>) to express L_ triv in terms of . Let us call Δ L() = L()-L_ triv() the difference between trivial and non-trivial geodesic.Unfortunately, the expression is much too complicated to hope to have a closed expression for _ trans, the value where dominating geodesic switches. However, we can again inspect the different limits to distill some properties. For instance, we can show that no matter what, when ℓ_1<ℓ_2, there will always be non-trivial geodesics dominating for large enough intervals. Indeed, in the limit →ψ_1, where _2/_1→∞ :L-L_ triv = (ℓ_2-ℓ_1)ln(ψ_1-)+O(1) . Thus in this limit, the non-trivial geodesic with >_ crit always dominates. In doing this analysis, we expected to find a "critical angle" for the brane, over which the non-trivial geodesic stopped existing, as found in <cit.>, in the context of Boundary CFT. However, as they mention in the paper, this critical angle appears only in larger than three dimensions for the bulk, so the absence of a critical angle in our model is consistent with their BCFT findings (the BCFT can be obtained in the limit ℓ_1→ 0, where CFT_1 disappears and the interface becomes a boundary). It would be interesting to extend the results of <cit.> in higher dimensions to the more general case of Interface CFTs.Similarly, we can inspect the lower limit → Max(ψ_2,0). The interesting case is when ψ_2>0, otherwise the non-trivial and trivial geodesics coincide as → 0. For ψ_2>0 :L-L_ triv(=ψ_2)= ℓ_2 ln(sin(ψ_1-ψ_2)/cos^2(ψ_2)sin(ψ_1+ψ_2))+ℓ_1ln(tan(ψ_1+ψ_2/2)/tan(ψ_1-ψ_2/2)) ,which is always positive, and therefore in this limit the trivial geodesic is always preferred. By plotting the curve and playing with the parameters, we can make further statements. The typical curve L-L_ triv is depicted in fig.<ref>. As we can see, and as we verified numerically by swiping over the parameter range for ψ_i, it has a single maximum. 
Incidentally, we verified that this maximum is attained precisely for α_crit (<ref>), which is when μ(α) is minimal. There is probably a way to prove this analytically, but we haven't been able to do so yet. It can however be understood intuitively: at α = α_crit, the non-trivial geodesic is tangent to the membrane, which is the worst-case scenario since we don't benefit from the shortcut through side 1. As we grow μ, either by increasing or decreasing α, the geodesic can pass deeper into the true vacuum, thus shortening its path. Combining this with the limit (<ref>), this shows that the non-trivial geodesics which lie at α < α_crit are always subleading when they exist.

To complete our analysis we would like to invert (<ref>) to find α(μ). This appears to be a straightforward exercise, much like in the previous section, and it is in theory; but in practice we have encountered hurdles that have prevented us from obtaining a closed analytical expression in this case. Indeed, consider the equation that is to be solved:

μ = (c_α+c_2)/(c_2-c_α) × (√(1-c_α^2) c_2 + s_2 c_α)(√(1-c_α^2) c_1 + s_1 c_α)/[(s_1 c_α - √(1-c_α^2) c_1)(√(1-c_α^2) c_2 - s_2 c_α)].

This can be solved analytically by squaring away the square roots, which yields a 4th-order polynomial in c_α.[Strictly speaking we obtain a 6th-order polynomial, but there are 2 roots that are evident, and aren't valid solutions to (<ref>).] We do not include the full expressions for the roots as they are too cumbersome, but they take the form

c_α = A + √(q) ± √(p_2-p_1), c_α = A - √(q) ± √(p_2+p_1),

where A, q and p_i depend on ψ_i and μ. The complicated step lies in choosing which of the four candidate solutions are actually solutions of (<ref>). This is done by substituting (<ref>) into (<ref>) and verifying it is a solution, as well as checking that c_α lies in the range (<ref>). We have attempted this with the aid of Mathematica, but the resulting expressions were humongous, and their simplification relied on several assumptions on the range of parameters, which made the analytic simplification computationally intractable.

Thus, we turn to numerical verifications. This allows us to find the correct solutions among (<ref>), but only for a given value of μ, ψ_i (a minimal sketch of such a numerical selection is given below). One could hope that the correct solutions do not change as we move the parameters, but this is not the case. See fig.<ref> for an example. Looking at fig. <ref>, we need to do some "cut and patch" of different solutions to actually obtain the inverse of the curve displayed in fig.<ref>. This can be understood by looking at the form (<ref>). Say that q has a double root for some μ=μ^*, at fixed ψ_i. Then locally around that point, √(q) ≈ √(q''(μ^*)(μ-μ^*)^2) = √(q''(μ^*)) |μ-μ^*|. Because of the absolute value, the solutions (<ref>) will have a kink at μ^*. Then, to obtain smooth solutions we must combine the ±√(q) solutions for μ<μ^* with the ∓√(q) solutions for μ>μ^*. Another situation is of the type f(μ)/√(q(μ)), where f(μ) has a simple root at μ^*, while q(μ) has a double root. In that case, this will induce a discontinuity of 2f'(μ^*)/√(q''(μ^*)) in the function. Both cases are displayed in fig.<ref>. As it turns out, we found that those "inversions" of sign can happen for √(q), √(p_2-p_1), and for p_1 as well, since it contains a term ∝ 1/√(q). To be able to obtain an inverse of (<ref>) for any ψ_i, one would need to analytically determine the roots of the aforementioned quantities.
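For fixed μ and ψ_i, the numerical selection announced above can be done directly, without passing through the polynomial roots, by inverting μ(α) on the allowed range. The following Python sketch uses brentq on bracketed sign changes; the function mu_of_alpha is only a stand-in coded from the double-crossing ratio as it is printed in (<ref>), and the parameter values are illustrative, so it should be checked against that equation before use.

import numpy as np
from scipy.optimize import brentq

psi1, psi2 = 0.9, 0.2      # illustrative brane angles
mu_target = 200.0          # illustrative value of sigma_2/sigma_1

def mu_of_alpha(alpha):
    # stand-in for the double-crossing ratio mu(alpha) of eq. (<ref>)
    return (np.sin(alpha + psi2)/np.sin(psi1 - alpha)
            * np.sin(alpha + psi1)/np.sin(alpha - psi2)
            * (np.cos(alpha) + np.cos(psi2))/(np.cos(psi2) - np.cos(alpha)))

# scan the allowed range max(psi2, 0) < alpha < psi1 and bracket sign changes
alphas = np.linspace(max(psi2, 0) + 1e-4, psi1 - 1e-4, 2000)
vals = np.array([mu_of_alpha(al) - mu_target for al in alphas])
roots = [brentq(lambda al: mu_of_alpha(al) - mu_target, a1, a2)
         for a1, a2, v1, v2 in zip(alphas, alphas[1:], vals, vals[1:])
         if v1*v2 < 0]
print(roots)   # for mu above the minimum of mu(alpha), two roots bracket alpha_crit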
Unfortunately, determining these roots analytically turns out to be impossible, as the equations involved are much too complicated, involving high powers and various trigonometric functions. Thus, in the double-crossing case a closed form like (<ref>) is lacking. Of course, (<ref>) combined with a quick numerical check to select the correct solutions still allows us to compute entanglement entropies relatively quickly, as locally we even have an analytic expression.

§ APPLICATION TO MORE GENERAL GEOMETRIES

The extended analysis of the previous sections was done in the hope of being able to bootstrap these computations to the entanglement structure of the solutions of chap.<ref>, and possibly also of the solutions of chap.<ref>. Unfortunately, in many cases we will see that this method is not applicable.

§.§ The thermal ICFT state

Let us begin by analysing the non-compact thermal static solution with an interface, which is the J→0 limit of the NESS state in chap.<ref>. We recall the bulk metric:

ds^2 = -(r_j^2 - Mℓ_j^2) dt^2 + ℓ_j^2 dr_j^2/(r_j^2 - Mℓ_j^2) + r_j^2 dx^2.

The solution is given by (<ref>) for J=0, M_1=M_2. One can verify that in this case the wall's induced metric is AdS_2, and we can integrate (<ref>) to obtain the explicit form for the wall:

x_j(r) = - sign(λ^2 ± λ_0^2)/√(M) arcoth(√(b_±^2 ℓ_j^2 + r^2)/r),

where the ± is +(-) for j=1(j=2), while b_±^2 = (λ^2-λ_min^2)(λ_max^2-λ^2)/[M(λ^2 ± λ_0^2)^2]. The gluing is made by identifying r_j^2 - Mℓ_j^2 along the membrane.

The goal is now to find a coordinate transformation that brings this solution to the vacuum one above. The additional difficulty compared to what was done in sec.<ref> is that now we have the membrane shape to keep track of. Since the membrane breaks some of the isometries of Poincaré, it will not be sufficient to find the coordinate change mapping (<ref>) to (<ref>); the shape of the membrane will also be important. The procedure we employ to identify the correct coordinate change is as follows. First, we find an arbitrary coordinate change which brings us to the correct bulk metric, namely Poincaré in this case. After that, we restrict ourselves only to isometries of Poincaré space. Generally, some of these isometries will act non-trivially on the membrane, but they will leave the bulk metric unchanged. Using those, we attempt to bring the membrane to the form (<ref>). For this step, one can use the dual field theory to greatly facilitate the problem. Indeed, according to sec.<ref>, given the boundary state and the interface shape, the bulk dual is completely fixed. Thus, instead of trying to match the 2-dimensional membranes in the bulk, it is sufficient to match the 1-dimensional interfaces in the dual theory, by using only global conformal transformations.

Following this procedure, we begin with the following change of coordinates:

x_p' ± t_p' = √(r^2 - Mℓ^2)/r · e^{√(M)(x ± t)}, z_p' = √(M) ℓ/r · e^{x√(M)},

which brings (<ref>) to the Poincaré metric. By taking the limit r→∞, we see that on the boundary the interface is mapped from x=0 to

-t_p'^2 + x_p'^2 = 1.

This initially seems strange, as (<ref>) describes two disconnected interfaces. However, from the coordinate change (<ref>) we reach only the upper right quadrant of the plane (x_p'-t_p', x_p'+t_p'), which ensures that we reach only one of the two disconnected pieces. The other 3 quadrants would be reached by considering the maximally extended black string (which, as it contains another boundary, also contains another membrane), whereas our change of coordinates covers only the exterior of the solution (<ref>).
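The statement that this change of coordinates brings the black-string metric to the Poincaré form can be checked symbolically. A minimal SymPy sketch follows; it reconstructs the map as written above, with coordinate ordering (t, r, x), and is only meant as a consistency check of the formulas, not as part of the derivation.

import sympy as sp

r, t, x, ell, M = sp.symbols('r t x ell M', positive=True)

# Coordinate change (<ref>), as read off above
h  = sp.sqrt(r**2 - M*ell**2)
wp = h/r*sp.exp(sp.sqrt(M)*(x + t))     # x_p' + t_p'
wm = h/r*sp.exp(sp.sqrt(M)*(x - t))     # x_p' - t_p'
z  = sp.sqrt(M)*ell/r*sp.exp(sp.sqrt(M)*x)

coords = (t, r, x)
def d(f):
    # differential of f as a column of partial derivatives
    return sp.Matrix([sp.diff(f, c) for c in coords])

dwp, dwm, dz = d(wp), d(wm), d(z)

# Pull back the Poincare metric ell^2/z^2 (dw_+ dw_- + dz^2)
g = ell**2/z**2*(dwp*dwm.T/2 + dwm*dwp.T/2 + dz*dz.T)

# Compare with the black-string metric (<ref>) in the ordering (t, r, x)
target = sp.diag(-(r**2 - M*ell**2), ell**2/(r**2 - M*ell**2), r**2)
print(sp.simplify(g - target))   # -> zero matrix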
In any case, that is all we need to compute RT surfaces, since for the static situation they will not enter the horizon.What remains to be done is to find a global conformal transformation that maps (<ref>) to the x=0 interface. This case is fairly simple, as (<ref>) is a hyperbola, which becomes a circle if we wick rotate to Euclidean coordinates. As is well known, any circle can be conformally mapped to a line by a special conformal transformation combined with a translation. In this case, the combination of the two transformations read :x_p=2 (x'^2-t'^2-1)/1+x'^2-t'^2+2 x' ,t_p=4t'/1+x'^2-t'^2+2 x' ,which indeed maps (<ref>) to the interface x=0. Notice that from the allowed quadrant mentioned before, we reach only the portion -2<t_p<2 of the interface. At the risk of repeating ourselves, the rest of the geometry corresponds to portions of the maximally extended black string solutions, which are of no interest to us when computing entanglement entropies of intervals lying on a single asymptotic boundary. From (<ref>) we can compute the associated coordinate transformation in the bulk (see sec.<ref>), which is simply a special conformal transformation of the three coordinates t,x,z and it is an isometry of Poincaré since the 1/z^2 prefactor will cancel the overall scale factor :x_p=2 (x'_p^2-t'_p^2+z'_p^2-1)/1+x'_p^2-t'_p^2+z'_p^2+2 x'_p) ,t_p=4t'_p/1+x'_p^2-t'_p^2+z'_p^2+2 x'_p ,z_p=4z'_p/1+x'_p^2-t'_p^2+z'_p^2+2 x'_p .Combining everything, we have the coordinate change that brings the static ICFT black string solution to the static ICFT vacuum solution :z_p= 2 √(Mℓ^2)/cosh(√(M)t)√(r^2-Mℓ^2)+rcosh(x√(M)) ,t_p = 2 √(r^2-M ℓ^2)sinh(√(M)t)/cosh(√(M)t)√(r^2-Mℓ^2)+rcosh(x√(M)) ,x_p =2rsinh(√(M)x)/cosh(√(M)t)√(r^2-Mℓ^2)+rcosh(x√(M)) . We can then leverage the equations of the previous section to compute, in principle analytically, the entanglement structure of the theory in the thermal equilibrium state. Analyzing the resulting formulas for the entanglement entropy is unfeasible because of their complexity, so we must content ourselves with plots. Consider boundary intervals at constant time, t=0, and x_i>0 in the folded picture. Notice that under (<ref>), these are mapped to constant time intervals with 0<x_p^i=_i<2. However, since the difference L-L_triv depends only on the ratio _2/_1, we do not expect a qualitative change in behavior for the transitions between trivial and non-trivial geodesics, in the case where the two points lie on the same side. Since this transition generally takes place for _2/_1≫1 (or _2/_1≪1), and _2≈ 2 if x_2≫1, we estimate that the switching to the double-crossing geodesic happens for x_1<<2/√(M). Indeed, this can be understood intuitively because the horizon of the BH puts a cap to how deep can the geodesic go. Thus, crossing to the true vacuum side is not as enticing as in the vacuum solution, since the geodesic will have to curve back to the false vacuum side as soon as it reaches the horizon. Therefore for this excursion to be worthwhile, the geodesic should start ever closer to the interface as M increases (and the horizon approaches the boundary), which is confirmed by our estimation x_1≪2/√(M).Note that one difference is that in the BTZ case, the inversion symmetry x_i→ 1/x_i no longer holds, as the cross-ratio is expressed differently in this coordinate system. In particular, non-trivial geodesics will exist only if one of the points satisfies x_i≪1/√(M). 
This is in contrast to the vacuum case, where we have non-trivial geodesics as soon as _1/_2 ≫ 1 (or ≪ 1), regardless of the value of the individual _i. Beyond that, there is not much to say about the entanglement structure in the black string case, save for linear growth of the entropy as the interval grows larger, which is the tell-tale of a thermal state. We plot two curves depicting the entanglement entropy for intervals containing the interface, and intervals contained in the false vacuum in fig.<ref>.The study of RT surfaces in the thermal state of the ICFT has very interesting applications in itself, especially in Page-curve computations in eternal black hole setups <cit.>. However from the point of view of the entanglement structure, we did not expect any surprises, and the difference between vacuum and thermal ICFT is similar to the case without interface. §.§ The NESS ICFT stateThe computation of the previous section was done as a preparation for the real goal, which is computing HRT surfaces in the NESS state of chap.<ref>. Indeed, in this state we hope to find exotic behavior for the HRT surfaces, since the apparent horizons do not match at the membrane. While spacelike geodesics with two anchor points on the boundary cannot cross these, it would be conceivable for an HRT surface to cross the membrane outside one horizon, and emerge inside the other, see fig.<ref>. Indeed, while in the spinning string, a geodesic with two anchor points on the boundary cannot cross the apparent horizon, there are geodesics with a single anchor point on the boundary that do (see app.<ref>). In fact, such an occurence seems almost inevitable as we consider a growing boundary interval containing the interface. As shown in fig. <ref>, spacelike geodesics tend to approach the (apparent) horizon, and thus as they hit the membrane from the colder side, we would expect them to emerge behind the apparent horizon of the hotter side.The other compelling reason to look at the entanglement structure of this state is to definitively prove that the interface indeed acts as a perfect scrambler, and to understand the entropy production from the gravity perspective. It is not immediately clear if such a thing can be computed within the stationary NESS state. Indeed, to compute the entropy production at the interface, one would naturally consider the entanglement entropy for an interval containing it. In the state preparation depicted in fig.<ref>, the shockwave due to the quench will alter the dual geometry, and the RT surface will be modified, resulting in a time-varying entanglement entropy for the interval. Then, taking the derivative of this quantity would yield the entropy production at the interface.However, from the geometries computed in chap.<ref>, we have access only to the gray area of fig.<ref>, i.e. only the stationary state. Then, the entropy of an interval containing the interface will naturally be constant simply because the HRT surface computing its value will not have time-dependence. This is unavoidable by the very fact we are considering a stationary state. Still, in the microscopic model, even in the stationary state, the knowledge of the "S-matrix" of the interface should allow us to compute the entropy production. The question of whether this information is accessible from the minimal model under consideration is still open. One possible direction would be to consider the entropy (left and right-moving) currents which will naturally be constant in time. 
While the incoming currents are certainly thermal(they are prepared to be such) the outgoing currents from the interface can have a non-trivial spatial profile, which tells us how different wave modes get scrambled after scattering<cit.>. Therefore, looking at the behavior of the intervals entropy under a spatial variation of the endpoints might be a way to probe the entropy production at the interface. Note that at infinity these currents become also thermal, as we have verified by applying the HRT prescription for intervals far removed from the interface, and confirming the trivial surface does not intersect the membrane (thus recovering <ref>, which shows that the dual state in the considered region is indeed thermal).At any rate, all of that rests on our ability to compute said HRT surfaces. We will make a first attempt by mirroring the methods of the previous subsection. We will be considering the metric in Eddington-Finkelstein coordinates, as we expect the interiors of the black strings to play an important role here, unlike the static case. We remind the form of the relevant metric :ds^2=-(r^2-r_+^2)(r^2-r_-^2)/r^2dv^2+r^2(dy- sign(J)r_-r_+/r^2dv)^2+2ℓ dvdr,where we expressed it in terms of the location of the horizons r_±, see (<ref>) for their definitions in terms of M, J.Composing (<ref>) with (<ref>) we can find a change of coordinates mapping to Poincaré :x_p+t_p=w_+ =r+r_+/r+r_-exp((r_+-r_-)(s y+v)/ℓ) ,x_p-t_p=w_- =r-r_+/r+r_-exp((r_++r_-)(s y-v)/ℓ) ,z =√(r_+^2-r_-^2)/r+r_-exp(s y r_+-v r_-/ℓ) ,where s= sign(J), which we will take to be positive from now on. In (<ref>), we chose the change of coordinates such that the geometry is mapped to the quadrants w_+>0, where the exterior is w_->0, and the interior w_-<0.Like before, let us look what is the shape of the interface in these coordinates. By plugging y=0, r=∞ we find that the interface in the new coordinates satisfies the equation :w_+^r_++r_-w_-^r_+-r_-=1. Of course we recover (<ref>) when J=0 and r_-=0.The next step in the procedure would be to find a global conformal transformation that maps (<ref>) to the interface x=0. Unfortunately, we are out of luck here; it can be shown that no such transformation exists. In a way, this outcome should have been expected from the get-go. Unlike the "pure" case with no membrane, here the non-equilibrium NESS state is not an equilibrium state in disguise. In hindsight, it would have been stranger to be able to map it to the static vacuum case, as that would imply that the NESS state we had found was merely an equilibrium situation expressed in peculiar coordinates.That is however a serious setback in our hopes of computing RT surfaces in this setup. Indeed, without access to the beautiful Euclidean geometry description of sec.<ref>, the problem at hand seems much harder to tackle even numerically, let alone analytically. In the next section, we outline some attempts at semi-numerical algorithms, which are still a work in progress. §.§ Other static geometriesLet us finally briefly mention the possibility to map the vacuum ICFT case to the static geometries of chap.<ref>. Here again, the question is disappointingly quickly answered by looking at the induced metric on the membrane. Indeed, as remarked before, the membrane metric in the vacuum case of sec.<ref> is AdS_2; in particular, its Ricci scalar is a constant. For the solutions of chap.<ref> (eq.(<ref>)), this is not the case except when M_1=M_2. 
So the analytical results of the vacuum ICFT case are applicable only to this very narrow subset of solutions. For the M<0 case, the relevant mapping linking Poincaré (<ref>) to global coordinates (<ref>) is

z_p = ℓ^2 √(-M)/[√(r^2-Mℓ^2) cos(√(-M) t) + r cos(√(-M) x)],
t_p = ℓ √(r^2-Mℓ^2) sin(√(-M) t)/[√(r^2-Mℓ^2) cos(√(-M) t) + r cos(√(-M) x)],
x_p = ℓ r sin(√(-M) x)/[√(r^2-Mℓ^2) cos(√(-M) t) + r cos(√(-M) x)].

Note that we must constrain z_p>0, so that only a portion of the global space is mapped by (<ref>). This change of coordinates is specifically adapted to the interface located at x=0 (the other interface is located at x=π/(2√(-M)), as can be seen by integrating the solutions (<ref>) in the case M_1=M_2=M<0). This also explains the "disappearance" of one of the two interfaces under (<ref>). Consider the Cauchy slice t=0; under (<ref>), the full slice is mapped to Poincaré coordinates except for a single point, x=π/(2√(-M)), the position of the second interface. The same transformation (<ref>), but with translated x, would have mapped both interfaces, although they would not have the simple shape x=cst which is needed if we want to apply the results of sec.<ref>.

Since we can map the full Cauchy slice t=0 (except one point), the findings of sec.<ref> are almost directly applicable to the global solutions. Of particular interest are the remarks made about the connection of the various RT surfaces with the critical tension λ_0 (see the paragraph after fig.<ref>). Indeed, in the case where M_1=M_2<0, the sweeping transition happens precisely at λ=λ_0 (which does not hold when M_1≠M_2). With some speculation, this leads us to two possible conjectures concerning the sweeping transition. In the following, a "candidate RT surface" is an allowable spacelike geodesic in the RT prescription, which is not necessarily the minimal one.

* Centerless slices admit no trivial candidate RT surfaces for sufficiently big boundary intervals (while centerful ones always do).
* Centerless slices admit only one crossing candidate RT surface for sufficiently big boundary intervals (while centerful ones admit two).

In the simpler case M_1=M_2, both conditions are satisfied. Unfortunately, the first conjecture can be immediately falsified. Indeed, according to the conditions (<ref>), we can have centerful solutions with tension λ<λ_0, given ℓ_1<√(2)ℓ_2. For these solutions, the near-boundary membrane has ψ_2<0. So for big enough boundary intervals the trivial geodesic will intersect the membrane and cease existing, invalidating the first conjecture. One could amend the conjecture to link the disappearance of the trivial geodesic to the critical tension λ_0 instead of the sweeping transition, but it is also unclear if this statement holds for M_1≠M_2. The second conjecture cannot be dismissed as easily, and one must again turn to numerical methods. This is a work in progress, as we are still in search of a satisfying all-encompassing algorithm, as explained in the following section.

§ NUMERICAL ATTEMPTS

We describe briefly in this section some attempts at obtaining HRT surfaces in these more complicated geometries with the use of numerical methods. The algorithms we will present were developed mainly for use in the stationary NESS geometry, but with the aim of applying them also to the static geometries of chap.<ref>. Ironically, the simplicity of the thin-brane model is actually the main difficulty in implementing numerical methods.
Indeed, for a smooth metric, standard shooting algorithms would presumably suffice to find the sought-out HRT surfaces. Let us restate the problem to set notation. The spacetime under consideration is composed of two halves of locally AdS_3 spaces (labeled 1 and 2), which are joined through a codimension-one (two-dimensional) membrane, whose embedding equation on each side, x^μ_m,i(ξ^a), is known. Given any two spacelike separated points p_1 and p_2, located on an IR cutoff surface that is arbitrarily close to the boundary, we seek to find all possible spacelike geodesics connecting them. We have identified two different possible directions for the algorithms, each with its own pros and cons.

§.§ Semi-Numerical algorithms

In the case of a homogeneous spacetime, we have shown in sec.<ref> that we are able to find analytically the geodesics connecting two given points in the bulk. We can use this to greatly reduce the amount of numerics needed, hence the name "semi-numerical". When we need to connect two points in the same slice, we may consider the geodesic path as known analytically (with some caveats we will mention), as long as we have an analytic change of coordinates mapping our metric to Poincaré space. In what follows, we consider only possible non-trivial geodesics, i.e. geodesics that cross the interface.

§.§.§ The minimization method

An allowable non-trivial geodesic will cross the membrane in such a way that the crossing constraints (<ref>) are satisfied. Consider first the case where the p_i lie on different sides of the interface. In this algorithm, we begin by picking an arbitrary point on the interface, parametrised by ξ^a_*, such that the corresponding points on both sides (x^μ_m,i(ξ^a_*)) are spacelike separated from the boundary points p_i. In this way, we obtain a curve composed of two geodesic pieces joined at the interface. This path will in general not satisfy the crossing constraints, and this is where the numerics come in.

The crossing equations (<ref>) then determine the correct ξ^a_*. Although these equations can be written down completely analytically in our setups, it is in general not possible to solve them analytically. Thus, we use standard numerical methods, such as the Newton-Raphson algorithm, which requires the equations to be differentiable, or secant methods, which do not; both are natively implemented in Mathematica through the "FindRoot" function. In the case where p_1 and p_2 are on the same side, the algorithm is similar. The difference is that now the problem is parametrised by two arbitrary points on the membrane, and we obtain two sets of crossing equations. In this case, we need to solve twice as many algebraic equations, with twice as many free parameters. Note that we have additional conditions that force all chosen points to be spacelike separated from one another.

On paper, this method looks great; after all, it requires numerics only in the resolution of a set of algebraic equations. However, it has several important complications in practice. The first big problem is the "spacelike separated" condition. In Poincaré coordinates, this condition is trivially verified by checking -Δ t ^2 +Δ x^2+Δ z^2>0. In other coordinates, the condition is usually non-trivial, and one has to employ the coordinate change to Poincaré to verify it, as in the short sketch below.
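For concreteness, a minimal version of this check might look as follows (a Python sketch; the function mapping to Poincaré coordinates is left abstract, since its explicit form depends on the patch and geometry under consideration):

def is_spacelike(p1, p2, to_poincare=lambda p: p):
    # p1, p2: bulk points in whatever coordinates are convenient;
    # to_poincare: map to Poincare coordinates (t, x, z), the identity if the
    # points are already given in Poincare coordinates.
    # The Poincare patch of AdS_3 is conformal to flat half-space, so the
    # causal character of a separation is the flat-space one.
    t1, x1, z1 = to_poincare(p1)
    t2, x2, z2 = to_poincare(p2)
    return -(t2 - t1)**2 + (x2 - x1)**2 + (z2 - z1)**2 > 0

# example with two points already given in Poincare coordinates (t, x, z)
print(is_spacelike((0.0, -1.0, 1.0), (0.2, 1.0, 0.5)))   # True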
This is an important issue, because it is difficult to implement these constraints into the numerical algorithms that solve the crossing equations. Generically, what tends to happen is that the algorithm attempts to evaluate the equations at a point ξ^a_* which does not satisfy the constraints, crashing the search as we transition to timelike geodesics. One can mend this problem somewhat by "sprinkling" the interface with several initial points ξ^a_*, in the hope that one falls close enough to the solution for the numerical solver to converge. This is of course not ideal, first of all because it significantly increases the computational cost, but mainly because one can never be certain whether a solution was not found due to insufficient sprinkling, or because it does not exist at all. Additionally, selecting an efficient sprinkling is also non-trivial, as the allowable regions are non-compact.

This problem disappears completely when we consider static geometries. There we can restrict ourselves from the outset to a specific Cauchy slice, which reduces the dimensionality of the problem but, more importantly, removes the timelike direction: any two points chosen in this slice are spacelike separated. Thus, this method seems better adapted to the computation of RT surfaces in the geometries of chap.<ref>.

Another, less prevalent but still important, problem is that of unwanted intersections. Because the spacetime is excised at the interface, there will be situations in which the geodesic crosses the interface in unintended places, see fig.<ref>. To avoid such unwanted solutions, we should check for spurious intersections with the membrane. This is actually a difficult problem, as the resulting equations are again not analytically solvable, requiring another numerical resolution. These constraints are difficult to reconcile with the numerical solution of the crossing equations, for the same reasons explained for the spacelike condition.

We have attempted to use this method in the static geometries of chap.<ref>, with reasonable success in the case of symmetric intervals. It remains a work in progress, however, to iron out the "unwanted crossings" issue, as well as to analyse the resulting numerical curves. For this reason, we do not find it useful to present the very preliminary results that we have at hand.

§.§.§ The shooting method

In this second approach, we parametrise the candidate HRT surface by its initial conditions. We start at p_1, where we pick an arbitrary spacelike unit vector ṗ_1. This is again best done in Poincaré coordinates, where the unit condition (<ref>) is explicitly solvable. This yields a closed-form expression for the geodesic in terms of (p_1,ṗ_1), as in the sketch below. The step which (generally) requires a numerical resolution is the determination of the intersection with the membrane. There are two options for expressing this condition: the equation can be written down either in Poincaré coordinates (where, presumably, the membrane equation is more complicated but the geodesics are simpler), or in the coordinate system naturally adapted to the membrane solution. If the geodesic is parametrised with , this equation yields ^*, the intersection point with the membrane, if it exists.
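One convenient way to package such a closed-form geodesic (not necessarily the parametrisation used above, but an equivalent one) is to pass through the embedding of AdS_3 in R^{2,2}, where spacelike geodesics are simple hyperbolic trajectories. A minimal Python sketch of one shooting step follows; the planar interface x=0 used at the end is only a toy stand-in for the actual membrane embedding:

import numpy as np
from scipy.optimize import brentq

ell = 1.0   # AdS radius, set to 1 in this sketch

def embed(p):
    # Poincare point p = (z, x, t) -> R^{2,2} embedding, ambient metric diag(-,+,+,-)
    z, x, t = p
    A = z**2 + x**2 - t**2
    return np.array([ell*t/z, ell*x/z, (A - ell**2)/(2*z), (A + ell**2)/(2*z)])

def unembed(X):
    # inverse map, valid on the Poincare patch where X[3] - X[2] > 0
    w = X[3] - X[2]
    return np.array([ell**2/w, ell*X[1]/w, ell*X[0]/w])

def geodesic(p1, u1, eps=1e-6):
    # proper-length parametrised spacelike geodesic with initial data (p1, u1),
    # u1 a unit spacelike tangent in Poincare coordinates
    p1, u1 = np.asarray(p1, float), np.asarray(u1, float)
    X0 = embed(p1)
    U = (embed(p1 + eps*u1) - embed(p1 - eps*u1))/(2*eps)   # pushforward of u1
    return lambda s: unembed(np.cosh(s/ell)*X0 + ell*np.sinh(s/ell)*U)

# initial data: a point on one side and a unit tangent pointing towards the interface
p1 = (1.0, -0.5, 0.0)         # (z, x, t)
u1 = (0.0, p1[0]/ell, 0.0)    # g(u1,u1) = (ell/z)^2 * (z/ell)^2 = 1
gam = geodesic(p1, u1)

# numerical determination of the crossing with the toy interface x = 0
s_star = brentq(lambda s: gam(s)[1], 0.0, 5.0)
print(s_star, gam(s_star))    # here s_star = arctanh(1/2) ~ 0.549

From the crossing data at the intersection point, the matching conditions then provide the initial data for the continuation of the geodesic on the other side, as described next.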
If no solution is found, one draws another initial vector ṗ_1 and repeats the procedure. Then, the crossing conditions (<ref>) determine new initial conditions (p_m,ṗ_m) for the geodesic on the other side, which we can follow until we either reach the boundary, escape to infinity, or hit the membrane again and repeat the process. In principle, we obtain in this way a parametrisation of the final point p_f in terms of the initial conditions, p_f(p_1,ṗ_1). What remains to be done is to solve the equation p_f(p_1,ṗ_1)=p_2 for ṗ_1, which again can in general only be done numerically, given the complexity of the function p_f.

The advantage of this approach is that we do not have to worry about the "spacelike separated" condition, as it is automatically satisfied by the choice of a spacelike tangent vector ṗ_1. Otherwise, it is a trade-off: instead of having to solve the crossing equations, we need to solve for the final point of the geodesic. Whether this is advantageous or not depends on the specifics of the geometry at hand. Generally, we run into many of the same problems that are difficult to avoid in numerical root-finding. First of all, we may miss the intersection of the geodesic with the membrane if the initialisation value for  is inappropriate, which again forces us to repeat the numerical resolution by sprinkling possible initial values _0. The other problem lies in the inversion of p_f(p_1,ṗ_1). While it should be a continuous function, in the attempts we made we found it to be extremely sensitive to the initial conditions. Coupled with the fact that there are geodesics that escape to infinity (fig.<ref>), this makes the numerical inversion of p_f unreliable at best.

§.§ Application to the interface NESS state

In this short section we present our attempts to apply the shooting method to the stationary states obtained in chap.<ref>. The reason we chose the shooting method over the minimization method is that, in the first place, we wanted to search for geodesics like those depicted in fig. <ref>. As the specific final point p_f was not the first priority in this case, the shooting method seemed the more appropriate choice. Recall the membrane embedding (<ref>):<ref> x_j=x_m,j(σ), r_j=r_j(σ), t_j = τ + f_j(σ) . The derivative of the horizon-entering membrane was exhibited in sec.<ref>:

x_m,j'() = -ℓ_j(λ^2±λ_0^2)σ+J_i/2(σ- σ_+^ Hj )(σ- σ_-^ Hj) √(A (σ - σ_-) ) ,
f_1'=f_2' = 1/4(J_1ℓ_1 x_m,1'-J_2ℓ_2x_m,2' ) ,

where the various constants are defined in sec.<ref>. Most importantly, recall that =r_j^2-M_jℓ_j^2. We will be working in Eddington-Finkelstein coordinates (<ref>) :

dv=dt+ ℓ dr/h(r) and dy= dx +Jℓ ^2 dr/ 2r^2h(r)<ref> .

In the shooting method the main difficulty is the determination of the intersection with the membrane, so it is important to express it as simply as possible. Thus we will explicitly perform the integration of (<ref>) and (<ref>). In the specific case (<ref>) where the membrane enters the horizon, its world-volume becomes AdS_2[It is as of now unclear what condition determines whether the world-volume of the membrane is AdS_2. It curiously appears to be the case whenever the membrane is dual to a single static interface (i.e. of equation x=cst), as is the case for horizon-entering membranes or the ones appearing in the vacuum ICFT case.
It would be interesting to explore whether this could be explained by the application of a Fefferman-Graham-like prescription on the worldvolume of the membrane.], and x_j' can be integrated in terms of elementary functions, instead of incomplete elliptic ones, which would add an additional numerical step to the computation. Integrating the coordinate change (with the convention that at the boundary it is vanishing), we find :y= x+Jℓ^2/4 (r_+^2-r_-^2)(ln(|r-r_+|/r+r_+)/r_+-ln(|r-r_-|/r+r_-)/r_-) ,v= t+ℓ/2 (r_+^2-r_-^2)(r_+ln(|r-r_+|/r+r_+)-r_-ln(|r-r_-|/r+r_-)) ,where r_± are the horizons. For the membrane, the equation is a little bit more involved :x_m(r)= -|J|ℓ^2/2(r_+^2-r_-^2)[Jg(r,^H_+)/r_+-(^2±_0^2)^H_-+J g(r,^H_-)/r_-] ,f_1(r)=-J^2/4(ℓ_1^3/2(r_1+^2-r_1-^2)[J_1g(r,^H1_+)/r_1+_+^H1-(^2+ _0^2)^H1_-+J_1 g(r,^H1_-)/r_1-^H1_-]+1↔ 2) ,g(r,a)= √(a)∫_√(-_-)^√((r)-_-)dt/t^2-a=-1/2(ln[√((r)-_-)+√(a-_-)/|√((r)-_-)-√(a-_-)|]-ln[√(-_-)+√(a-_-)/|√(-_-)-√(a-_-)|]) ,where we use the shortcut (r)=√(r^2-Mℓ^2) when useful, and we omitted the indices denoting the side when possible. The convention here is that the membrane crosses the ergosphere at x=0 in the spinning string coordinates.The membrane equation (τ+v_m(),r(),y_m()) in Eddington-Finkelstein coordinates is obtained by plugging (<ref>) into (<ref>). It becomes obvious from (<ref>) that there is no hope of solving the membrane intersection equation analytically. Note that because the membrane embedding is invariant under v-translations we only need to solve one equation to find the intersection with the geodesic. Namely :y_m(r_g(^*))=y_g(^*) ,where y_m denotes the interface embedding, and r_g, y_g are the coordinates of the spacelike geodesic whose equation can be obtained by composing (<ref>-<ref>) with (<ref>). For the crossing equation, we define the normal and tangent vectors (omitting the index j denoting the side):t_τ^μ = (1,0,0) ,t_^μ = (v'_m,j(),1/2√(+Mℓ^2),y_m,j'()) ,n_μ = 1/N(0,-y_m,j'(),1/2√(+Mℓ^2)) ,where N^2=n_μ n^μ, and n_μ t^μ_a=0.We can trivialize the crossing equations by expressing the geodesic tangent vectors in the basis (<ref>) : ẏ^ν_gi(^*) = a^τ_i t_iτ^μ + a^_i t_i^μ + b_i n^μ_i . With this parametrization, and for an affinely parametrized geodesic, the crossing equations reduce to :a_1^a a_1^b h^1_ab=a_2^a a_2^b h^2_ab ,where h^i_ab=t^ν_iat^μ_ibg^i_μν is the induced metric on the interface. By the matching equations, h^1_ab=h^2_ab, and since the metric is non-degenerate we deduce that the crossing conditions simply force a_1^a=a_2^a, b_1=b_2 ! In other words, expressed in the appropriate basis (<ref>), the geodesic tangent vector is unchanged upon crossing.With these equation in mind we applied the shooting algorithm described in the previous section which yields the numerical function p_f(p_1,ṗ_1). The last remaining step, which is the (numerical) inversion of this function is until now unsuccessful, as most of the time the numerical search diverges, for the reasons mentioned in the previous section. By randomizing the initial condition, we were not able to find geodesics of the type depicted in fig. <ref>, although we found crossing geodesics that were outside both apparent horizons. Nonetheless, this does not yet show that these peculiar geodesics do not exist. In fact, we noticed that as we move y_2 far from the interface boundary, only very precise initial conditions yield geodesics that cross the membrane close to r^+_2 (refer to fig.<ref>). 
Thus a random search would have a hard time finding those curves, which could explain why we have not stumbled upon one. The best way to search for such geodesics is actually to implement a hybrid of the shooting and minimization algorithms. One picks a point on the boundary, as well as a point on the membrane (which lies below one of the apparent horizons), and computes the geodesic connecting those two points analytically. Then, with the crossing condition, we obtain a shooting problem on the other side, and we iterate until this yields a geodesic with two boundary endpoints. This is currently being worked out.

§ QNEC IN ICFT SETUPS

The Quantum Null Energy Condition (QNEC) is an attempt to generalize the Null Energy Condition (NEC), which states that the stress-energy tensor of classical matter should obey T_μν(x)v^μ v^ν≥ 0 for any null vector v^μ and at any point x. Such conditions are introduced ad hoc in General Relativity in an attempt to restrict the allowable stress-energy tensors to "realistic" ones, so as to prevent the appearance of unphysical spacetime geometries (they are also an essential ingredient in many general theorems <cit.>), and thus they should hold for types of matter that we consider physical. As long as we consider only (reasonable) classical fields, this condition is satisfied.

Unfortunately, it is violated quantum mechanically when applied to ⟨ T_μν⟩ v^μ v^ν<cit.>, as quantum effects such as the Casimir energy can produce locally negative energy densities. To restore the validity of these inequalities, one usually considers an averaged version, obtaining in this case the Averaged NEC (ANEC) <cit.>:

∫_C ⟨ T_μν⟩ v^μ v^ν d≥ 0 ,

where the integration is done over an integral curve of the null vector field v^μ. The statement is that the inequality should hold for any null vector field and any of its associated integral curves C. The ANEC then has a chance of surviving quantum effects: these can produce local negative energy densities, but we expect the energy to remain positive once averaged, for stable systems. The big disadvantage is that the condition is no longer local, and thus becomes much less powerful.

For this reason, there have been efforts to formulate a QNEC, namely a quantum version of the NEC which retains the locality property. The bound should of course accommodate the fact that locally the energy can be negative. Such an inequality was conjectured by Bousso et al.<cit.>, proven using holographic techniques <cit.>, and later also derived directly with field theory techniques <cit.>. The QNEC is particularly interesting as it relates the energy density to variations of the entanglement entropy on an interval, which are not usually thought to be correlated quantities in non-gravitational systems. We will be interested in the case of a 2-dimensional CFT on flat space, in which the QNEC takes a more restrictive form<cit.>.

Consider a generic point x in the geometry, at which we want to write the QNEC. We begin by picking a spatial slice ending at x, possibly with another boundary at y (see fig. <ref>). By embedding this slice inside a Cauchy slice, we can associate with it an entanglement entropy, which we denote S(ρ,x,y), where ρ makes explicit the dependence on the state of the CFT. We can then compute the change in S(ρ,x,y) under variations of x along the lightlike directions. The QNEC is the inequality :

2π⟨ T_±±(x)⟩≥_±^2 S(ρ,x,y)+6/c(_± S(ρ,x,y))^2 .
In particular, notice that the term involving the central charge c is exclusive to CFTs, and makes the inequality stricter. For a generic QFT, this term is absent. The derivatives _± refer to variations of x, keeping y fixed. A consistency check is to verify that under a conformal transformation (which changes the energy density (<ref>)), the inequality retains the same form. Indeed, one can verify that the quantity on the RHS of (<ref>) has the same transformation properties as the stress-energy tensor<cit.>. Note also that (<ref>) is really a family of inequalities for ⟨ T_±±(x)⟩, and the strictest one is obtained by maximizing the RHS w.r.t. y.

Consider as an example the case of the vacuum state, ⟨ T_±±⟩ =0. Take any two spacelike separated points on the boundary, denoted x = (x^+,x^-), y=(y^+,y^-) in lightcone coordinates. They define a spacelike interval for which we compute the entropy. With a few boosts, we can generalize the formula (<ref>) to :

S( vac,x,y)=c/6log(|x^+-y^+||x^--y^-|/^2) .

One can easily verify that the bound of the QNEC is identically vanishing for (<ref>), for any choice of y (an explicit check is sketched below). Thus, for the vacuum state, the QNEC is saturated and becomes an equality. If one could determine whether the QNEC is saturated without an explicit check, then one could determine the state from the entanglement structure (or vice versa), obtaining a sort of "energy-entanglement" equivalence. This is reminiscent of bulk reconstruction, where in some sense the bulk geometry (and hence the bulk stress-energy tensor) emerges from the entanglement structure of the dual CFT<cit.>.

From the saturation of the QNEC in the vacuum state, one can then easily show that it will also saturate in any state that can be obtained by conformally mapping the vacuum. This follows directly from the fact that (<ref>) changes covariantly with conformal transformations. For instance, one can verify directly with (<ref>) that the steady state dual to the metric (<ref>) saturates the QNEC everywhere. With the same logic, one can push the reasoning much further. The QNEC will be saturated for any state, as long as the dual RT surfaces do not cross bulk matter. Indeed, without matter, the bulk metric is locally AdS_3, so we can find a change of coordinates which brings us to the Poincaré metric, at least for the portion of spacetime that the RT surface traverses. Then the computation follows as in the vacuum case, and the QNEC is saturated. In fact, one can quantify the deviation from saturation in the case of perturbative matter in the bulk<cit.>.

To be more precise, the aforementioned references look at the non-saturation of the inequality (<ref>) for some fixed y. However, we believe that the more interesting quantity to inspect would actually be the best possible QNEC bound, namely :

2π⟨ T_±±(x)⟩≥ Max_y(_±^2 S(ρ,x,y)+6/c(_± S(ρ,x,y))^2) .

When considering the best bound as in (<ref>), it is no longer clear that bulk matter necessarily spoils saturation. In that context, the geometries that we have built in chap. <ref> and <ref> are the perfect playground to test such questions, as they provide a very simple form of matter in the bulk, the thin membrane. Computing the QNEC in these geometries is also an important check of the thin-brane approximation. Indeed, for intervals whose associated RT surfaces cross the membrane, it is not obvious that the QNEC should hold generically. Were we to find cases where it is violated, it would indicate some breakdown of the holographic bottom-up model we considered.
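Before specialising to static geometries, let us record the promised explicit check that the vacuum entropy (<ref>) saturates the bound identically. A minimal symbolic sketch (Python with sympy; xp, xm, yp, ym stand for x^±, y^±, and the absolute values are dropped by assuming an ordered interval):

import sympy as sp

c, eps = sp.symbols('c epsilon', positive=True)
xp, xm, yp, ym = sp.symbols('xp xm yp ym', positive=True)

# vacuum entanglement entropy of the interval (x, y), with x^+ > y^+ and x^- > y^-
S = c/6*sp.log((xp - yp)*(xm - ym)/eps**2)

# right-hand side of the QNEC in the two null directions
for u in (xp, xm):
    rhs = sp.diff(S, u, 2) + 6/c*sp.diff(S, u)**2
    print(sp.simplify(rhs))   # -> 0: the vacuum saturates the bound for any y

The cancellation is nothing deeper than the fact that the second derivative of the logarithm is compensated by the square of its first derivative once weighted by the CFT coefficient 6/c.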
§.§ QNEC for static geometries

As we have learned throughout this chapter, computing (H)RT surfaces on our stitched geometries is generically not an easy task. Thus it is not surprising that computing the QNEC bound presents the same challenges. Once one has a robust algorithm to compute HRT surfaces anchored at arbitrary boundary points, the QNEC follows easily, but as of now such an algorithm is still lacking. We will thus focus on geometries that can be brought back to the vacuum Poincaré case, described in detail in sec.<ref>. Recall that for these geometries we have determined that the entanglement entropy S associated with a crossing RT surface on an interval bounded by x=(τ_x,_x) and y=(τ_y,_y) has the following structure :

S= c_i/6ln(_x/)+c_i/6ln(_y/)+g(ξ) , ξ = -(τ_x-τ_y)^2+(_x-_y)^2/4_x_y ,

where the c_i and g(ξ) should be chosen according to the location of the two points; see the formulas (<ref>) and (<ref>) for the specifics. We exploit the form (<ref>) to compute its lightlike derivatives without having to compute any more RT surfaces. We specialize to the "+" version of the QNEC, as the staticity of the state guarantees that both directions are equivalent. Consider first the case where _x is on side 1, and _y is on side 2. Expressing ξ and _x in terms of the lightlike coordinates w_±^x, we find :

^2 S/( w_+^x)^2+6/c( S/ w_+^x)^2= π(w_-^x-w_-^y)^2(w_+^x+w_+^y)^2/32 _x^4 _y^2[ g''(ξ)+12π/c_1g'(ξ)^2] .

In the case where _x is on side 2, one simply has to change c_1→ c_2. Notice that the bound is independent of the cutoff , and thus we can safely take the limit.

By scale invariance and time-translation invariance, let us begin by fixing _x=1, τ_x=0 on side 1, and consider (to reduce dimensionality) y =(τ_y=0,_y) on side 2. In the notation of sec. <ref>, we have μ=_y/_x. In our Euclidean construction, the natural variable is the angle , and thus, properly speaking, we have g(ξ)=g((ξ)). The formula (<ref>) gives us (μ), and we can trade μ for the cross-ratio through the equality 4ξ=μ-2+1/μ. This gives :

μ(ξ)=1+2ξ± 2√(ξ(1+ξ)) .

The two possible solutions give the same conclusion, as they amount to the inversion μ→ 1/μ, which is a symmetry of the problem. Thus, in the end, we have g(ξ)=g((μ(ξ))), where each of the functions is analytic, although extremely cumbersome. In fig. <ref> we plot the QNEC bound for the crossing geodesic, as we vary the endpoint y over the equal-time slice.

Reassuringly, the bounds obtained while varying the endpoint y are all negative, which means that in the vacuum state the ICFT model is at least consistent with the QNEC. The interesting point is that we find a specific y for which the bound is 0, meaning that (<ref>) saturates, and this despite the fact that the corresponding RT surface does cross bulk matter. The y in question is however rather special, as it is the point that lies symmetrically to x with respect to the interface (equivalently, the point for which ξ=0). The simple fact that (<ref>) can be saturated is actually unsurprising; indeed, in the interface models we consider one could always take y→ x, in which case the saturation is evident, as we can always avoid the membrane. Nonetheless, it is noteworthy that we managed to find a saturating point even in the case where the RT surface crosses bulk matter. Whether this saturation holds more generally, or is just due to the increased symmetry of the location of y, is yet to be determined.
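As an aside, the relation between μ and the cross-ratio used above can be checked symbolically. A short sketch (Python with sympy, in our notation):

import sympy as sp

xi = sp.symbols('xi', positive=True)

# the two branches mu(xi) = 1 + 2*xi ± 2*sqrt(xi*(1+xi)) quoted above
for sign in (+1, -1):
    mu = 1 + 2*xi + sign*2*sp.sqrt(xi*(1 + xi))
    # both should invert the cross-ratio relation 4*xi = mu - 2 + 1/mu
    print(sp.simplify((mu - 2 + 1/mu)/4 - xi))   # -> 0 for both branches

The two branches are exchanged by μ→ 1/μ, in agreement with the inversion symmetry mentioned above.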
Consider now the limit as the endpoint y approaches the interface from side 2 (x on side 1). We see the QNEC bound approaching 0, and this can be understood by assuming the smoothness of the entanglement entropy in this limit. Once y crosses the interface, the two points lie on side 1 and the RT surface is the trivial one, giving the saturating bound; as y approaches the interface, the QNEC bound therefore tends to this value. The story is different when y lies on side 1 (x on side 2). As y approaches the interface, the QNEC bound reaches a non-saturating value. Indeed, as y crosses the interface, the dominant RT surface is not the trivial one, but rather the double-crossing one depicted in fig. <ref>. Such RT surfaces do not in general saturate the QNEC bound, and hence neither does the limit of the bound as y tends to the interface.

To continue the analysis, the natural next step is to compute the QNEC for the two points x, y both lying on side 2. The computation goes similarly to what we have just shown, the only tricky step being the problems with the analytic form of (μ), see fig. <ref>. Again, we unfortunately have not finished the work needed to present clear results, so this is added to the list of works in progress. The same goes for the computation of the QNEC in more complicated states, which cannot be done without first having a firm grasp on the RT construction.

§ CLOSING REMARKS

This chapter was dedicated to presenting partial results regarding the entanglement structure of the various ICFT states obtained in the previous chapters. The initial goal of exploiting the Euclidean geometry construction of sec.<ref> and reducing all the entropy computations to this case was not achieved in general, which forced us to consider numerical methods. Nonetheless, we showed that this approach does allow full analytic control for static geometries in which the induced metric of the membrane is AdS_2. Efforts in the numerical direction were centered around the main idea of exploiting the AdS_3-locality of the considered spacetimes. By locally mapping them to Poincaré coordinates, we limit the need for numerical resolution to only a few points. For now, the results obtained are still incomplete, and the priority for future work will be to iron out the details that prevent the successful application of the algorithms. We believe that understanding the entanglement structure of the ICFT NESS will offer insights not only into the curious entropy production at the interface, but also into the properties of non-Killing, out-of-equilibrium horizons.

Finally, the evaluation of the QNEC bound in these systems follows almost automatically from the entanglement structure, so it is a natural quantity to consider. Besides being a consistency check, of particular interest is the question of its saturation. Indeed, some authors have claimed that for CFTs in D≥ 3 the QNEC is always saturated <cit.>. This fails in 2D, as we have noticed in our models, but it is a tantalizing question whether saturation still holds for the best bound (<ref>). This question deserves further study.

CHAPTER: CONCLUSION AND OUTLOOK

The main focus of the work presented in this thesis was the study of a simple model describing a minimal Interface CFT and its holographic dual, which in the limit of large N and large 't Hooft coupling reduces to Einstein gravity with a gravitating membrane.
The initial motivation to consider such a model was its intimate connection with the (then) recent progress on the black hole information paradox<cit.>. Indeed, the examples in which the Island formula can be applied usually involve Boundary CFT duals, which can be obtained as a limiting case of ICFT. The simpler case of the 3-dimensional bulk was initially introduced as a stepping stone in preparation for the study of full Supergravity solutions. However, the model proved to be much richer than expected, so much so that it became the main focus of this work. Furthermore, studying minimal models of this type has the advantage of yielding more generally applicable results. This broadens the scope of the results we obtain here to fields like condensed matter theory<cit.>, where minimal holographic models are often used to enable computations at strong coupling that are not easily accessible from the field theory side. Finally, having access to non-trivial holographic models which allow for analytic results provides a nice playground to test and further our understanding of AdS/CFT.

Aiming at applications to the black hole information problem, in chap. <ref> we considered the ICFT model at finite temperature, whose gravity duals include black holes. Surprisingly, the sole presence of the gravitating membrane produced a very rich phase diagram for this system. While our analysis was thorough within this minimal model, it identified several interesting directions that deserve future study. One key issue is the validity of the thin-brane approximation and how our results would fare in full top-down ICFT/gravitating wall pairs. Another approximation of our treatment is that we discarded bulk solutions involving membrane fusion (fig. <ref>), even though such configurations sometimes appeared naturally in cases where the membrane intersected itself. While they should not drastically alter the conclusions on the phase space, this more complete model could exhibit some interesting properties of the pair of interfaces. A related question concerns the "exotic fusion" geometries in which, as we bring the two interfaces together, they do not fuse into the trivial defect (fig. <ref>). Their mere existence is quite surprising<cit.>, and it might be related to the "membrane fusion" geometries mentioned above. This is because deeper regions of the AdS bulk are supposed to probe the IR region of the dual field theory. As we go to the IR, the two interfaces approach one another and eventually appear as one, "fusing". Thus, an allowable non-trivial membrane fusion in the bulk might signal the presence of a potential exotic interface fusion in the field theory.

One last problem that deserves future work is the elucidation of the meaning of the sweeping transitions (sec. <ref>) from the boundary theory. Our strong feeling is that they must be connected to the entanglement structure of the field theory, as that is the quantity related to bulk reconstruction. We have arrived at some conjectures in chap. <ref>, but they are still very tentative. Related to this problem is the role of the critical tension _0. It is still unclear whether this is simply a quantity that appears in the computations, or whether it carries a deeper meaning. In chap.<ref> we looked at an extension of the static states of chap. <ref> to more general non-equilibrium stationary states.
This is of interest both to confirm and extend the work that was already done from the field theory perspective <cit.>, and to generate tractable non-equilibrium black hole solutions, which are still poorly understood. These goals were met. First, our calculation confirmed the universality of the energy-transport coefficients established for ICFT in <cit.>. Secondly, it offers a much simpler way to compute these coefficients on the gravity side, compared to the original scattering calculations of <cit.>. Lastly, perhaps the least expected result was the discovery of the maximal entropy production at the interface and the analytical recovery of the deformed, non-Killing event horizon on the gravity side.

These results naturally lead to the fascinating question of the relation of the deformed horizon to entanglement entropy. We touched on this issue in chap.<ref> and hope to complete our partial results in the near future. The well-known Bekenstein-Hawking formula, which has been re-derived with the RT prescription in AdS/CFT, has mainly been applied to black holes with static Killing horizons. In this model, we have an example of an (analytically controlled) event horizon that is neither Killing nor coincident with its "apparent" counterpart. It is the perfect setup to elucidate the role played by these various horizons in holography. Another question that needs clarification concerns the thermal transmission of interface pairs. For a single interface, we recovered results previously obtained perturbatively<cit.>, but in the case of more than one interface two distinct phases appeared. When the membrane avoids the black hole and curves back to the boundary, the conductance suddenly becomes perfect. This behavior certainly points to some sort of coherent scattering, which lets all the thermal flux through as if there were no interfaces. The precise way in which this happens in the field theory is however unclear and deserves further study.

The final chapter <ref> ties the project together, as it brings us back to the initial goal of connecting our models to the Island computations. We have already mentioned that the computation of RT surfaces in the obtained geometries was the obvious next step in their study, and that is what is undertaken in this chapter. Based on the vacuum computations of <cit.>, which we present, we outline different angles (analytical and numerical) from which to tackle the computation of the sought-out RT surfaces. While some interesting results were obtained, we have not yet managed to reach the initial goal: showing the existence of horizon-probing RT surfaces in the non-equilibrium case, and understanding the entropy production at the interface. The main focus of future work will be the completion of these goals. Finally, another tangential direction that was mentioned is the computation of the QNEC bound for these same geometries. Its application to the ICFT model will cement the model's consistency, while potentially providing interesting insights into the necessary conditions for its saturation. The QNEC becomes much more interesting when one considers the Wick-rotated version of our models <cit.>, in which the interface now plays the role of a quench.
It could be interesting to see which restrictions it imposes if any, and whether it tells us something about the non Wick-rotated version.Although much was accomplished, much still remains to be worked out, as is always the case in physics.CHAPTER: MISC § EXTRINSIC CURVATUREConsider a surface of codimension one embedded in a manifold of dimension D. It is parametrized by (D-1) parameters denoted s^a. In a Lorentzian manifold, one should differentiate according to the nature of the hypersurface (lightlike, spacelike or timelike). Unless otherwise stated, we assume here that the surface is timelike. Generally, its equation can be written as x^μ = x^μ(s^a) ,where x^μ(s^a) are D functions specifying the surface's shape. We can then naturally define a set of D-1 tangent vectors to the surface :t_a^μ =x^μ/ s^a ,which allows us to define the induced metric :h_ab=g_μν t_a^μ t_b^ν ,where g_μν is the metric of the ambient spacetime. To define the normal covector n_μ, one can use the generalized cross-product :n_μ = 1/2_μν_1…ν_D-1^a_1… a_D-1t^ν_1_a_1… t^ν_D-1_a_D-1 ,where thesymbols are the levi-civita tensors corresponding to the metric g and h (with the appropriate normalisation with √(|g|)). Using (<ref>), the normal vector is normalized, n^μ n_μ = 1. (For a lightlike surface, n^μ n_μ = 0, and for a spacelike surface, n^μ n_μ = -1).In the case where the surface can be defined in terms of an equation f(x^μ)=0, there is a much simpler way to obtain the normal co-vector : n_μ = _ν f/√(_ν f ^ν f) , which is obtained by varying the defining equation along tangent directions. The normal vector of course satisfies : n_μ t^μ_a =0 . Using it, we can define the projection operator on the surface : Π^μ_ν = δ^μ_ν - n_ν n^μNote that formally, n_μ and hence Π^μ_ν are defined only on the surface x^μ(s^a). Thus they are not vector fields on the ambient manifold. One can however arbitrarily extend them to all of the spacetime; this allows us to more easily define operations involving them, and in the end, it will not matter since we will evaluate everything on the surface.The projection operator can be applied to any tensor to isolate its components tangent to the surface. For instance, applied on the metric :h_μν=Π^_μΠ^_ν g_=Π_μν .This is another way to see the metric (<ref>), as expressed in the embedding coordinates. One can verify we have of course h_ab=t_a^μ t_b^ν h_μν. Thus, the projection tensor and the surface metric are essentially interchangeable.We have all we need to define Extrinsic curvature : K_μν= Π^_μΠ^_ν_ n_(=Π^_μ_ n_ν). Although it is not immediately obvious, one can show that K_μν is symmetric. This comes from the fact that locally, we can always write (<ref>) by the implicit function theorem. In this form, n_μ is locally hypersurface orthogonal, namely it satisfies n_[ρ_μ n_ν]=0. Contracting this with n^ρ yields K_[μν]=0 as required. Since K_μν is a tensor defined only on the surface, it is easier to work in the s^a coordinates : K_ab=t^μ_a t^ν_b K_μν = t^μ_a t^ν_b _μ n_ν .To compute _μ n_ν, one has to extend the vector field n_ν to the full embedding space. 
Alternatively, expanding (<ref>) : K_ab=t^μ_a t^ν_b (_μ n_ν + Γ^ρ_μν n_ρ)=t^ν_b _a n_ν + t^μ_a t^ν_bΓ^ρ_μν n_ρ .With the notation (<ref>), one does not need to go through the trouble of extending n_ν; indeed, the embedding space derivative has been "absorbed" by t_a^μ, so that we need only take derivatives in the tangent directions, in which n_μ is well-defined.The trace of the extrinsic curvature that will enter the Gibbons-Hawking action is then simply K = K_abh^ab= K_μνh^μν. CHAPTER: THERMAL HOLOGRAPHIC INTERFACES § EXTRINSIC CURVATURE FORMULASIn this section we collect some formulas for the extrinsic curvature of the membrane, in the parametrization:x^μ_m(,τ)=(τ,√(-Mℓ^2),x_m()) ,where the order of coordinates is (t,r,x), and the metric (<ref>). The tangent and normal (co)vector then are :t_τ^μ = (1,0,0) ,t_^μ =(0,1/2√(-Mℓ^2),x_m'()) ,n_μ = 1/N(0,-x_m'(),1/2√(-Mℓ^2)) ,where N=1/ℓ√((r')^2/r^2 + (-Mℓ^2 + r^2) (x')^2) is the normalisation of the normal vector. Note that we write √(-Mℓ^2)=r()≡ r where useful.Formula (<ref>) gives us the Extrinsic curvature :K_ττ = r (r^2-Mℓ^2)x'()/ℓ√(r'^2ℓ^2/r^2+(r^2-Mℓ^2)x'^2)=r^2x'/ℓ×√(/g()) ,K_τ = 0 ,K_ = x' r'^2ℓ^2(3+Mℓ^2)+r x'(r x'^2+ℓ^2 r”)+1/2ℓ^2 x”/ℓ√( g()) ,where all r should be seen as function of . Note we used the shorthand g() defined in (<ref>). Then writing down the traced-reversed Israel condition (<ref>), for the ττ component, we find : r^2_1x'_1/ℓ_1+r^2_2x'_2/ℓ_2=-√( g()) ,which is indeed the equation (<ref>) used in the main text. TheIsrael condition is much more complex as seen from the lengthy formula (<ref>) for K_. Nonetheless, one can write it down and plug in the solution (<ref>), and after a computer-aided simplification, it indeed evaluates to 0=0, such that it is redundant with the other equation.§ RENORMALIZED ON-SHELL ACTIONWe reproduce here the Euclidean action of the holographic-interfacemodel, in units 8π G=1. It is the sum ofbulk, brane, boundary and corner contributions, (see <cit.> for an explanation of the corner term)S_ gr =-1/2 ∫_𝕊_1d^3x√(g_1)(R_1+2/ℓ_1^2)-1/2∫_𝕊_2d^3x√(g_2)(R_2+2/ℓ_2^2) + ∫_𝕎 d^2s√(h_m)-∫_∂𝕊_1 d^2s √(h_1) K_1 -∫_∂𝕊_2 d^2s √(h_2) K_2 +∫_ C (θ - π)√(h_c) + c.t., where the counterterms,abbreviatedabove by c.t.,read <cit.> c.t. = 1/ℓ_1∫_𝔹_1√(h_1) +1/ℓ_2∫_𝔹_2√(h_2)-∫_𝔹_1∩𝔹_2 (θ_1+θ_2)√(h_c) . We remind that 𝕊_j are thespacetimeslices whose boundary is the sum of the asymptotic boundary𝔹_j andof the string worldsheet 𝕄, i.e. ∂𝕊_j = 𝔹_j ∪𝕄. The induced metrics are denoted by the letter h. The K_jare the tracesof the extrinsic curvaturesoneachslice computed with the outward-pointing normal vector. Finally, in addition to the standardGibbons-Hawking-York boundary terms, one mustadd the Haywardterm<cit.> at corners of ∂𝕊_jdenoted by C. [These play no rolehere, but they can be important in the case of string junctions.]There is at least one suchcorner atthe cutoff surface,𝔹_1∩𝔹_2, where θ -π is the sum of the angles θ_j defined in figure <ref>. 
Let us break the action into an interior and aconformal boundaryterm, S_ gr = S_ int + S_𝔹, withthe former including contributions from the worldsheet 𝕄.Usingthe field equations R_j = - 6/ℓ_j^2 and K_1|_𝕄+K_2|_𝕄= 2, andthe volume elements that follow from(<ref>)and (<ref>, <ref>), √(g_j)d^3x =ℓ_j r_j dr_j dx_j dtand√(h_m)d^2s =√(fg)dσ dt , We can write the interior on-shell action as follows : S_ int= 2/ℓ_1∫_Ω_1 r_1dr_1dx_1 dt+ 2/ℓ_2∫_Ω_2 r_2 dr_2dx_2 dt - λ∫_𝕄√(fg) d σ dt.We have beencareful to distinguish the spacetime slice S_j from the coordinate chart Ω_j, because we will now use Stoke's theoremtreating Ω_j as part of flat Euclidean space,∑_j=1,22/ℓ_j∫_Ω_j r_j dr_jdx_j =∑_j=1,21/ℓ_j∮_∂Ω_j r_j^2 (r̂_j· dn̂_j), with dn̂_j dt the surface element on the boundary∂Ω_j, and r̂_j the unit vector in direction of increasing r_j. Crucially, the boundary of Ω_j mayinclude a horizon which is aregular interior submanifoldof the Euclidean spacetime and is not therefore part of ∂𝕊_j. In particular, there is no Gibbons-Hawking-Yorkcontribution there.The boundary integral in (<ref>) receives contributions from the threepieces of ∂Ω_1,2 : the asymptotic cutoff surface𝔹_1∪𝔹_2, the horizon if there is one, and the worldsheet 𝕄.Consider first the contributions from the asymptotic cutoff and horizon. The normal vector n̂_j, in this case, is simply ±r̂_j, so the contribution is proportional to the size Δ x of the boundary. For the membrane, the surface element in the Euclidean space Ω_j is :dn̂_j = (-x_j'(),r_j'()) ,where here this is the outgoing normal vector for one of the two membrane pieces. Both of them will contribute equally to the integral, we integrate on them in opposing directions, while also having opposing normal vectors. Combining this together yields : 1/ℓ_j∮_∂Ω_j r_j^2 (r̂_j· dn̂_j)=r_j^2 L_j/ℓ_j-r_h j^2Δ x_j|_Hor/ℓ_j-∫_∈Σr^2_j()x_j'()/ℓ_j .In the last term, Σ denotes the two "half-membranes", i.e. twice the interval ∈ [_+,∞].Very conveniently, this last term precisely cancels the thirdterm in(<ref>) by virtue of the Israel-Lanczos equation (<ref>). In this way, we fortunately do not have to bother with integrals on the brane worldvolume.Thus, after all the dust has settled,theaction can be written asthe sum of terms evaluated either at the black-hole horizon orat the cutoff. After integrating over periodic time (which simply contributes to a prefactor of 1/T in front of the expressions) the interior part of the action, (<ref>),readsS_ int=r_1^2 L_1/ℓ_1T-M_1ℓ_1Δ x_1|_ Hor/T+ r_2^2 L_2/ℓ_2T-M_2ℓ_2Δ x_2|_ Hor/T .If the slice 𝕊_j does not contain a horizon the corresponding contribution is absent, Δ x_1|_ Hor=0. We wrote r_j for the cutoff radius, which is to be sent to infinity at the end. Note that we replaced r_hj=M_jℓ_j^2, the horizon radius.We now turn to the conformal-boundary contributions from the lower line in the action (<ref>). For a fixed-r_jsurface, the outward-pointing unit normal expressed as a 1-form isn_j = dr_j /√(r_j^2 - M_j ℓ_j^2). Dropping the index j for simplicity, one finds after a little algebra (virtually identical to the computations of sec.<ref>): K_xx = K_tt = - r/ℓ√(r^2 - Mℓ^2)⟹√(ĝ) K =- 1/ℓ (2r^2 - Mℓ^2). Combining the Gibbons-Hawking-York terms and the counterterms gives S_𝔹 =1/ℓ_1 T ( r_1 √(r_1^2 - M_1ℓ_1^2) - 2r_1^2 +M_1ℓ_1^2)Δ x_1|_𝔹_1+(1 ↔ 2). 
Expandingfor largecutoff radius, r_j|_𝔹_j→∞, and dropping the terms that vanish in the limit we obtain S_𝔹 =1/ℓ_1 T (- r_1^2 +1/2 M_1ℓ_1^2)L_1+(1 → 2).Upon adding up(<ref>) and (<ref>) the leading divergent term cancels, giving the following result for the renormalized on-shell action:S_ gr= M_1ℓ_1/2 T(L_1 -2Δ x_1 |_ Hor) +M_2ℓ_2/2 T( L_2 -2Δ x_2 |_ Hor).We used here the fact thatΔ x_j|_𝔹_j = L_j,and that r_j^2 = M_jℓ_j^2 at the horizon whenone exists. We also used implicitlythe fact that for smooth strings the Hayward term receivesno contributionfromthe interior and is removed by the counterterm at the boundary. As a check of this on-shell actionlet us compute theentropy. Using our formula for the internal energy ⟨ E⟩= 1/2(M_1 ℓ_1 L_1 + M_2 ℓ_2 L_2), andS_ gr = F/T=⟨ E⟩/T- Swe find S=1/T(M_1 ℓ_1Δ x_1 |_ Hor + M_2 ℓ_2Δ x_2 |_ Hor), = 4π^2 T (ℓ_1Δ x_1 |_ Hor +ℓ_2Δ x_2 |_ Hor)= A( horizon)/4G .In the lower line we used the fact thatM_j = (2π T)^2 and r_j^ H = 2π T ℓ_j for sliceswith horizon, plus our choice ofunits 8π G=1. Thecalculation thus reproduces correctly theBekenstein-Hawking entropy. § OPENING ARCS ASELLIPTIC INTEGRALS In this appendix we express the opening arcs, (<ref>-<ref>), in terms of completeelliptic integrals ofthe first, secondand third kind, K(ν)= ∫_0^1 dy/√((1-y^2)(1-ν y^2)) ,E(ν) = ∫_0^1√(1-ν y^2) dy /√(1-y^2) , Π(u,ν) = ∫_0^1dy/(1- uy^2)√((1-y^2)(1-ν y^2)) . Consider the boundary conditions(<ref>). The other conditions (<ref>,<ref>) differ onlyby theconstant periods or horizon arcs, P_j or Δ x_j|_ hor. Insertingthe expression(<ref>) for x_1^' givesL_1 = - ∫_σ_+^∞ℓ_1dσ/(σ+M_1ℓ_1^2) (λ^2+ λ_0^2) σ+M_1-M_2/√( A σ(σ-σ_+)(σ-σ_-)) , and likewiseforL_2.The roots σ_± are given by (<ref>,<ref>). We assume that we are not in thecase [H2, H2] where M_1=M_2>0, nor in the fringecase σ_+ = - M_j ℓ_j^2 whenthe string goes through an AdS center. These cases will be treated separately. Separatingthe integral in two parts,and tradingthe integration variable σ fory,with y^2 := σ_+/σ, we obtain L_1 =- 2ℓ_1/√(A σ_+)[ M_1 - M_2/M_1ℓ_1^2∫_0^1 dy/√((1-y^2)(1-νy^2)),+((λ^2+ λ_0^2)- M_1 - M_2/M_1ℓ_1^2) ∫_0^1 y^2 dy/ (1-u_1y^2) √((1-y^2)(1-ν y^2))],where ν = σ_-/σ _+ and u_1 = - M_1ℓ_1^2/σ_+. Identifying the elliptic integrals finally givesL_1 =- 2ℓ_1/√(Aσ_+)[ M_1-M_2/M_1ℓ_1^ 2(K(ν) - Π(u_1, ν))+ (λ^2+ λ_0^2) Π(u_1, ν)] , and a corresponding expression for L_2L_2 =- 2ℓ_2/√(A σ_+)[ M_2-M_1/M_2ℓ_2^ 2 (K(ν) - Π(u_2, ν))+ (λ^2 - λ_0^2) Π(u_2, ν)] , with u_2 = - M_2ℓ_2^2/σ_+.The prefactors in (<ref>) diverge when M_1→ 0 but the singularity is removed by expanding Π(u_1, ν) around u_1=0. In this limitL_1(M_1=0) = -2ℓ_1/√(A σ_+)[ M_2/σ_-( E(ν)- K(ν))+ (λ^2 +λ_0^2)K(ν) ] , L_2(M_2=0) = -2ℓ_2/√(A σ_+)[ M_1/σ_-( E(ν)- K(ν))+ (λ^2 - λ_0^2)K(ν) ] ,withE(ν) the complete elliptic integral of the second kind. TheM_1=M_2>0geometriescorrespondto the high-temperature phasewhereM_j= (2π T)^2, σ_+=0 andσ_- =- (4π T λ)^2/A. The integrals (<ref>)simplify to elementary functions in this case. This seems to be related to the fact that in this case, the membrane metric becomes AdS_2.: L_1 - Δ_1^ Hor=- ℓ_1 (λ^2+ λ_0^2) /√(A|σ_- |)∫_0^∞ds/(s + a)√(s+1)=- ℓ_1 (λ^2+ λ_0^2) /√(A|σ_- |)2/√(1-a) arctanh(√(1-a)) ,with a = Aℓ_1^2/4λ^2. Using the expression (<ref>) for A, and going through the samesteps for j=2,the expression greatly simplifies :L_1 - Δ_1^ Hor = - 1/π Ttanh^-1( ℓ_1(λ^2 + λ^2_0)/2λ) , L_2 - Δ_2^ Hor = - 1/π Ttanh^-1( ℓ_2 (λ^2 -λ^2_0)/2λ). 
Interestingly,since Δ_2^ Hormust bepositive,T L_2 is bounded from below in the rangeλ < λ_0 as discussed in section <ref>. In the high-temperaturephase the on-shell action,(<ref>), readsI_ gr^ (high-T) =4 π^2 T[ -1/2(ℓ_1 L_1 + ℓ_2 L_2) +ℓ_1 (L_1-Δ_1^ Hor)+ℓ_2 (L_2-Δ_2^ Hor)] .Usingthe expressions (<ref>) and rearranging the arc-tangent functions gives I_ gr^ (high-T) := E/T- S=- 2π^2 T(ℓ_1 L_1 + ℓ_2 L_2)- logg_I ,wherethe interface entropyS=logg_I is given by (<ref>). By this re-arrangement we are thus able to recover the interface entropy in a somewhat roundabout way, which nonetheless confirms the correctness of our solutions.§ SWEEPING IS CONTINUOUSIn this appendix, we show that sweepingtransitions are continuous. We focus for definiteness on thesweeping of the j=2 AdScenterat zero temperature (all other cases work out the same). The transitiontakes place when μcrosses the critical value μ_2^*given by eq. (<ref>). Setting μ = μ_2^*(1 - δ) in expression (<ref>) gives f_2(μ) = ℓ_2/√(A)∫_s_+^∞ ds(^2-_0^2)(s-μℓ_2^2) + δ/(s - μℓ_2^ 2)√(A s ( s- s_+)(s-s_-)) ,= 2ℓ_2(^2-_0^2)/√(As_+)K(s_-/s_+) + ℓ_2δ/√(A) ∫_s_+^∞ds /(s-μℓ_2^2)√(s (s- s_+)(s-s_-))_J . The first term is continuous at δ=0, but the second requires some care because the integral Jdiverges. This is because forsmall δs_+- μℓ_2^2 = δ^2/4 ^2 μ_2^*+O(δ^3) , as one finds by explicit computation ofthe expression (<ref>). we set δ=0,J diverges near the lower integration limit. To bring the singular behavior to 0 we perform the change of variable u^2 =s -s_+, so thatJ = ∫_0^∞2 du/(u^2 +δ^2/4^2 μ_2^* )√( (u^2+s_+^*)(u^2 +s_+^*-s_-^*)) , where we kept only the leading order in δ,and s_±^* are the roots at μ=μ_2^*. Since s_+^* and s_+^* -s_-^* are positive and finite, the small-δ behavior of the integral is (after rescaling appropriately u)J=4 λ|μ_2^* |/|δ|√(s_+^*(s_+^*-s_-^*))∫_0^∞du/u^2+1_π/2+finite .Inserting in expression (<ref>) and doingsometedious algebra leads finally to a discontinuityofthe function f_2(μ) equal to sign(δ) π/√(μ_2^*).This is precisely what is requiredfor L_2, (<ref>), to becontinuous whenthe red (j=2) slice goes from typeE1 at negative δ to typeE2 at positive δ. This, however, does not tell us anything about the continuity of higher derivatives of L_2(μ). The expansions that one needs to make become exponentially bigger, and it becomes nigh impossible to keep track of all the terms, as even Mathematica struggles to perform the simplifications. However, we made a numerical analysis with several numerical parameters. What this analysis revealed is that the continuity of the Free energy seems to extend at least to the third derivative (going deeper produced numerical instabilities which made it hard to conclude). For this reason, it is safe to say that the transition is completely smooth in terms of Free energy and its derivatives. It is thus a phase transition of another nature.§ BUBBLES EXISTWe show here that the bubble phenomenon of section <ref> is indeed realized in a region of the parameter space of the holographic model.This is the region of non-degenerate gravitational vacua (ℓ_2 strictly bigger than ℓ_1) and a sufficiently light domain wall. Specifically,wewill show that forλ close to its minimal value,λ_ min,the arc L_1(μ=0) is negative, so the wall self-intersects and μ_0 is necessarily finite. Letλ =λ_ min(1+δ)with δ≪ 1. Setting μ=0 and expanding eqs. 
(<ref>) for smallδ givesA = 8 λ_ min^2 δ/ℓ_1ℓ_2 +O(δ^2),s_+ =ℓ_2/4 λ_ min+O(δ), s_-= - ℓ_1 / 2 λ_ minδ +O(1) .Plugging into (<ref>)with M_2 = μ M_1≈ 0 we find :√(| M_1|) L_1 =-2 /ℓ_1 √(A s_+ )[K(s_-/s_+) +(1-2ℓ_1/ℓ_2) Π(ℓ_1^2s_+, s_-/s_+)] ,where we have only kept leading orders in δ. Now we need the asymptotic formof the elliptic integrals when their argument divergesK[-a/δ] ≈Π[u ,-a/δ] ≈ -ln(δ)√(δ)/2√(a)+O(√(δ)),for δ→ 0_+ with u, a fixed. Using a = 2ℓ_1/ℓ_2 finally gives √(| M_1|) L_1 ≈(ℓ_2/ℓ_1-1)^1/2ln(δ) +subleading . Forδ≪ 1 this is negative, proving our claim. Note that we took the green slice to be of typeE2, as follows from our analysis of the sweeping transitions for light domain walls – seesection <ref>.CHAPTER: STEADY HOLOGRAPHIC INTERFACES § EXTRINSIC CURVATURE FORMULASIn this section we collect some formulas for the extrinsic curvature of the membrane, in the parametrisation :x^μ_m(,τ)=(τ+f(),√(-Mℓ^2),x_m()) ,where the order of coordinates is (t,r,x), and the metric (<ref>). The tangent and normal (co)vector then are :t_τ^μ = (1,0,0) ,t_^μ =(f'(),1/2√(-Mℓ^2),x_m'()) ,n_μ = 1/N(0,-x_m'(),1/2√(-Mℓ^2)) ,where N=√((r')^2/r^2(/h(r)) + h(r) (x')^2/ℓ^2) is the normalisation of the normal vector. Note that we write -Mℓ^2=r^2()≡ r^2 where useful, and we used the definition (<ref>) for h(r).Note that the tangent vector is independent from f(), this is expected as it does not change the geometry of the membrane embedding. It will however have an impact on the matching conditions.Formula (<ref>) gives us the Extrinsic curvature :K_ττ = r x'() h(r)/ℓ^2√(r'^2/r^2 h(r)+h(r)x'^2/ℓ^2)=r^2 x'() h(r)/ℓ√()√(|ĝ|) ,K_τ = r f'()x'()h(r)/ℓ^2-Jℓ x'()/2rh(r)/√(r'^2/r^2 h(r)+h(r)x'^2/ℓ^2)=r^2 x' h(r)f'/ℓ-Jℓ^2 x'/2h(r)/√()√(|ĝ|) ,where all r should be seen as function of . Note we used the shorthand ĝ= det(ĝ) defined in (<ref>). We did not include the equation for K_ as it is very cumbersome, and is not used anywhere in the main-text.Then writing down the traced-reversed Israel condition (<ref>). We indeed find (<ref>) and (<ref>) as in the main-text. As for the () equation, it can be shown with alot of handwork that it is indeed satisfied when the other two are. § HORIZON INEQUALITIESIn section <ref> weassertedthatBTZgeometrieswhose ergoregions can beglued together bya thin brane obey the inequalitiesσ_+^ H1 >σ_+^ H2 if M_1 > M_2 , σ_+^ H2 <σ_+^ H1 ifM_1< M_2 , where the horizon locations are σ_±^ Hj =- M_jℓ_j^2/2±1/2√(M_j^2 ℓ_j^4 -J^2 ℓ_j^2 ) ,and J≡| J_1| = | J_2| >0. This ordering of the outer horizons ismanifest if one expands at theleading order forsmall J. Wewant to showthat it isvalidfor allvalues of J. If as J is cranked upthe ordering was at some point reversed, then we would have σ_+^ H1 = σ_+^ H2, or equivalently M_2ℓ_2^2 - M_1ℓ_1^2 =√( M_2^2 ℓ_2^4 - J^2ℓ_2^2) - √( M_1^2 ℓ_1^4 - J^2ℓ_1^2) . Squaring twice to eliminate the square roots givesJ^2 = 4ℓ_1^2ℓ_2^2(M_1-M_2)(M_2ℓ_2^2-M_1ℓ_1^2)/(ℓ_2^2-ℓ_1^2)^2 .Without loss of generality weassume, as elsewhere in the text, that ℓ_1 ≤ℓ_2. If M_2>M_1, then automatically M_2ℓ_2^2 > M_1ℓ_1^2 and (<ref>) has no solution for real J. In this case the ordering(<ref>) cannot be reversed. If on the other hand M_1>M_2 and M_2ℓ_2^2-M_1 ℓ_1^2 >0we need towork harder. Inserting J^2from(<ref>)back in the original equation (<ref>) gives after rearrangements(ℓ_2^2-ℓ_1^2) (M_2ℓ_2^2-M_1 ℓ_1^2)= ℓ_2^2| (M_2ℓ_2^2-M_1ℓ_1^2) -ℓ_1^2(M_1-M_2) | , - ℓ_1^2| (M_2ℓ_2^2-M_1ℓ_1^2) -ℓ_2^2(M_1-M_2) |,where the absolute values come from the square roots. 
This equation is not automatically obeyed whenever its doubly-squared version is. A solutiononly exists if M_1-M_2≤ℓ_2^2M_2-ℓ_1^2M_1/ℓ_2^2⇔M_1/M_2≤2ℓ_2^2/ℓ_2^2+ℓ_1^2 .Remember nowthat we onlycare about solutions with walls in the ergoregion, for which M_1-M_2=J, see eq.(<ref>). Pluggingin (<ref>) this givesM_2 = [ 1 - 4_0^2 lam^2/_0^4 +4 ^2/ℓ_1^2] M_1,with λ_0^2 = (ℓ_2^2 - ℓ_1^2)/ℓ_1^2ℓ_2^2, see eq.(<ref>). Consistencywith the bound (<ref>) for a brane tension in theallowed range then requiresλ_ min < ≤ℓ_1_0^2/2 ,where λ_ min = (ℓ_2-ℓ_1)/ℓ_1ℓ_2. As onecan easily check,this impliesℓ_1 > ℓ_2 whichcontradicts our initial assumption. We conclude that (<ref>) has no solution, and the ordering (<ref>) holds for all J. For completeness, let us also consider the ordering of the inner horizons. Clearly σ_+^ Hj >σ_-^ Hj always, and for small Jalsoσ_+^ H1 >σ_-^ H2 and σ_+^ H2 >σ_-^ H1. To violatethese last inequalities we needσ_+^ H1 = σ_-^ H2 or σ_+^ H2 = σ_-^ H1 for some finite J, or equivalently M_2ℓ_2^2 - M_1ℓ_1^2 =∓( √( M_2^2 ℓ_2^4 -J^2ℓ_2^2) + √( M_1^2 ℓ_1^4 - J^2ℓ_1^2) ) .Squaring twice gives backeq.(<ref>) which has no solution if M_2>M_1.But ifM_1>M_2 and M_2ℓ_2^2-M_1 ℓ_1^2 >0,solutions to σ_+^ H2 = σ_-^ H1 cannot be ruled out. Indeed, inserting J from (<ref>) in (<ref>) with the + sign gives (ℓ_2^2-ℓ_1^2) (M_2ℓ_2^2-M_1 ℓ_1^2)= ℓ_2^2 | (M_2ℓ_2^2-M_1ℓ_1^2) -ℓ_1^2(M_1-M_2) | ,+ ℓ_1^2| (M_2ℓ_2^2-M_1ℓ_1^2) -ℓ_2^2(M_1-M_2) |,which requires thatℓ_2^2M_2-ℓ_1^2M_1/ℓ_2^2≤ M_1-M_2≤ℓ_2^2M_2-ℓ_1^2M_1/ℓ_1^2 . These conditions arecompatible withM_1-M_2=J andin the allowed range, so the outer horizon of slice 2 need notalways come beforethe Cauchy horizon of slice 1. Finally one mayask if the inner (Cauchy)horizons can joincontinuously, i.e. if σ_-^ H1 =σ_-^ H2 is allowed.A simple calculation shows that this is indeed possible for ℓ_2/ℓ_1 <3, a critical ratio of central charges that alsoaroseinreferences <cit.>. We don't know if this is a coincidence, or if some deeper reason lurks below. § BACKGROUND ON FLOWINGFUNNELSIn this appendix, we collect someformulae on theflowing funnels discussedin section <ref>. We start with the most general asymptotically-locally-AdS_3solution inFefferman-Graham coordinates,generalizingthe Banados geometries to arbitrary boundary metric, see <ref>, but here g_(0) will be arbitrary instead of flat : ds^2 = ℓ^2dz^2/z^2 + 1/z^2 g_αβ(x, z)dx^α dx^β ,where g_αβ is a quartic polynomial in z (written here as a matrix) g(x,z)=g_(0) + z^2 g_(2) +z^4/4 g_(2)g_(0)^-1g_(2) .In this equationg_(0) is the boundary metric and g_(2) isgiven by (see sec.<ref> for more details : g_(2) αβ= ℓ^2/2 R_(0) g_(0)αβ +ℓ⟨ T_αβ⟩ ,whereR_(0) is theRicci scalarof g_(0), and ⟨ T_αβ⟩ the expectation value of the energy-momentum tensor.This must beconserved, ∇^a_(0)⟨ T_ab⟩ = 0, and should obey thetrace anomalyequation g_(0)^ab⟨ T_ab⟩ = -(c /24π) R_(0).Wemay takethe boundary metric to bethat of the Schwarzschildblack hole(this differs from the metric in <cit.>, but since it is not dynamical we are free tochoose our preferred boundary metric),ds^2_(0) =-f(x)dt^2 + dx^2/f(x) with f(x) = x /x + a.The horizon at x= 0hastemperature Θ_S = (4π a)^-1.Using the familiar tortoise coordinates we can write ds^2_(0) = f(x) (-dt^2 + dx_*^2) where x_*=x + a log x. 
Let w^± = x_* ± t. The expectation value of the energy-momentum tensor in the black-hole metric can be expressed in terms of ϕ = log f(x) as follows: ⟨ T_±±⟩ = ℓ/2 [∂^2_±ϕ - 1/2 (∂_±ϕ)^2] + k_±(w^±), ⟨ T_+-⟩ = -ℓ/2 ∂_+∂_-ϕ, with k_± arbitrary functions of w^± that depend on the choice of state. At x ≫ a, where the metric is flat, k_± determine the incoming and outgoing fluxes of energy. In a stationary solution, these must be constant. If a heat bath at temperature Θ_+ is placed at infinity, k_+ = π^2ℓΘ_+^2. The function k_-, on the other hand, is fixed by requiring that there is no outgoing flux at the Schwarzschild horizon. From ℓ/2 [∂^2_±ϕ - 1/2 (∂_±ϕ)^2] = -ℓ(a^2 + 4ax)/16(x+a)^4, we deduce ⟨ T_--⟩|_x=0 = 0 ⟹ k_- = ℓ/16a^2 = π^2ℓΘ_S^2. The outgoing flux at infinity is thermalized at the black hole temperature, as expected. Inserting the expressions (<ref>-<ref>) in (<ref>) and (<ref>) gives the flowing-funnel metric in Fefferman-Graham coordinates. These are, however, singular coordinates, not well adapted for calculating the event horizon, as shown in <cit.>. Following this reference, one can compute the horizon by going to BTZ coordinates – this is possible because all solutions are locally equivalent in three dimensions. The change from any metric (<ref>)-(<ref>) to local BTZ coordinates has been worked out in ref. <cit.> (see also <cit.>) and can be used to compute the black-funnel shapes. A noteworthy feature is that the funnels start vertically inwards at x = 0 <cit.>, making a delta-function contribution to the area density. Note that figure <ref> shows two independent flowing funnels with Schwarzschild temperatures Θ_S = Θ_1^eff and Θ_2^eff.

CHAPTER: RT SURFACES AND ENTANGLEMENT ENTROPY

§ EUCLIDEAN CONSTRUCTION FOR SINGLE-CROSSING GEODESIC

We use fig. <ref> and aim to express the depicted points as functions of the position of O_2 = (,0), of φ and of the angles θ_i. The basic equation we will use is the law of sines inside various triangles. We will work with fig. <ref> as a reference, but if one is careful the formulas are applicable also to situations not depicted by the figure, for instance when  > 0. When writing something like O_1O_2, we mean the length of the segment [O_1,O_2]. This length could become negative in the formulas we will give, signaling that O_1 and O_2 cross. This is always accompanied by a sign change in the opposing angle, so that the law of sines still works. By the triangle (O_2,O,X), we have: O_2X/sin(θ_2) = OO_2/sin(θ_2-ϕ) ⇔ O_2X = (-)sin(θ_2)/sin(θ_2-ϕ), where we use OO_2 = -. Then we find OO_1 using the triangle (O,O_2,O_1): OO_1 = OO_2 sin(π-ϕ)/sin(ϕ-θ_1) = (-)sin(ϕ)/sin(ϕ-θ_1). The last piece we need is O_1X, using the triangle (O_1,X,O): O_1X = OO_1 sin(π-θ_2+θ_1)/sin(θ_2-ϕ). With that in mind we can compute the anchor points _1, _2, as well as OX, which we should require to be positive: O_2 = O_2X - OO_2 = (-)(sin(θ_2)/sin(θ_2-ϕ) - 1), O_1 = OO_1 + O_1X = (-)sin(ϕ)/sin(ϕ-θ_1)(1 + sin(θ_2-θ_1)/sin(θ_2-ϕ)), OX = (-)sin(ϕ)/cos(θ_2-ϕ). We can re-express these in terms of the initial angles with the identities θ_2 = π/2 + ψ_2 and θ_1 = ψ_1 + ψ_2: O_2 = (-)(cos(ψ_2)/cos(ψ_2-ϕ) - 1), O_1 = (-)sin(ϕ)/sin(ϕ-ψ_2-ψ_1)(1 + cos(ψ_1)/cos(ψ_2-ϕ)), OX = (-)sin(ϕ)/cos(ϕ-ψ_2). Then, replacing ϕ-ψ_2 ≡ α, we obtain the equation (<ref>) used in the main text. We would now like to compute the length of this geodesic. We will apply the formula (<ref>) to the two pieces composing the geodesic. When the geodesic is anchored at the boundary, we cut it off at z_i = _i. Denoting R the radius of the semi-circle, the initial angle θ_0 is given by: _0/R = sin(θ_0) ≈ θ_0.
Thus, because of the presence of the cutoffs, the length of the geodesic does depend on its radius, unlike what was suggested by the formula (<ref>). The semi-circle going from _2 → X has radius R_2 given by: R_2 = O_2_2 = (-)cos(ψ_2)/cos(α). The starting angle for this portion is given by the cutoff _2, while θ_f = ϕ. This gives the length L_2 of this portion: L_2 = ℓ_2 ln(2R_2 sin(α+ψ_2)/_2(1+cos(α+ψ_2))) = ℓ_2 ln(2R_2/_2 tan((α+ψ_2)/2)), keeping only the leading order in _2. The other semi-circle from X → _1 has radius R_1: R_1 = O_1X = (-)cos(ψ_1)sin(α+ψ_2)/sin(α-ψ_1)cos(α), while the initial and final angles are θ_0 = α-ψ_1 and θ_f = _1/R_1. Then its length L_1 is: L_1 = ℓ_1 ln(2R_1(1+cos(α-ψ_1))/_1 sin(α-ψ_1)) = ℓ_1 ln(2R_1/_1 · 1/tan((α-ψ_1)/2)). The full length of the geodesic is obtained simply by summing the two contributions, L = L_1 + L_2. After some algebra and rearranging, we obtain the formula (<ref>) used in the main text.

§ EUCLIDEAN CONSTRUCTION FOR DOUBLE-CROSSING GEODESIC

We use fig. <ref> to express the depicted points as functions of the position of O_2 = (,0), of φ and of the θ_i. As in the previous section, this is done by repeated use of the law of sines. Beginning with the triangle (O,O_1,O_3), we have: OO_2 = (-)sin(ϕ)/sin(θ_1-ϕ). With the triangle (O,O_2,O_3): OO_3 = OO_2 sin(2θ_2-θ_1-ϕ)/sin(2θ_2-ϕ) = -sin(ϕ)sin(2θ_2-θ_1-ϕ)/sin(θ_1-ϕ)sin(2θ_2-ϕ). To find _2, we need the radius of the semi-circle centered at O_3. We use the triangle (O,O_3,X_2): O_3X_2 = OO_3 sin(θ_2)/sin(θ_2-ϕ), where OO_3 is given by (<ref>). Then O_2 = OO_3 + O_3X_2: O_2 = OO_3(1 + sin(θ_2)/sin(θ_2-ϕ)). Similarly, O_1 = O_1_1 - OO_1 = O_1X_1 - OO_1. We use the triangle (O,O_1,X_1): O_1X_1 = (-)(1 - sin(θ_2)/sin(θ_2-ϕ)), O_1 = (-)(sin(θ_2)/sin(θ_2-ϕ)). Collecting these expressions and rewriting them in terms of the angles ψ_i: O_1 = (-)(cos(ψ_2)/cos(ψ_2-ϕ) - 1), O_2 = (-)sin(ϕ)/sin(ψ_1+ψ_2-ϕ) · sin(ϕ+ψ_1-ψ_2)/sin(ϕ-2ψ_2) · (1 + cos(ψ_2)/cos(ψ_2-ϕ)), which yield the formulas (<ref>) upon the change of variables α = ϕ-ψ_2. In what follows, we express everything in terms of α instead of ϕ. For the length of the geodesic, we must now compute the lengths of three separate segments, using the formula (<ref>). Some extra angles need to be determined on fig. <ref>, but we do not detail that here. Consider first the segment connecting _1 → X_1. Its radius is given by R_1: R_1 = O_1X_1 = (-)cos(ψ_2)/cos(α). The starting angle θ_0 is given by the cutoff _1, while the ending angle is θ_f = α+ψ_2. This gives a contribution to the length: L_1 = ℓ_2 ln(2R_1/_1 tan((α+ψ_2)/2)). For the second segment X_1 → X_2, the radius of the geodesic is irrelevant. The initial angle is θ_0 = π-(α+ψ_1), and the final one is θ_f = π-(ψ_1-α). This gives the contribution L_3: L_3 = ℓ_1 ln(tan((α+ψ_1)/2)/tan((ψ_1-α)/2)). For the last segment, the radius R_3 is: R_3 = O_3X_2 = OO_3 sin(θ_2)/sin(θ_2-ϕ) = -sin(α+ψ_2)sin(α+ψ_1)/sin(ψ_1-α)sin(α-ψ_2) · cos(ψ_2)/cos(α). The corresponding length L_2 is then: L_2 = ℓ_2 ln(2R_3/_2 tan((α-ψ_2)/2)). Adding the contributions of the L_i gives the length of the full geodesic. After rearranging, we obtain (<ref>) used in the main text.
In Poincaré space, this geodesic has two anchor points on the boundary. However, this does not necessarily translate to Finkelstein coordinates. To see this, it is useful to consider the inverse of (<ref>) (for J>0): r^2 = r_+^2 + (w_+ w_-)(r_+^2 - r_-^2)/z_p^2, v = ℓ/2(1/r_+-r_- ln(w_+(r+r_-)/r+r_+) - 1/r_++r_- ln(w_-(r+r_-)/r-r_+)), y = ℓ/2(1/r_+-r_- ln(w_+(r+r_-)/r+r_+) + 1/r_++r_- ln(w_-(r+r_-)/r-r_+)), where the r appearing on the RHS should be thought of as depending on w_±, z_p. When using Poincaré coordinates to compute geodesics, we must keep in mind the region of Poincaré space which describes Finkelstein space, namely w_+ > 0. Thus, when a geodesic is of the type K_+K_- > 0, either it remains in the region w_+ > 0, and is then doubly anchored in Finkelstein space (see fig. <ref>), or it reaches the w_+ = 0 boundary. In the latter case, by (<ref>), the geodesic escapes at (v → -∞, r = r_+, y → -∞). As such, it closely follows the apparent horizon towards the "coordinate" horizon, both in time and space. Note that the divergence is such that v/y ≈ r_+/r_-. In the context of a pure CFT, these geodesics do not contribute to the RT prescription, as they have only one anchor point. In ICFT, they might be "stopped" by the membrane and, upon crossing to the other side, head back to the boundary, so it is plausible that they play a role in this case. By (<ref>), the outer horizon of the black hole is mapped to w_- = 0. Thus, one might argue that it should be possible to enter the horizon of the black hole and head back to the boundary: a geodesic in Poincaré space that dips into the w_- < 0 region and heads back to the boundary should generate such a horizon-probing geodesic in Finkelstein coordinates. However, from (<ref>), the curves w_±() are obviously monotonic, ruling out this possibility. Thus, if a geodesic crosses into the region w_- < 0, it will remain there and will be unable to leave the black hole. Its precise fate depends on the geodesic: it could again escape at infinity before reaching the singularity if it reaches w_+ = 0; otherwise, it will reach r = 0 as z_p → 0, as seen from (<ref>). Again, these geodesics are irrelevant in the context of a pure CFT, but might be very important in the context of ICFT. If they intersect the membrane on their way to the singularity, that could give rise to geodesics of the type depicted in fig. <ref>. Consider now the case K_+K_- < 0, see sec. <ref>. Now, as explained in the main text, even in Poincaré coordinates we are in the presence of two disconnected branches, each with one anchor point on the boundary. Considering them as separate geodesics, we look at the mapping of one of the two branches to Finkelstein space. The discussion is very similar; namely, we must pay attention to the important surfaces w_± = 0 in Poincaré space. However, in this case, as the affine parameter approaches 0 (we assume we begin with _0 < 0), we will have w_± → -Sign(w_∓)∞. As such, we are assured to either enter the horizon at w_- = 0 or escape at infinity at w_+ = 0. Indeed, since in this case the signs of ẇ_± are opposite, one of the two situations must happen. We do not comment on the last case K_+K_- = 0, as it is irrelevant to the numerical algorithms. Nothing special happens with respect to the two other cases; the same reasoning applies.
http://arxiv.org/abs/2310.18521v1
{ "authors": [ "Vassilis Papadopoulos" ], "categories": [ "hep-th", "gr-qc" ], "primary_category": "hep-th", "published": "20231027223932", "title": "Membranes, holography, and quantum information" }
Both authors contributed equally to this research. Tongji University, Shanghai, China, [email protected] [1] Tongji University, Shanghai, China, [email protected] [1] Tongji University, Shanghai, China, [email protected] [1] Tongji University, Shanghai, China, [email protected] Tongji University, Shanghai, China, [email protected] Corresponding author. Tongji University, Shanghai, China, [email protected]

Bundle generation aims to provide a bundle of items for the user and has been widely studied and applied to online service platforms. Existing bundle generation methods mainly utilize the user's preferences from historical interactions in the common recommendation paradigm and ignore the potential textual query that expresses the user's current explicit intention. Consider a scenario in which a user proactively queries a bundle with a natural language description; the system should then be able to generate a bundle that exactly matches the user's intention through the user's query and preferences. In this work, we define this user-friendly scenario as the Query-based Bundle Generation task and propose a novel framework, Text2Bundle, that leverages both the user's short-term interests from the query and the user's long-term preferences from the historical interactions. Our framework consists of three modules: (1) a query interest extractor that mines the user's fine-grained interests from the query; (2) a unified state encoder that learns the current bundle context state and the user's preferences based on the historical interactions and the current query; and (3) a bundle generator that generates personalized and complementary bundles using reinforcement learning with specifically designed rewards. We conduct extensive experiments on three real-world datasets and demonstrate the effectiveness of our framework compared with several state-of-the-art methods.

Text2Bundle: Towards Personalized Query-based Bundle Generation Zhihua Wei January 14, 2024 ===============================================================

§ INTRODUCTION

Bundles are ubiquitous in real-world scenarios, including fashion outfits on the e-commerce platform Taobao, music playlists on NetEase, and game packages on Steam. A bundle is generally defined as a collection of items that are complementary or similar and can be consumed as a whole. Due to these characteristics, platforms are able to deliver highly relevant and satisfactory content to users while saving their time and effort, which also brings additional commercial benefits. Typically, in the bundle recommendation scenario, fixed or predefined bundles assembled by human expertise or non-personalized data mining methods are recommended <cit.>. Some advanced methods <cit.> are able to recommend personalized bundles according to the interaction histories of users. In the search scenario, by contrast, there is no method or paradigm for returning a bundle-level response. In the recommendation scenario, without a user's proactive natural language input, the bundles generated from the user's historical interactions may fail to meet some user-specific demands. Firstly, the user's intention is time-sensitive, and interaction-based bundle generation methods cannot cope with the user's interest shifts over time. Secondly, without a user-defined instruction that specifies the context, attributes, and constraints of the bundle, the bundle generation result would be uncontrollable.
Regarding the search scenario, there are also limitations in the current item-retrieval paradigm. The user's textual query could be vague or abstract, yet the search results may match the query so closely that the retrieved items are homogeneous and only a few of them would be selected by the user. Meanwhile, the user may prefer to see a bundle-level result, which is not supported by modern search systems. Merely returning predefined bundles cannot satisfy the user in terms of personalization and controllability. Therefore, query-based bundle generation is a significant task that needs to be tackled but has not yet been investigated. In this paper, we mainly focus on this novel bundle generation task, which we refer to as the Query-based Bundle Generation (QBG) process. As shown in Fig. <ref>, when a user is looking for a clothing bundle with the descriptive text "I want some outfits to wear on a beach vacation.", a system relying only on the user's historical interactions (Fig. <ref>.a), such as a handbag, a suit, and leather shoes, would generate related items such as a tie, a suit, suit pants, and leather shoes, which are far from the user's interest in a beach vacation. Alternatively, the system could search for items matching the query (Fig. <ref>.b), but the user would then need to select several items by himself, which is not user-friendly, and other desired items might not be listed. In our work (Fig. <ref>.c), the recommender system instead generates a bundle for the user based simultaneously on the user's intention inferred from the query and on the user's preferences, such as the attribute "men only", distilled from the interaction history. Previous studies on bundle generation rarely rely on the user's textual query but are simply based on the user's interactions, focusing on techniques to generate bundles, for instance, using graph neural networks to represent bundles as graphs or using reinforcement learning to learn the optimal bundle combination strategy. In the new scenario that involves the user's query, the user's current short-term interests can be inferred from the query, along with the user's long-term interests reflected by the user's historical interactions. This suggests the new scenario could surpass the previous ones, because both short-term and long-term user interests can be leveraged to generate personalized results for the user. Naturally, to handle this scenario, we summarize the following three main research challenges: * How to mine the user's fine-grained interests? The user's query is in text form; a straightforward approach is to encode the text with a pre-trained language model. However, the embedding of the whole sentence may fail to represent the rich fine-grained interests inside the query. For example, directly encoding the user's query in Fig. <ref> would likely result in a vague interest such as "beach outfits", whereas the user would also like "sunglasses" to be included in the expected bundle, which may not be captured by "beach outfits". * How to generate a personalized bundle? It is common for a user to have different preferences for various aspects such as colors, styles, and brands.
These preferences can be inferred from the historical item interactions of the user. To generate a personalized bundle, it is essential to utilize these preferences effectively. Otherwise, the system may recommend an unsuitable bundle that matches the user's current intention but deviates from the user's preferences. * How to generate a qualified bundle? Items in a qualified bundle should be complementary to each other. For instance, a bundle with an iPhone may also include a pair of earbuds, but not an Android phone. Hence, we need to devise an effective method to measure the relations among items in a bundle. To address these challenges, we propose a novel method named Text2Bundle, which employs reinforcement learning and a large language model (LLM) for bundle generation. Specifically, we utilize a generative LLM to extract the fine-grained interests from the user's query, owing to its remarkable ability to infer the user's intention with extended knowledge, which may be challenging or complicated for a conventional language model. To obtain representations of users and items efficiently, we derive the ID embeddings from a LightGCN pretraining procedure and the text embeddings from language-model encoding. Moreover, we introduce a unified state encoder to incorporate the short-term interests, long-term interests, and current bundle state from both the interaction and text modalities. Subsequently, our model selects candidate items step by step based on the current state representation, with rewards considering the personalization, complementarity, and fine-grained interest coverage of the current bundle. Thus, the system can deliver a personalized and qualified bundle to meet the user's real-time textual query. The main contributions of this work are as follows: * We propose a novel and reasonable scenario named QBG, in which a user can proactively query a bundle with natural language descriptions and the system generates a bundle that exactly matches the user's query and preferences. * We propose a novel framework named Text2Bundle, based on reinforcement learning, that generates personalized and qualified bundles by leveraging the user's short-term and long-term interests. * Extensive experiments on three bundle intention datasets are conducted to verify the effectiveness of our Text2Bundle framework.

§ RELATED WORKS

§.§ Bundle Generation

Bundle recommendation aims to recommend a bundle of items that are similar or complementary in content for a user to consume together.
Existing works can be broadly classified into two categories: 1) ranking pre-defined bundles from the platform for users; 2) creating personalized bundles for users. For bundle ranking, previous works often leverage additional information from user-item interactions and bundle-item affiliations. For instance, DAM <cit.> jointly models user-bundle and user-item interactions using multi-task neural networks to alleviate the scarcity of user-bundle interaction data. BGCN <cit.> integrates the relations among users, bundles, and items into a heterogeneous graph and applies graph neural networks <cit.> to learn the complex relationships. Moreover, CrossCBR <cit.> models the cooperative associations between the two views using cross-view contrastive learning. The key challenge of generating personalized bundles is how to produce a set of items that not only meet the user's personalized demands but also have internal coherence, which requires accurate modeling of user interests as well as of the compatibility of items within the bundle. BGN <cit.> models bundle generation as a structured prediction problem and uses determinantal point processes (DPPs <cit.>) to generate high-quality and diversified bundles. However, this approach models the bundle as a sequence, which may fail to capture the relationships between distant items. BGGN <cit.> represents the bundle as a graph and employs graph neural networks to generate the graph, which learns the structural information and high-order item-item relationships. BYOB <cit.> formulates the problem as a combinatorial optimization problem over a set of candidate items and applies a policy-based deep reinforcement learning algorithm to solve it. However, in real-world recommendation scenarios, such as e-commerce platforms, users may express their own intention through a query and expect to receive personalized bundles that satisfy both their intention and their historical preferences. Previous works do not explicitly model this textual user intention, which may lead to inadequate modeling and suboptimal results.

§.§ RL in Recommendation

Reinforcement learning (RL) is a machine learning method that learns optimal policies by interacting with the environment and maximizing reward signals, and it can be applied to various domains such as games, robot control, and recommendation systems. In this method, agents receive observations from the environment, make decisions based on the current policy, and update the policy according to the reward signals. Compared to other recommendation methods, RL-based recommendation systems can handle the dynamics of sequential user-system interactions by adjusting actions according to successive feedback received from the environment. Additionally, RL takes the long-term user engagement with the system into account, allowing for a better understanding of users' preferences. RLUR <cit.> utilizes reinforcement learning to enhance user retention; it models the problem as an infinite-horizon request-based Markov Decision Process, with the goal of minimizing the accumulated time interval of multiple sessions, thereby improving the app open frequency and user retention. HRL-Rec <cit.> aims to model user preferences on both the item and channel levels, in order to jointly recommend heterogeneous items from multiple channels and satisfy users' personalized and diversified information needs.
Bundle generation can be formulated as a combinatorial optimization problem, but the number of possible item combinations increases exponentially with the number of items, and traditional algorithms cannot solve this problem in polynomial time. Moreover, capturing the relationships between items within a bundle is crucial for effective bundle generation. To address these challenges, BYOB <cit.> obtains optimal item combinations through a policy-based deep reinforcement learning algorithm and designs several item-level reward signals to address the data sparsity problem. However, this approach has some limitations. Firstly, it assumes that the size of the generated bundle is fixed, which does not suit real scenarios. Secondly, it does not adequately model the relationships among users, bundles, and items: the policy network only uses mean-pooling and fully connected layers to model the relationships between the user and the items selected for a bundle, which neglects the higher-order relationships among them. Additionally, a previous study <cit.> has shown that, in cases where actions are interdependent, optimal action decisions should take the other available actions into account. BYOB only considers compatibility modeling between actions and the existing items within the bundle, neglecting the modeling of dependencies among the actions themselves.

§.§ Large Language Model in Recommendation

Recently, LLMs such as ChatGPT <cit.> and LLaMA <cit.> have emerged and shown their strength in various tasks involving natural language, such as question answering, text summarization, translation, and more, due to training on massive amounts of text data. Capabilities such as chain-of-thought reasoning <cit.>, instruction following <cit.>, and in-context learning make LLMs much more powerful than traditional natural language models, which can improve the explainability and efficacy of recommender systems. P5 <cit.> proposes a unified and shared conditional language generation framework that integrates several recommendation tasks and shows zero-shot generalization ability for novel personalized prompts and new items in unseen domains. ChatRec <cit.> utilizes the LLM's in-context learning ability to establish connections between users and items, enabling interactive and explainability-enhanced multi-round recommendations. While these methods mainly focus on integrating LLMs into item recommendation or conversational recommendation, LLMs have not yet been used for bundle recommendation. Moreover, we leverage LLMs for intention decomposition as an enhancement to the bundle generation process; unlike the aforementioned methods, this is not an end-to-end usage of LLMs.
§ QUERY-BASED BUNDLE GENERATION

§.§ Definition

In this work, we define the user, item, and bundle as u ∈ 𝒰, v ∈ 𝒱, and b ∈ ℬ. The QBG scenario can be formulated as follows: for a user u with interaction history 𝒱_u and a raw textual query q, generate a bundle b^*_u,q = {v_1, v_2, …, v_K} which meets the user's intention in q and preferences in 𝒱_u, where K denotes the variable length of the bundle for each query session. Note that we omit the subscript of b^*_u,q in the following discussion for simplicity.

§.§ Framework

Inspired by BYOB <cit.>, we formulate query-based bundle generation as a combinatorial optimization problem over the items in the candidate pool. We then transform the problem into a Markov Decision Process (MDP), in which an item is added to the bundle iteratively, so the process can mainly be regarded as a sequence of selection decisions over candidate items. The MDP can be formulated as the tuple ℳ = <𝒮, 𝒜, 𝒫, r, γ>: * State space 𝒮. For each step t, we define the state s_t ∈ 𝒮 in the context of user u as {u, 𝒱_u, p^t, b^t}, where b^t is the current bundle of selected items and p^t is the candidate pool with the selected items excluded. For the initial state s_0, p^0 includes all the candidate items and an additional end action a_end, and b^0 is an empty set. * Action space 𝒜. An action a ∈ p^t denotes either the matched item selected from the candidate pool p^t based on the current bundle b^t and the user's short-term and long-term interests, or the end action a_end that stops the generation process. * Transition 𝒫. The transition between states can be formulated as s_t+1 = (u, 𝒱_u, p^t+1 = p^t ∖ {a}, b^t+1 = b^t ∪ {a}), where the selected item a is added to the current bundle b^t and removed from the candidate pool p^t. When the bundle size reaches the maximum bundle size L, defined as a hyperparameter, or the end action a_end is selected, the bundle generation process is stopped. * Reward r. The reward function guides the item selection so as to make the generated bundle qualified; it will be elaborated on in the following sections. * Discount factor γ.
The discount factor balances the trade-off between immediate and future rewards. The goal of the framework is to learn a bundle generation policy π(a|s;θ) that selects the proper action a based on the current state s by maximizing the expected reward over evaluation episodes, where θ denotes the parameters of the policy network.

§ METHODOLOGY

The main objective of this work is to generate a bundle that satisfies the user and meets the qualification criteria. To achieve this, the user's short-term and long-term interests are crucial. Furthermore, the reinforcement learning policy and reward design are indispensable for a qualified bundle that contains harmonious items. In this section, we present the details of the modeling of the user's short-term and long-term interests and of the bundle state, and the reinforcement learning policy.

§.§ Intention Decomposition

As stated earlier, we aim to infer fine-grained intention instances from the textual query q, which would be very difficult with an end-to-end training process. Thus, we harness the remarkable intention-inference and knowledge-extension abilities of generative LLMs <cit.> to facilitate the extraction of the user's fine-grained interests from the query q, which would be challenging for a conventional model. Prompt design. For a specific task, a generative LLM is able to produce the corresponding answers when given a proper prompt. Therefore, we design an instruction prompt "I will list a description of a bundle which is an item set, please help to extract 4-5 possible entities implied inside. Please insert a "|" between each entity. Here is an example: Query:I want to expand my storage space; Answer:Hard drive|SSD|USB key|SD card", and with an input query such as the one discussed in Fig. <ref>, the detailed intentions are output separated by "|" for easier processing. Thus, we can decompose the user's query q into the textual intention instances 𝒬 = {q_1, q_2, ..., q}. Note that we append the original query string q to the collection so that the original semantics of the query are preserved in case the intention decomposition is sub-optimal.

§.§ State Modeling of User Preferences and Bundle Content

To select a proper candidate item for the current bundle, the user's long-term and short-term interests and the content of the bundle should be taken into account. To comprehensively model the relation of each candidate item to the user and to the items in the current bundle, both the natural textual information of the user's input and items and the rich user-item interaction data should be adequately utilized. To this end, we uniformly apply a Transformer <cit.> to encode the user interests and the current bundle at both the text level and the interaction level.

§.§.§ ID Embedding Encoder

The interactions between users and the items in bundles are inherently sparse, which would lead to insufficient training if the ID embeddings were learned from scratch. We introduce a pre-trained LightGCN <cit.> to address this issue, as detailed in Sec. <ref>, and thereby obtain effective ID embeddings of users and items. For a specific user u and item v_i, they are represented as 𝐞_ID^u, 𝐞_ID^v_i ∈ 𝐑^d, where d is the dimension of the user and item embeddings.
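As a concrete illustration of the intention-decomposition step described above, a minimal Python sketch of the prompt call and of the parsing of the "|"-separated reply into 𝒬 could look as follows; the llm_generate callable is a hypothetical placeholder for whatever LLM client is used, not part of the released implementation.

INSTRUCTION = ('I will list a description of a bundle which is an item set, '
               'please help to extract 4-5 possible entities implied inside. '
               'Please insert a "|" between each entity.')

def decompose_intention(query: str, llm_generate) -> list:
    # llm_generate is a hypothetical callable mapping a prompt string to the LLM's reply.
    reply = llm_generate(INSTRUCTION + ' Query: ' + query)
    # Split the "|"-separated reply into fine-grained intention instances.
    instances = [piece.strip() for piece in reply.split('|') if piece.strip()]
    # Append the original query so its semantics survive a sub-optimal decomposition.
    instances.append(query)
    return instances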
§.§.§ Text Embedding Encoder

We employ RoBERTa <cit.>, a pre-trained text embedding model, to encode the titles of the items as embeddings: 𝐞_text^v_i = RoBERTa(title_v_i). For the decomposed textual user intention instances and the query itself, we use the same encoding method to obtain their text embeddings: 𝐞_text^q_j = RoBERTa(q_j), where q_j ∈ 𝒬.

§.§.§ Unified State Encoder

To obtain the unified state, the textual semantic embeddings need to be fused with the ID embeddings, so a shared multi-layer perceptron (MLP) is employed to transform the text embeddings into the space of the ID embeddings. In detail, we apply the MLP to the text embeddings 𝐞_text ∈ 𝐑^d_t of a historical item v_i ∈ 𝒱_u, an intention instance q_j, or a selected item v_k ∈ b^t to obtain the transformed embeddings 𝐞_text' ∈ 𝐑^d, where d_t and d are the embedding sizes. Then the short-term interest from the user's current intention instances, the long-term interest from the historical interactions, and the items in the current bundle, in both the ID and text modalities, are fused by a Transformer into a representation of the current state of user u and bundle b^t. We concatenate all these embeddings {𝐞_text'^q_j}, {𝐞_text'^v_i}, {𝐞_text'^v_k}, 𝐞_ID^u, {𝐞_ID^v_i}, {𝐞_ID^v_k} as the input set, and through the Transformer we obtain the corresponding refined representations. Note that, besides the original position embedding of the Transformer, we also add a type embedding 𝐞_type ∈ 𝐑^d to each input embedding to indicate the six different input embedding types above. Through this Transformer encoder, we model the cross-modal information and the relations among items in the current bundle, fuse the long-term and short-term interests, and mine possible missing items of the current bundle in the context of the user's query. Moreover, we apply average pooling over all output text embeddings to obtain the text state embedding 𝐞_text'^state ∈ 𝐑^d, and likewise over the output ID embeddings to obtain the ID state embedding 𝐞_ID^state ∈ 𝐑^d. By concatenating these two embeddings along the embedding dimension, we obtain the final state embedding 𝐞^state ∈ 𝐑^2d of the current state s_t: 𝐞^state = Concat(𝐞_text'^state, 𝐞_ID^state)

§.§ Bundle Generation Policy

Having obtained the user interest, we employ a deep Q-learning network (DQN) <cit.> to perform policy learning in the QBG scenario. Due to the overestimation bias of the original DQN, we employ double DQN <cit.>, which maintains a target network Q' periodically copied from the online network, to train the model following <cit.>. The policy network receives the embeddings of the current state and of the candidate actions as input and produces the value 𝐐(𝐬_t,𝐚_t) for each candidate action: 𝐐(𝐬_t,𝐚_t) = 𝐞^state (𝐄^act_a_t)^T, where 𝐄^act_a_t ∈ 𝐑^2d is the embedding of candidate action a_t in the action embedding matrix 𝐄^act (see Eq. <ref>), generated by the following two steps: Action Selection Strategy and Action Relation Modeling.
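The following PyTorch-style sketch illustrates how the unified state embedding and the Q-value computation above fit together; the module granularity, projection sizes, and the omission of the positional embeddings are simplifying assumptions for illustration, not the exact released architecture.

import torch
import torch.nn as nn

class UnifiedStateEncoder(nn.Module):
    # Fuses ID embeddings and projected text embeddings of the user, history,
    # intention instances, and current bundle with a Transformer encoder,
    # then mean-pools each modality into a 2d-dimensional state vector.
    def __init__(self, d=320, d_text=384, n_heads=2):
        super().__init__()
        self.text_proj = nn.Sequential(nn.Linear(d_text, 64), nn.ReLU(), nn.Linear(64, d))
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.type_emb = nn.Embedding(6, d)  # six input token types

    def forward(self, id_tokens, text_tokens, id_types, text_types):
        # id_tokens: (B, n_id, d); text_tokens: (B, n_text, d_text); *_types: integer type indices
        tokens = torch.cat([id_tokens + self.type_emb(id_types),
                            self.text_proj(text_tokens) + self.type_emb(text_types)], dim=1)
        out = self.encoder(tokens)
        n_id = id_tokens.size(1)
        e_id_state = out[:, :n_id].mean(dim=1)    # pooled ID-side state
        e_text_state = out[:, n_id:].mean(dim=1)  # pooled text-side state
        return torch.cat([e_text_state, e_id_state], dim=-1)  # e_state, shape (B, 2d)

def q_values(e_state, action_embs):
    # e_state: (B, 2d); action_embs: (B, n_actions, 2d); Q(s, a) = e_state . e_action
    return torch.einsum('bd,bad->ba', e_state, action_embs)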
§.§.§ Action Selection Strategy

The performance of policy learning can be degraded by a large action space <cit.>, i.e., the massive set of unfiltered candidate items in the QBG scenario, so this space must be reduced. To this end, we propose a simple but effective recall method. Specifically, we first calculate the similarity <cit.> between the text embeddings of the user's query and of all items, and filter out items below a similarity threshold T_sim to obtain a recall candidate set 𝒱_recall. Furthermore, we incorporate an FM <cit.> recommendation model with the user and item features to select the top R items from 𝒱_recall as the candidate set 𝒱_cand = p^0 for the current query.

§.§.§ Action Relation Modeling

In the QBG scenario, the candidate items in the action set can be related to one another. For instance, a one-piece swimsuit ranked first among the candidates would be good for a beach outfit, while the beach shirt and pants ranked second and third are complementary to each other and would also be favored by the user. If the one-piece swimsuit is selected, the beach shirt and pants should no longer be in the bundle, and vice versa. Thus, we need to take the compositional compatibility of the candidate items into account. To achieve this, we employ a Transformer to obtain the candidate action embeddings by modeling the relations among them: 𝐄^v_cand = Concat(𝐄^v_cand_ID, MLP(𝐄^v_cand_text)), 𝐄^act = TRM_act(𝐄^v_cand) ∪ {𝐞^a_end}, where 𝐄_ID^v_cand ∈ 𝐑^(|p^t|-1) × d and 𝐄^v_cand_text ∈ 𝐑^(|p^t|-1) × d_t are the ID and text embeddings of the candidate items, and 𝐞^a_end ∈ 𝐑^2d is the learnable embedding of the end action a_end.

§.§.§ Reward Settings

We expect the generated bundle b^* to be similar to the target bundle b̂, which can be regarded as the label in the dataset, and the items in b^* to be complementary. Hence, we use the precision value (see details in Sec. <ref>) of b^* with respect to b̂ as the main reward r_main. Personalization and complementarity. However, during the agent's exploration, only a hit can result in a positive precision reward. This leads to a very sparse reward and potentially slow learning progress, or even failure to learn. To mitigate this issue, we apply reward shaping. Specifically, we use the user-item match score, calculated by the dot product of the user and item embeddings from the pre-trained LightGCN (see Sec. <ref> for more details), as an auxiliary reward r_per that indicates the satisfaction of user u with the action item a_t. To encourage the complementarity of items in the bundle, we use the dot products of the embeddings of the selected items in b^t with the action item a_t as the complementarity reward r_comp = MEAN_v_k ∈ b^t(𝐞^v_k_text(𝐞^a_t_text)^T), where 𝐞^v_k_text, 𝐞^a_t_text ∈ 𝐑^d_t. Coverage of fine-grained interest. Since all the fine-grained interests extracted from the user's query should preferably be covered, we introduce an entropy-like reward to encourage this. In detail, for each item v_k in the generated bundle b^t, we calculate ℐ(v_k) = argmax_q_j ∈ 𝒬 𝐞^v_k(𝐞^q_j)^T to determine which intention instance the item belongs to, measured by the similarity scores between the item and each intention instance. Thus, we obtain the count of items 𝒞(q_j) = |{v_k | ℐ(v_k) = q_j}| covered for each intention instance q_j.
The reward can be represented as: r_cover(b^t, 𝒬) = -∑_q_j ∈ 𝒬 p(𝒞(q_j)|b^t) log p(𝒞(q_j)|b^t), with p(𝒞(q_j)|b^t) = 𝒞(q_j)/|b^t|. When the distribution over the q_j is more even, r_cover(b^t, 𝒬) is larger, which guides the policy towards covering all the fine-grained interests. Finally, we aggregate all these rewards into a single reward to guide the RL training process: r = ω · r_main + r_per + r_comp + r_cover, where ω is a hyperparameter that determines the weight of the main reward.

§.§.§ Recommendation Pre-Training

Considering the sample efficiency and instability problems of RL, it is difficult and time-consuming to train the bundle generation policy from scratch. Therefore, we first pretrain the unified state encoder (Sec. <ref>) with a recommendation task based on semi-synthetic data. Here, we aim to pave the way for the bundle generation task under the RL framework through supervised learning of user-item recommendation and bundle completion. In detail, we expect the current generation state representation 𝐞^state to be closer to the target items, i.e., the items in the target bundle b̂, than to other items. To achieve this, we employ the pairwise Bayesian Personalized Ranking (BPR) loss <cit.> as follows: ℒ_p = ∑_(u,q,b^t,v_+,v_-) ∈ 𝒟_pre -ln σ(cos(𝐞^state, 𝐞^v_+) - cos(𝐞^state, 𝐞^v_-)), where 𝒟_pre denotes the constructed training set of item pairs and σ is the sigmoid function. 𝒟_pre := {(u,q,b^t,v_+,v_-) | v_+ ∈ b̂ ∖ b^t, v_- ∈ 𝒱 ∖ (b̂ ∪ b^t)}, where b^t is the synthetic bundle for the t-th step, b̂ is the target bundle of query q, and v_+ is a ground-truth item that should be added and is not yet covered in b^t. Here, b^t is composed of t items randomly selected from the target bundle b̂, where t ranges from 0 to |b̂|-1, simulating all the completion steps of the subsequent RL learning. Based on the above pre-training, we further train the policy module and the unified state encoder jointly.

§ EXPERIMENTS

In this section, we conduct experiments on the QBG scenario to evaluate the performance of our method compared with other state-of-the-art (SOTA) models [Our code and data will be released for research purposes.].

§.§ Dataset

We conduct experiments on the Bundle Intent dataset <cit.>, which contains user intentions (i.e., textual queries), to evaluate our proposed method. A bundle intention refers to the user's intuitive feeling about a bundle. To obtain bundle intentions, the authors of the Bundle Intent dataset designed a crowd-sourcing task to label potential bundles and the corresponding intentions hidden in user sessions from three domains (Electronics, Clothing, and Food) extracted from the Amazon dataset <cit.>, thereby constructing a high-quality bundle dataset with bundle intentions. We utilize the entire Amazon datasets (the data source of the Bundle Intent dataset) to pre-train the LightGCN <cit.> model, and extract the user-item embeddings (Sec. <ref>) for the users and items contained in the Bundle Intent dataset for further training.

§.§ Baseline Models

* BPR <cit.> is a traditional recommendation method based on Bayesian analysis that ranks items according to the user's implicit feedback. * BGN <cit.> regards bundle generation as a structured prediction problem and utilizes determinantal point processes to generate a high-quality and diversified bundle list. * Bunt <cit.> is a multi-round conversational recommendation method for bundle generation.
* BYOB <cit.> is a competitive personalized bundle generation model within the RL paradigm, which generates the bundle under the guidance of multiple rewards. * LLM4B is an LLM-based bundle generation baseline we designed, following ChatRec <cit.>. It first recalls the top-30 items ordered by their semantic similarity <cit.> to the query, and then selects some of them as the bundle with a prompt like: "I want you to recommend a bundle of items based on user query and user's historical interactions, user query:{user query}.The historical records include the item title and description. You are encouraged to learn user preference from the interacted items:{historical interactions}. Here is a list of items that user is likely to pick {candidate item list}. Please select some complementary items that meet both user query and preference to form a bundle, separated by commas between the items", where {user query}, {historical interactions} and {candidate item list} indicate the input query, the titles of the historical items, and the candidate items, respectively. We note that most of the original methods above are incompatible with our QBG scenario. To achieve a fair comparison, we make several modifications to these baselines. For BGN^†, we take the first bundle in its generated bundle list as the result. For Bunt^†, we fuse the text embedding of the user's query into its Transformer input and use the bundle generated in the first round for comparison. For BYOB^†, we add the text embedding to the state representation and apply an auxiliary reward, namely the textual similarity between the user's query and the title of the action item.

§.§ Evaluation Metrics

For bundle generation, position-sensitive ranking metrics such as Normalized Discounted Cumulative Gain (NDCG) and Mean Reciprocal Rank (MRR) may not provide an accurate evaluation, since a bundle is an unordered collection and the order of items within a bundle should not influence the evaluation metrics. To evaluate the general performance of bundle generation, similar to <cit.>, we use Precision, Recall, and F1-score to measure the similarity between the generated bundle and the target bundle. Notably, our model employs the token a_end to generate bundles with adaptive lengths, while previous works can only generate bundles with a predefined fixed bundle size K. Therefore, evaluation metrics based on a fixed bundle size, such as Pre@K and Rec@K, may not be directly applicable to this work. For the baselines, we generate bundles with a fixed size of K=5, aligned with our maximum bundle size setting and based on the common bundle length observed in the dataset. By employing this standardized setting, the performance of our proposed model and of the baselines can be compared fairly. Precision: Precision measures the quality of the generated bundle and can be formulated as follows: Precision = 1/|𝒟| ∑_(u, q, b^*, b̂) ∈ 𝒟 |b^* ∩ b̂|/|b^*|, where 𝒟 represents the evaluation dataset, |·| represents the size of a set, and b^* and b̂ represent the generated bundle and the ground-truth bundle of user u and query q.
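As a minimal sketch, the bundle-level precision above reduces to a simple set overlap; Recall and F1, defined next, follow the same pattern with different denominators.

def bundle_precision(generated, target):
    # generated, target: collections of item ids; precision = |generated ∩ target| / |generated|
    generated, target = set(generated), set(target)
    return len(generated & target) / len(generated) if generated else 0.0

def mean_precision(eval_pairs):
    # eval_pairs: iterable of (generated_bundle, target_bundle) over the evaluation set D
    eval_pairs = list(eval_pairs)
    return sum(bundle_precision(g, t) for g, t in eval_pairs) / len(eval_pairs)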
Recall: Recall measures the ability of a model to predict all target items of the ground-truth bundle. Mathematically, Recall can be defined as follows: Recall = 1/|𝒟| ∑_(u, q, b^*, b̂) ∈ 𝒟 |b^* ∩ b̂|/|b̂|. F1-score: The F1-score combines Precision and Recall, providing a balanced assessment of the bundle generation performance, and is defined as F1 = 2·Precision·Recall/(Precision+Recall). All these metrics range from 0 to 1, with higher values indicating better performance.

§.§ Implementation Details

We implement the proposed method in PyTorch. We randomly divide the samples into training, validation, and test parts with a ratio of 8:1:1 for each dataset. The ID embedding dimension d is set to 320, and the text embedding dimension d_t is set to 384. We employ the Adam optimizer to train the bundle recommendation module. The weight of the main reward ω is set to 3.0, and the maximum bundle size L is set to 5. During the action selection process (Sec. <ref>), we set the similarity threshold T_sim used to filter items to 0.3 and the candidate set size R = |p^0| to 50. For the Transformer encoders in the Unified State Encoder and in Action Relation Modeling, we employ a single encoder layer with two attention heads. To transform textual embeddings into the space of ID embeddings, we employ a two-layer MLP with dimensions [64, 64]. We set the learning rate to 5e-5 and run experiments for a maximum of 1000 epochs on the three datasets to obtain the final results. We employ ChatGLM <cit.>, an open-source dialogue language model based on the General Language Model (GLM) framework, to perform intention decomposition. For the baseline LLM4B model, we employ GPT-3.5 <cit.> to ensure the quality of its bundle generation. We employ the tianshou <cit.> framework to implement the reinforcement learning algorithm. The experience replay memory size is 20,000, the sample batch size is 256, and the discount factor γ is set to 0.99. We re-run all experiments three times with different random seeds and report the average performance.

§.§ Overall Comparison

To demonstrate the comprehensive performance of the proposed model, we have adapted state-of-the-art (SOTA) bundle generation methods to the Query-based Bundle Generation (QBG) scenario. By comparing our model with these established methods, we aim to provide a thorough evaluation of its effectiveness in generating high-quality bundles. As shown in <ref>, we obtain the following observations. (1) In the QBG scenario, our proposed model Text2Bundle achieves significant performance improvements over all baseline models across the three datasets, particularly in terms of the precision metric.
The enhanced performance of our model can be attributed to the effective fusion of long-term user preferences and short-term query information. This integration enables our model to consider both the historical behavior and the immediate textual query, resulting in a more accurate understanding and representation of user interests. In particular, compared to BYOB, our model incorporates unified state modeling and conducts fine-grained interest decomposition of user queries, which contributes to a deeper understanding and characterization of user profiles. (2) In most cases, text-based models (Bunt^†, BYOB^†, LLM4B, and Text2Bundle) perform significantly better than models that rely solely on interaction information (BPR and BGN^†), indicating the effectiveness of incorporating textual information in our scenario. The improved performance of text-based models suggests that leveraging textual data, in addition to interaction information, contributes to a better understanding and modeling of user preferences and interests, leading to more accurate and personalized recommendations in our scenario. (3) Among Bunt^†, BYOB^†, and LLM4B, the performance order can be summarized as BYOB^† > Bunt^† > LLM4B. The performance disparity between BYOB^† and Bunt^† can primarily be attributed to Bunt's disregard for the complementarity among items, resulting in sub-optimal bundle generation. Although the LLM4B method ignores user-interaction data, its performance closely approximates that of Bunt^†, which suggests the strong potential of LLM-based models for bundle generation.

§.§ Ablation Study

In this section, we conduct ablation studies on the proposed model to investigate the effectiveness of its key designs. Specifically, we develop the following variants: (1) "-Generative LLM": a variant of Text2Bundle that eliminates the generative LLM for intention decomposition; instead, the original textual query is used as a substitute in the Unified State Encoder. (2) "-Unified State Encoder": the Unified State Encoder is replaced with a multi-layer perceptron (MLP). (3) "-Action Relation Modeling": a variant of Text2Bundle that removes the Action Relation Modeling module. (4) "Only Text" and "Only Id": the Unified State Encoder and Action Relation Modeling modules take only one modality (text embeddings or ID embeddings) as input, while the inputs of the other modality are masked. As shown in Table <ref>, the model performance decreases significantly when the intention decomposition provided by the generative LLM is removed and the user's original query alone is used for intention inference. This result highlights the crucial role played by intention decomposition in the model's performance. The ability to decompose the textual query into multiple instances that capture the user's fine-grained intentions proves pivotal for effectively understanding and representing user interests. Meanwhile, the absence of the Unified State Encoder or of Action Relation Modeling also leads to a sharp decline in generation performance. Among them, the precision value after removing the Unified State Encoder is almost halved, which is even worse than using only textual embeddings or only ID embeddings.
This shows that the alignment of representations from different semantic spaces is helpful and necessary for state modeling.

§.§ Human Evaluation

Besides the automatic evaluation, we also conduct a human evaluation to compare the performance of the baselines and our method on the Clothing and Electronics datasets. The experiment involves 12 post-graduate volunteers who each evaluate 40 samples, with each sample being evaluated by 3 different volunteers. In each sample, volunteers are presented with a user's historical items and current query, along with the bundles generated by BYOB^†, LLM4B, and Text2Bundle. Volunteers are instructed to answer the following questions: 1) "Is the bundle complementary?", 2) "Does the bundle align with the given query?", 3) "Does the bundle match the user's profile?", and to rate the three results in each sample on a scale of 1-5, with 1 being "worst" and 5 being "best", for these three evaluation views respectively. The results are shown in Table <ref>, where the metrics Comp, Align-Q, and Align-U indicate the average evaluation scores for the three questions above, respectively. On all three evaluation aspects, our Text2Bundle outperforms the other two competitive baselines, which is generally consistent with the results of the automatic evaluation. However, unlike in the automatic evaluation, annotators are inclined to give LLM4B results higher scores than BYOB^† when evaluating bundle complementarity and query-level alignment. This can be explained by the fact that the automatic evaluation compares the generated bundle with the target bundle strictly: items that are qualified substitutes but do not appear in the target bundle are still counted as poor choices, whereas such items may be deemed reasonable by the annotators and account for a greater proportion of LLM4B's results than of BYOB^†'s. Meanwhile, this may also be due to the inconsistency between the annotators and the real users, which biases the expected bundle.
The intention decomposition in our model extracts the user's fine-grained interests as intention instances, which serve as anchors that emphasize the main content of the bundle generation. Meanwhile, the bundle generated by LLM4B does not contain any tops, indicating insufficient complementarity. In contrast, our intention decomposition module alleviates this issue through explicit intention instances, which enable better coarse-to-fine planning and instance-level measurement of complementarity. Furthermore, during the RL process, the bundle state modeling, the action relation modeling, and the rewards r_comp and r_cover also enhance the complementarity among the items within the bundle.

§ CONCLUSION

In this paper, we first present a novel recommendation scenario, query-based bundle generation, in which the system generates a personalized bundle for the user's query. For this scenario, we propose a new method named Text2Bundle. Specifically, it generates the bundle item by item, based on the fine-grained user intention instances and the long-term user preferences. Extensive experiments on three datasets verify the effectiveness and superiority of Text2Bundle in the proposed scenario. The new scenario and the proposed method open a new door to more realistic and general bundle generation, and potentially to other areas such as conversational recommendation.

§ ADDITIONAL RESULTS

As shown in <ref>, we present 7 additional snapshots of bundles generated by Text2Bundle and LLM4B to demonstrate the performance of our model.

§ FURTHER DISCUSSION

In this section, we briefly compare our proposed Query-based Bundle Generation scenario to other bundle-recommendation scenarios to highlight the novelty of our work. Traditional bundle generation methods generate bundles mainly based on the user's historical interactions; here, the historical interactions and the items in the bundle are in the same modality. One scenario similar to our QBG is conversational bundle generation, which Bunt is designed for: it initializes the bundle based on the user's query and updates it based on the user's multi-round feedback. However, Bunt is built on a simple user simulator in which the user's query and feedback are simple attributes rather than natural language, so its instruction-following ability is limited to the attribute level and cannot achieve more fine-grained and scalable natural language understanding. Besides, it focuses more on multi-turn conversations, and the complementarity of the initial bundle is not fully considered. Unlike prior bundle generation methods, our Text2Bundle satisfies the following two desirable properties: textual-level instruction following, and comprehensive modeling of personalization and complementarity. QBG itself can also be easily plugged into many search and recommendation scenarios, e.g., as a supplement to the search scenario or as the initialization of conversational bundle recommendation.
§ ETHICAL CONSIDERATIONS

It is our belief that our proposed Query-based Bundle Generation scenario and the Text2Bundle method can mitigate some challenging ethical problems that have been noted in many recommender systems. Text2Bundle has the following desirable properties: * The ability to provide personalized and qualified bundles following the user's query, speeding up and simplifying the process of information retrieval. * Users have the opportunity to control their recommendations through language, whether in a nuanced way or in a vague, abstract way. * The intention instances generated in the intermediate process provide stronger human interpretability. On the other hand, our proposed system relies on large language models and therefore inherits some well-known problems centered around societal biases, hallucinations, and expensive use of resources. We only employ the LLM for the intention understanding and planning process rather than for the whole pipeline, which may alleviate these problems. Significant further progress needs to be made in areas like debiasing, grounding in factuality, and efficient serving before we can safely deploy this type of system in a production setting.
http://arxiv.org/abs/2310.18004v1
{ "authors": [ "Shixuan Zhu", "Chuan Cui", "JunTong Hu", "Qi Shen", "Yu Ji", "Zhihua Wei" ], "categories": [ "cs.IR" ], "primary_category": "cs.IR", "published": "20231027092438", "title": "Text2Bundle: Towards Personalized Query-based Bundle Generation" }
Minibatch Markov chain Monte Carlo Algorithms for Fitting Gaussian Processes Matthew J. Heaton and Jacob A. Johnson January 14, 2024 ============================================================================ Gaussian processes (GPs) are a highly flexible, nonparametric statistical model that are commonly used to fit nonlinear relationships or account for correlation between observations.However, the computational load of fitting a Gaussian process is 𝒪(n^3) making them infeasible for use on large datasets.To make GPs more feasible for large datasets, this research focuses on the use of minibatching to estimate GP parameters.Specifically, we outline both approximate and exact minibatch Markov chain Monte Carlo algorithms that substantially reduce the computation of fitting a GP by only considering small subsets of the data at a time.We demonstrate and compare this methodology using various simulations and real datasets. § INTRODUCTION§.§ Problem BackgroundLet Y(s) be a response variable measured at location s∈𝒟⊂ℝ^d. Y(s) is said to follow a Gaussian process (GP) if for any finite collection of locations s_1,…,s_n thenY ∼𝒩(μ, Σ)where Y = (Y(s_1),…,Y(s_n))' and 𝒩(μ, Σ) is the multivariate normal distribution with mean vector μ = (μ(s_1),…,μ(s_n))' and covariance matrix Σ = {σ_ij}_i,j=1^n.In Gaussian processes, the mean vector is typically taken to be μ(s) = x'(s)β where x(s) = (x_0(s),…,x_P(s))' is a vector of covariates and β = (β_0,…,β_P)' is a vector of linear coefficients and, most commonly, x_0(s) = 1 for all s∈𝒟 so that β_0 corresponds to an intercept term.In contrast, the covariance is governed by a covariance function K(·) (also commonly referred to as a kernel) such thatσ_ij = K(s_i, s_j |ϕ)where ϕ = (ϕ_1,…,ϕ_Q)' is a vector of parameters underlying the covariance (typically consisting of range and smoothness parameters).In the statistics literature, the most common family of covariance functions is the Matérn family which includes both the exponential and Gaussian covariance as special cases (seeandfor details on covariance functions).The power and flexibility of the GP is well documented across numerous papers and books <cit.>.However, the Gaussian process has been limited in more recent years due to the computational complexity associated with model fitting.Specifically, if X is the n × (P+1) matrix of linear covariates, estimates for the parameters Θ = (β',ϕ')' can be obtained by either maximizing the likelihood,ℒ(Θ)∝ |Σ|^-1/2exp{-1/2(Y-Xβ)'Σ^-1/2(Y-Xβ)}or using this likelihood in conjunction with a prior for Θ to obtain the corresponding posterior distribution in a Bayesian framework.The likelihood in (<ref>) immediately shows an issue with using the GP.Specifically, the necessity of storing and calculating the inverse and determinant of a n× n matrix is prohibitively expensive.Given the computational complexities mentioned above, much research regarding GPs has focused on how to apply them to large datasets.Early solutions leaned on the Karhunen-Loève theorem and proposed low rank approximations <cit.> using carefully constructed basis functions.Simultanesouly, other groups investigated the use of compactly-supported covariance functions <cit.> or partitioning <cit.> to introduce sparsity into the covariance matrix to ease the computational burden of matrix inversion.The limitations of these early solutions quickly became apparent <cit.> so that research in this area shifted to where it predominately resides today by using either sparse precision matrices often constructed 
using Vecchia (or nearest-neighbor) approximations <cit.> or large computing clusters <cit.> or both.Reviews on these methods and their comparative performances on datasets of various size and complexity are available in <cit.>, <cit.> and <cit.>.The majority of the above methods were developed primarily in the statistics community as approximations to a full Gaussian process.In contrast, the computer science community has taken a different approach to computational scalability based on minibatch sampling of the dataset to perform inference.In terms of maximum likelihood, the stochastic gradient descent algorithm and its variants <cit.> has dominated the computer science literature for optimization.In the Bayesian context, <cit.> proposed a sequential hypothesis test for Metropolis-Hastings (MH) proposals based on a fraction of the full dataset.Building on this seminal work, other minibatch MH algorithms were then developed by <cit.> and <cit.>.In more recent years, minibatched approaches in Markov chain Monte Carlo (MCMC) has grown to include tempered methods <cit.>, Gibbs sampling <cit.> and gradient-based proposals <cit.> with a review provided in <cit.>.However, recent work by <cit.> has warned that these minibatch approaches don't necessarily equate to improved sampling from the posterior distribution. §.§ Research Goals and ContributionsThe issue with the above minibatch solutions is that none of them are developed in the context of GPs.Notably, the likelihood in (<ref>) requires the full dataset to compute and is not represented as a product of independent likelihoods as is needed for each of the minibatch algorithms above.As such, the purpose of this research is to merge the statistical and computer science approaches by developing approaches to use minibatching in Markov chain Monte Carlo (MCMC) algorithms to fit GPs to large datasets within the Bayesian paradigm.While <cit.> have also considered subsampling, our approach here is inherently different.That is, <cit.> explicitly model a subsampling mechanism while we choose to use subsampling to either approximate a Metropolis-Hastings acceptance probability or the parameters of the complete conditional distribution (when known).Our key to using minibatches for fitting GPs is to first represent the full likelihood in (<ref>) as a series of conditional distributions using the Vecchia approximation <cit.>.Under this Vecchia approximation, (<ref>) can be written as a product of conditional probability density functions which is then ported into minibatch approaches such as those cited above.In the case of the GP presented above, however, we note that the complete conditional distribution of some of the parameters in (<ref>) are conjugate under certain prior specifications (for example, the linear regression coefficients β are conjugate under a Gaussian prior).Using these conjugate forms can greatly increase the efficiency of any MCMC algorithm. 
Hence, while we develop minibatch updating schemes for non-conjugate parameters, we also exploit the known form of conjugate complete conditional distributions by using appropriate minibatched approximations of the complete conditional distribution for conjugate model parameters.The remainder of this paper is outlined as follows.Section <ref> provides details of how to use minibatching in a MCMC algorithm.Section <ref> evaluates the upsides and downsides of using minibatching on simulated and real datasets.Finally, Section <ref> provides discussion and areas of future research.§ METHODSThis section describes the details of how we use minibatching within an MCMC algorithm for GPs.Specifically, Section <ref> sets up the GP model including the Vecchia approximation framework and a MCMC algorithm using all available data.Section Section <ref> discusses a minibatch updating scheme for conjugate parameters and <ref> discusses options for an accept-reject rule for non-conjugate parameters based on minibatches.Finally, Section <ref> describes some nuances and details of implementing the minibatch MCMC algorithms. §.§ PreliminariesAs in the previous section, let Y = (Y(s_1),…,Y(s_n))' be a vector of response variables measured at the finite set of locations s_1,…,s_n and X be a n × (P+1) matrix of covariates that are linearly related to Y.For purposes of this research, we let Y(s) follow a Gaussian process such that the ij^th entry of the covariance matrix Σ is given byK(s_i, s_j |σ^2, ω, ϕ)= σ^2ifi=jσ^2(1-ω)ρ(s_i, s_j |ϕ)ifi ≠ jwhere σ^2 is the total variance (also referred to as the sill in spatial statistics terminology), ωσ^2 for ω∈ [0,1] is the nugget effect, (1-ω)σ^2 is the partial sill and ρ(s_i, s_j) is a positive definite correlation function (e.g. Matérn, Exponential, Gaussian, etc.) 
parameterized by ϕ.Under this choice of covariance function, the covariance matrix takes the simple form Σ = σ^2R = σ^2(ωI+(1-ω)M) where M = {ρ(s_i,s_j|ϕ)}_i,j=1^n.Because the joint distribution of Y is multivariate Gaussian, the likelihood for Θ = (β, σ^2, ω, ϕ')' in (<ref>) can be written as a series of conditional distributions such that:ℒ(Θ)= f_1(Y(s_1) |Θ)∏_i=2^n f_i(Y(s_i) | Y(s_1),…,Y(s_i-1),Θ)where each f_i(·) is a univariate Gaussian probability density function (PDF) with mean μ_i and variance σ^2 v_i.Specifically, the mean terms are given by:μ_i= x'(s_1)β ifi = 1x'(s_i)β + R(i, 𝒩_i)R^-1(𝒩_i, 𝒩_i)(Y_𝒩_i-X_𝒩_iβ)ifi > 1where 𝒩_i = {1,…,i-1} is the set of points preceding observation i, Y_𝒩_i is the set of Y(s) corresponding to 𝒩_i, X_𝒩_i are the rows of X corresponding to 𝒩_i and R(𝒜,ℬ) is the corresponding #𝒜×#ℬ correlation matrix from the above correlation function where # denotes cardinality.Likewise, we definev_i=1ifi = 11- R(i, 𝒩_i)R^-1(𝒩_i, 𝒩_i)R(𝒩_i,i)ifi > 1 Importantly, factoring the full likelihood from (<ref>) using a series of conditional distributions as in (<ref>) does not circumvent the computational difficulties associated with GPs.Specifically, the forms for μ_i and v_i in (<ref>) and (<ref>) still require dealing with large matrices through R(𝒩_i, 𝒩_i) because #𝒩_i grows as i → n.Hence, we adopt the Vecchia process approximation framework <cit.> by redefining 𝒩_i be the set of the M nearest neighbors of s_i in terms of Euclidean distance.In this way, #𝒩_i is at most M so that R(𝒩_i, 𝒩_i) is also at most M × M and can be dealt with computationally.This Vecchia approximation relies on the assumption that all the information about Y(s_i) in the conditional distribution f_i(Y(s_i)) can be adequately summarized by the M nearest neighbors to Y(s_i) among Y(s_1),…,Y(s_i-1).We note that <cit.> discuss the impact of observation ordering on this assumption and recommend certain orderings of the observations to obtain better approximations.For purposes of this research, we assume that our observations have already been ordered according to these suggestions.The unknown parameters of our Gaussian process model are the linear coefficients β, the sill σ^2 and the correlation parameters which include the nugget term ω∈ [0,1] and any correlation function parameters ϕ.Our focus here is on a Bayesian estimation paradigm for these parameters but we note that maximum likelihood can also be used to obtain estimates.Generally, a MCMC algorithm for sampling from the posterior distribution of these parameters can be done via Gibbs sampling where at each iteration, as we show below, β and σ^2 can be directly drawn from their complete conditional distributions while indirect sampling (e.g. 
Metropolis or Metropolis-Hastings) is used to draw θ = (ω, ϕ')'.Details are as follows.In the Bayesian paradigm, we a priori assume β_p iid∼𝒩(m_p, s^2_p) for p=0,…,P and σ^2 ∼ℐ𝒢(a_σ, b_σ) where ℐ𝒢 denotes the inverse-gamma distribution with shape a_σ and rate b_σ.We specifically choose these priors because the associated complete conditional distribution of each β_p can be shown to be conjugate with respect to the Vecchia likelihood in (<ref>).Specifically, through some algebraic manipulation, the complete conditional distributions for β_p and σ^2 are given byβ_p | -∼𝒩( [ ∑_i=1^nq_1(s_i)/σ^2 + 1/s^2_p]^-1( ∑_i=1^nq_2(s_i)/σ^2 + m_p/s^2_p), [ ∑_i=1^nq_1(s_i)/σ^2 + 1/s^2_p]^-1)σ^2 | -∼ℐ𝒢( n/2 + a_σ, ∑_i=1^nq_3(s_i)/2 + b_σ)where “-” denotes all other parameters and the data and the quantities q_1(s_i), q_2(s_i) and q_3(s_i) are given byq_1(s_i)= 1/v_i[x_p(s_i) - R(i, 𝒩_i)R^-1(𝒩_i, 𝒩_i)X_𝒩_i,p]^2 q_2(s_i)= 1/v_i[x_p(s_i) - R(i, 𝒩_i)R^-1(𝒩_i, 𝒩_i)X_𝒩_i,p]r_p(s_i) q_3(s_i)= 1/v_i[Y(s_i) - μ_i)]^2where X_𝒩_i,p is the p^th column of X_𝒩_i, r_p(s_i) = Y(s_i) - x'_-p(s_i)β_-p - R(i, 𝒩_i)R^-1(𝒩_i, 𝒩_i)(Y_𝒩_i-X_𝒩_i,-pβ_-p)X_𝒩_i,-p is all columns of X_𝒩_i except the p^th column and β_-p are all β coefficients except β_p.We note that the full β vector is conjugate as a multivariate normal but for purposes of exposition we work with each β_p individually but the algorithms below can be implemented for the full β vector.Unlike β and σ^2, there are no conjugate priors for ω and ϕ.Further, because such priors may change depending on which correlation function is chosen, for purposes of this research, we group them together and generally assume θ = (ω, ϕ')' ∼π(·) for some parametric prior π(·).Due to lack of conjugacy, MCMC simulation of θ is done via accept-reject style algorithms.For purposes of the minibatch algorithms below, we write the acceptance rule for a proposal θ_prop given the current draw θ_cur asΔ(θ_prop, θ_cur)= ∑_i=1^n Λ_i - log(π(θ_prop)g(θ_cur|θ_prop)/π(θ_cur)g(θ_prop|θ_cur)) + Lwhere Λ_i = log[f_i(Y(s_i) |Y_𝒩_i, β, σ^2, θ_prop) / f_i(Y(s_i) |Y_𝒩_i, β, σ^2, θ_cur)] is the log-likelihood ratio for observation i, g(·) is the proposal distribution and L is a random variable (note in Metropolis-Hastings algorithms L = -log(U) where U ∼𝒰(0,1)).Accepting a proposed θ_prop occurs if and only if Δ(θ_prop, θ_cur) > 0. §.§ Minibatch Approximation of Quantities in Complete Conditional DistributionsThe complete conditional distributions of β and σ^2 in (<ref>) and (<ref>), respectively, are also computationally challenging due to the large summations of q_1(s_i), q_2(s_i) and q_3(s_i). For conjugate parameters, a minibatch approximation of the summations is given by∑_i=1^n q_j(s_i) = nq_j,n = n1/n∑_i=1^n q_j(s_i) ≈ nq_j, B = n1/B∑_i ∈ℬ q_j(s_i)for j ∈{1, 2, 3} where ℬ⊆{1,…,n} is a minibatch with #ℬ = B.Because q_j, B≠q_j,n, we only want to replace nq_j,n with nq_j,B if they are sufficiently close.Note that by the central limit theorem,nq_j,B d→𝒩(nq_j,n, n^2/B√(n - B/n-1)σ^2_q_j)where σ^2_q_j = 𝕍ar(q_j(s_i)) and, √((n-B)/(n-1)) is the finite population correction factor which ensures that nq_j, B = nq_j,n when B = n.Under this distribution, nq_j, B≈ nq_j,n when (n^2/B)√((n - B)/(n-1))σ^2_q_j is small which occurs as B → n suggesting that larger minibatch sizes should be preferred when the computation is reasonable. 
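As a minimal sketch (in Python, with the hypothetical helper q_fn standing in for the Vecchia-based computation of the per-observation summands q_j(s_i)), the minibatch replacement of the sums and the resulting approximate draw of β_p from its complete conditional might look as follows; the variance expression follows the form stated above.

```python
import numpy as np

def minibatch_sum_estimate(q_fn, n, B, rng):
    """Estimate sum_i q(s_i) from a random minibatch of size B.

    q_fn(idx) returns the per-observation summands q_j(s_i) for the indices
    in idx (these involve the Vecchia quantities R(i, N_i), treated here as
    a black box)."""
    idx = rng.choice(n, size=B, replace=False)
    q = q_fn(idx)
    # CLT variance of n*q_bar_B with the finite population correction, as in the text
    var = (n**2 / B) * np.sqrt((n - B) / (n - 1)) * q.var(ddof=1)
    return n * q.mean(), np.sqrt(var)

def draw_beta_p(q1_fn, q2_fn, sigma2, m_p, s2_p, n, B, rng):
    """Approximate conjugate draw of beta_p, replacing sum q_1 and sum q_2
    by their minibatch estimates."""
    S1, _ = minibatch_sum_estimate(q1_fn, n, B, rng)
    S2, _ = minibatch_sum_estimate(q2_fn, n, B, rng)
    prec = S1 / sigma2 + 1.0 / s2_p
    mean = (S2 / sigma2 + m_p / s2_p) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec))
```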
§.§ Minibatch Acceptance Test for Non-conjugate ParametersSimilar to the complete conditional distributions, when n is large, the sum in (<ref>) is slow computationally.Hence, we use the minibatch approximation∑_i=1^n Λ_i = nΛ_n= n1/n∑_i=1^n Λ_i ≈ nΛ_B = n1/B∑_i ∈ℬΛ_iwhere ℬ⊆{1,…,n} is a random minibatch of the observations and #ℬ = B.Because our minibatch approximation is an average, by the central limit theorem nΛ_B d→𝒩(nΛ̅_n, n^2/B√(n - B/n - 1)σ_Λ^2)where σ^2_Λ = 𝕍ar(Λ_i) and √((n-B)/(n-1)) is the finite population correction factor which ensures that Λ_B = Λ_n when B = n.One possible approach to using minibatches in accept-reject type algorithms is then to simply replace Λ̅_n in (<ref>) with Λ̅_B and set L = -log(U) in a minibatch approximation of the Metropolis-Hastings algorithm.This approach has the advantage that a fixed batch can be used.In this way, we can randomly determine minibatches prior to running the MCMC algorithm and save on computation time.However, because Λ̅_n ≠Λ̅_B, we detail an alternative minibatch approach that accounts for this discrepancy.To rewrite (<ref>) in terms of Λ_B, we can follow the approach of <cit.>.Specifically, as shown in <cit.>, if L follows a standard logistic distribution then the acceptance test Δ(θ_prop, θ_cur) > 0 in (<ref>) maintains detailed balance.As such, let L follow a standard logistic distribution and be represented as the sum L = L_1 + L_2 where L_1 ∼𝒩(0, σ^2_L_1) and L_2 ∼ h(·) where <cit.> refers toh(·) as a “correction” distribution of the Gaussian distribution to the standard logistic random variable. By convolution, ℓ(z) ≈∑_x ∈𝒳 f_Gaus(z-x | 0, σ^2_L_1) h(x |σ^2_L_1) where ℓ(z) is the standard logistic PDF and 𝒳 is some fine grid on the support of the standard logistic distribution.Under this convolution construction, for a given σ^2_L_1, h(x |σ^2_L_1) can be estimated via penalized least squares (e.g. LASSO) with positivity constraints.Figure <ref> displays the construction of a standard logistic via convolution in this manner.Specifically, the left panel of Figure <ref> displays the correction distribution while the right panel displays the PDF from the convolution of this correction distribution with a Gaussian distribution for a given σ^2_L_1.Notably, as can be seen in Figure <ref>, this representation of L = L_1 + L_2 is best when, approximately, σ^2_L_1≤ 3 but grows in accuracy as σ^2_L_1 decreases. Under L = L_1 + L_2 as above, if we set σ^2_L_1 = (n^2/B)√((n - B)/(n - 1))σ_Λ^2 then nΛ_B = nΛ_n + L_1 so that the acceptance rule in (<ref>) can be rewritten asΔ(θ_prop, θ_cur)= nΛ_n + log(π(θ_prop)g(θ_cur|θ_prop)/π(θ_cur)g(θ_prop|θ_cur)) + L_1 + L_2 = nΛ_B + log(π(θ_prop)g(θ_cur|θ_prop)/π(θ_cur)g(θ_prop|θ_cur)) + L_2.Notably, the above minibatch acceptance rule is only accurate when, approximately, (n^2/B)√((n - B)/(n - 1))σ_Λ^2 ≤ 3.Thus, when implementing this minibatch algorithm in practice, we choose a cutoff c ≤ 3 and then sample a sufficient minibatch size to ensure (n^2/B)√((n - B)/(n - 1))σ_Λ^2 ≤ c.To do so, we first obtain an estimate of σ^2_Λ from an initial batch of Λ_i then increase this initial batch size to ensure the condition (n^2/B)√((n - B)/(n - 1))σ_Λ^2 ≤ c is met so the minibatch acceptance rule in (<ref>) can be used. 
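The batch-size selection and the corrected acceptance decision could be implemented along the following lines; lam_fn, B0 and B_step are hypothetical helpers (lam_fn(idx) computes the log-likelihood ratios Λ_i for the given indices), and the non-negative least squares fit of the correction distribution is a simplified stand-in for the penalized least-squares fit with positivity constraints mentioned above.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import logistic, norm

def fit_correction_distribution(c, grid):
    """Weights w on `grid` such that logistic ~ N(0, c) + h, with h supported
    on the grid; plain NNLS is used here instead of a penalized fit."""
    z = np.linspace(-10.0, 10.0, 400)
    A = norm.pdf(z[:, None] - grid[None, :], scale=np.sqrt(c))
    w, _ = nnls(A, logistic.pdf(z))
    return w / w.sum()          # probabilities on the grid (up to grid spacing)

def barker_minibatch_accept(lam_fn, n, c, grid, h_probs,
                            log_prior_proposal_ratio, rng, B0=100, B_step=100):
    """Accept/reject a proposal using the minibatch Barker-type rule."""
    idx = rng.choice(n, size=B0, replace=False)
    lam = lam_fn(idx)
    # grow the batch until the CLT variance of n*Lambda_bar_B drops below c
    def clt_var(v):
        return (n**2 / v.size) * np.sqrt((n - v.size) / (n - 1)) * v.var(ddof=1)
    while clt_var(lam) > c and lam.size < n:
        pool = np.setdiff1d(np.arange(n), idx)
        idx = np.concatenate([idx, rng.choice(pool, size=min(B_step, pool.size),
                                              replace=False)])
        lam = lam_fn(idx)
    s2 = clt_var(lam)
    L1_star = rng.normal(0.0, np.sqrt(max(c - s2, 0.0)))  # top-up Gaussian noise
    L2 = rng.choice(grid, p=h_probs)                      # draw from h(. | c)
    # log_prior_proposal_ratio = log(pi(prop) g(cur|prop) / (pi(cur) g(prop|cur)))
    return n * lam.mean() + log_prior_proposal_ratio + L1_star + L2 > 0.0
```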
§.§ Implementation DetailsAlgorithms <ref> and <ref> outline two possible minibatch MCMC algorithms for fitting GPs to data.Algorithm <ref> uses the Barker accept-reject rule while Algorithm <ref> uses a minibatch approximation of the Metropolis-Hastings rule.As there are various subtleties associated with each of these algorithms, we discuss the details of implementing these algorithms and some of their differences here.First, given the results in (<ref>) and (<ref>), the minibatch size required will need to increase with the total sample size n.This is to be expected in that larger minibatches will be required to sufficiently approximate the true acceptance rule or quantities in the complete conditional distributions.Second, for the minibatch algorithm based on the Barker acceptance test in Algorithm <ref>, the required minibatch size (B) willdepend on the proposal θ_prop.This is because if θ_prop - θ_cur is large then σ^2_Λ will likely also be large due to large changes in the likelihood ratios Λ_i.In our observation, this fact means that the minibatch acceptance rule will often result in slower mixing of the Markov chain due to the need to take smaller steps at each iteration.Hence, our minibatch acceptance rule decreases the computation time per iteration but we have found that minibatch chains need to be run longer to achieve convergence.This is consistent with the findings of <cit.> in that there is no free lunch with minibatch MCMC algorithms.Third, again for Algorithm <ref>, we draw a Gaussian random variable L_1^⋆∼𝒩(0, c-(n^2/B_θ)√((n - B_θ)/(n - 1))σ_Λ^2) which is added to the acceptance rule (<ref>).This is done to avoid the computational expense of calculating the correction distribution h(·) at each iteration.That is, note that in Algorithm <ref> we estimate the correction distribution (see Figure <ref>) for a fixed c ≤ 3 outside of the for-loop. Each iteration of Algorithm <ref>, however, will choose a batch size (B_θ) so that (n^2/B_θ)√((n - B_θ)/(n - 1))σ_Λ^2 ≤ c.In the event that (n^2/B_θ)√((n - B_θ)/(n - 1))σ_Λ^2 < c, we also sample L_1^⋆ to ensure a match to the pre-calculated correction distribution L_2 ∼ h(x | c).Fourth, note that Algorithm <ref>, is setup to do E epochs over the M minibatches.This can be done in Algorithm <ref> because it uses a fixed batch size to estimate the Metropolis-Hastings rule rather than adapt the batch size to approximate the Barker acceptance test.This has a few advantages over Algorithm <ref>.The foremost advantage of this type of looping is that the data is split into minibatches once rather than a different random sample taken at each iteration which can be time consuming for very large datasets.Notably, the split into minibatches could occur after each epoch but this will add to computation time.A second advantage of this setup is that it ensures that each data point is used to update parameters.Under Algorithm <ref> note that some data points may never be used by random chance.Finally, the choice of batch size in both algorithms will influence the approximation.Notably, as we show below, the approximation to the full posterior will improve as the batch size increases.This makes the choice of batch size fundamentally different than the batch size used in, say, stochastic gradient descent which speeds up convergence by not as easily getting stuck in local modes.In our case of using minibatching for posterior sampling, we want to use as large of a batch size as can be handled computationally. 
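A schematic of the fixed-batch scheme (Algorithm 2 in the text) is sketched below: the data are split into minibatches once and each epoch sweeps over all of them, so that every observation is used. The functions draw_beta, draw_sigma2 and mh_update_theta are placeholders for the minibatch conjugate draws and the minibatch Metropolis-Hastings step described above; the loop structure is the point being illustrated.

```python
import numpy as np

def fixed_batch_mcmc(y, X, n_batches, n_epochs, draw_beta, draw_sigma2,
                     mh_update_theta, beta0, sigma20, theta0, rng):
    """Fixed-batch minibatch MCMC sweep (Algorithm 2 style)."""
    n = len(y)
    batches = np.array_split(rng.permutation(n), n_batches)  # split once
    beta, sigma2, theta = beta0, sigma20, theta0
    draws = []
    for _ in range(n_epochs):
        for idx in batches:
            beta = draw_beta(idx, y, X, sigma2, theta, rng)        # minibatch conjugate draw
            sigma2 = draw_sigma2(idx, y, X, beta, theta, rng)      # minibatch conjugate draw
            theta = mh_update_theta(idx, y, X, beta, sigma2, theta, rng)  # minibatch MH step
            draws.append((beta, sigma2, theta))
    return draws
```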
§ EXAMPLES§.§ Small Data Simulation Study To explore the various algorithms explained above, we carry out a simulation study using 50 simulated data sets with n=8000 and the locations s_i simulated uniformly on the unit square.Notably this is not, by any means, considered “big data” but we seek to evaluate the effectiveness of our minibatch algorithms relative to the full Gaussian process model.Hence, for this simulation study, we let n=8000 because the full model in (<ref>) can be used. Of the n data points, 1600 were randomly set aside as a test set, leaving 6400 to be used for model fitting. We simulate data from (<ref>) with covariance given by (<ref>) using a stationary exponential correlation function with range ϕ and nugget ω.We set β = (0,1,-5)', σ^2 = 1, ω = 0.5, and ϕ = 0.236 which corresponds to an effective spatial range of √(2) (half the maximum possible distance on the unit square).To allow for accurate comparison between the algorithms, we used the same prior values across all algorithms. Specifically, we assume β∼ N(0, 1000I), σ^2 ∼ℐ𝒢( 0.01, 0.01) where ℐ𝒢(a,b) is the inverse gamma distribution with shape a and rate b.The parameters for the correlation function, ω and ϕ, are notoriously difficult to estimate so, generally, bounded or discrete priors are commonly used <cit.>.Because a discrete prior will result in a full Gibbs algorithm, we consider both types of priors.First, for continuous priors, we assume that ω∈ [0,1] and ϕ∈ [ϕ_min, ϕ_max].But, in order to better propose values for ω and ϕ, we transformed these to the real line via,ϕ^⋆ = log( (ϕ-ϕ_min)/(ϕ_max-ϕ_min)/1-((ϕ-ϕ_min)/(ϕ_max-ϕ_min)))ω^⋆ = log( ω/1-ω)and assume ω^⋆∼ N(0, 3), ϕ∼ N(0, 3).The mean of zero means that, a priori, we expect that these parameters will be centered at the midpoint between their max and min values and a variance of 3 results in high uncertainty.Finally, for a discrete prior, we choose 20 values for ω and ϕ between [0,1] and [ϕ_min, ϕ_max], respectively, and use a discrete uniform prior.To evaluate and compare various algorithms with the proposed minibatch algorithms, we ran each of the following for 12,800 iterations and discarded the first 6,400 iterations as burn in (evaluations of trace plots suggested this was sufficient for convergence):* (Full) A Metropolis-within-Gibbs algorithm on the full model in Equation (<ref>) using the complete conditional distributions and Metropolis acceptance probability discussed in Section <ref>;* (NN) The same algorithm as Full except we use the nearest neighbor (Vecchia) approximation in the complete conditionals and Metropolis acceptance probability;* (Barker) Algorithm 1;* (FB) Algorithm 2 where we use a specified fixed percentage of the full data as the minibatch size (i.e. a fixed batch size).For Algorithm 2, we split the data into 2, 4, 8, 16, 32, and 64 smaller minibatches equating to batch sizes of 50%, 25%, 12.5%, 6.25%, 3.125%, and 1.5625% of the available data. This variation in batch size will allow us to examine how the amount of data in each minibatch affects the quality of the posterior approximations. 
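To make the setup concrete, a data set with the structure of this study can be generated and posterior draws scored with the standard sample-based CRPS estimator used for evaluation below. This is a sketch under stated assumptions: the covariates are not specified in the text and are taken as standard normal here, and n is kept modest because the exact simulation is itself O(n^3).

```python
import numpy as np

def simulate_gp(n=2000, beta=(0.0, 1.0, -5.0), sigma2=1.0, omega=0.5, phi=0.236, seed=0):
    """One synthetic data set: exponential correlation, nugget omega, sill sigma2."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(0.0, 1.0, size=(n, 2))                      # locations on the unit square
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
    D = np.linalg.norm(s[:, None, :] - s[None, :, :], axis=-1)
    M = np.exp(-D / phi)                                        # exponential correlation
    Sigma = sigma2 * (omega * np.eye(n) + (1.0 - omega) * M)    # sigma^2 on the diagonal
    L = np.linalg.cholesky(Sigma)
    y = X @ np.asarray(beta) + L @ rng.normal(size=n)
    return s, X, y

def crps_sample(draws, truth):
    """Sample-based CRPS for one scalar parameter: E|X - y| - 0.5 E|X - X'|."""
    draws = np.asarray(draws)
    return np.mean(np.abs(draws - truth)) \
        - 0.5 * np.mean(np.abs(draws[:, None] - draws[None, :]))
```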
Figure <ref> displays the continuous rank probability score (CRPS; ) achieved by the various algorithms for the different parameters of the Gaussian process model.We note that the parameter σ^2 ω/ϕ is mentioned by <cit.> as the parameter that is able to be consistently estimated from a single realization of a Gaussian process so we include it here for comparison.From Figure <ref>, the Barker, NN and Full algorithms all achieve low CRPS values indicating posterior distributions that capture the true parameter value. By comparison, the FB algorithms show increasing ability to estimate parameters (as indicated by decreasing CRPS) as the minibatch size increases (note that 50% corresponds to a minibatch size of 50% of n or 0.5×6400 = 3200).Finally, this same pattern persists between the discrete and continuous priors for ϕ and ω except that the discrete priors tended to have slightly lower CRPS values.As a last comparison, we compare each algorithm in terms of the multivariate energy score (MES; ) to evaluate the overall accuracy of parameter estimates. The bottom right panel of Figure <ref> shows that MES values for each algorithm, averaged across all data sets, revealing a similar pattern to the CRPS values for individual parameters.To further elucidate the impact of the minibatch size on the posterior distribution, Figure <ref> displays density plots for the β parameters.Note that the posterior distribution for each algorithm is centered at the true value but the decrease in CRPS observed in Figure <ref> can be attributed to an increase in the variance of the posterior distribution.That is, algorithms based on minibatching have increased posterior variance relative to posterior distributions using the full data. The predictive accuracy of predictions generated under each of the above algorithms are shown in Figure <ref> in terms of root mean square error (RMSE) and CRPS.Importantly, Figure <ref> shows that RMSE and CRPS values are effectively equivalent for all algorithms.The result that the predictions are equivalent under minibatching relative to the full data is substantial and aligns with the results in <cit.>. This shows that minibatching offers computational advantages with no apparent lack of predictive ability.The motivation behind both minibatch algorithms is computational savings. To this end, we found that as the amount of data included in each minibatch decreases, so does the computation time. Using the full model with either continuous or discrete priors on the spatial parameters took about 4 times longer than the nearest neighbor approximation. However, using half of the data in each minibatch (FB2) takes about half of the time that the nearest neighbor algorithm takes. 
Using 16 minibatches (all of equal size with about 6% of the data) takes only 10% of the time of the nearest neighbor algorithm.The Barker algorithms chose minibatch sizes of about 25% of the data and, therefore, had similar computation to the FB4 algorithm.§.§ Large Data Simulation Studies In a larger scale study, we also carried out a simulation study using the same setup that was used in Section <ref> above but with n=120,000.We fit all the same algorithms at the same settings used in the previous simulation study but we omit the full model because it is not reasonable to use on this size of dataset.The full results with figures similar to that shown above are given in the supplementary material but we offer a summary of the findings here.First, increasing batch size resulted in increasing performance in terms of CRPS.Specifically, a 50% minibatch size was nearly indistinguishable from the nearest neighbor model with all of the data.While a 25% minibatch size had strong results, decreasing the minibatch size further often resulted in unacceptable performance.The Barker algorithm (Algorithm <ref>) would often result in a minibatch size that was too small for acceptable CRPS performance relative to the nearest neighbor algorithm.In terms of predictive accuracy, we again saw that the minibatch size had no effect on the predictive performance.Any of the minibatch algorithms were effectively equal in predictive accuracy to the nearest neighbor model.Finally, because the minibatch sizes were chosen as a percentage of the data, the computation times relative to the nearest neighbor model stayed about the same as in the smaller data simulation study in Section <ref>.For example, a 50% minibatch size takes about half the time as the nearest neighbor model with the full data but computation time decreased with minibatch size. §.§ Real Data Applications In this section we consider applying the minibatch algorithms to two different real datasets.First, we use the real satellite observations from <cit.> to compare the approach used here with those of other models.Second, we apply our minibatch methods to the forest canopy height (FCH) dataset available in the library <cit.>.Because our focus is on performance of the minibatch algorithms, we refer to the given citations for the scientific details for both of these datasets but Figure <ref> displays the datasets. For both of these datasets, we applied the same minibatch algorithms as were used for the simulation studies above.That is, we fit the model using Algorithm <ref> and Algorithm <ref> at various minibatch sizes, along with the nearest neighbor algorithm to serve as a benchmark. Further, we fit every model twice, once with a continuous prior for ϕ and ω and once with a discrete prior. We measure the computation time associated with each algorithm and along with multiple metrics to compare the effectiveness and accuracy of each approach. In the previous simulation studies, accuracy of the posterior distribution of model parameters was assessed via CRPS because the true parameter values were known. However, for real data applications, the true parameter values are unknown. 
Hence, here we compare posterior summaries under the various algorithms.Table <ref> displays the posterior mean, standard deviation and width of a 95% credible interval for the various algorithms for the parameter σ^2 ω/ϕ which again, according to <cit.>, is the identifiable parameter in a Gaussian process model.First, from Table <ref> note that the estimates of the posterior means are consistent across algorithms while the posterior standard deviations increase as the minibatch size shrinks.This result further confirms the results from the simulation studies along with those in <cit.> that minibatching results in a tempered posterior distribution with smaller minibatches corresponding to a higher temperature.We note that Algorithm <ref> (Barker) was selecting a minibatch size of, approximately, 8000 which corresponds to about 8% of the data as a minibatch and thus had a higher posterior variance. To measure predictive accuracy for the real data, we split the data into training and test sets.For both the satellite and forest data, we used the train-test split provided in the datasets by the original users.Table <ref> displays the same predictive diagnostics for the satellite data as was calculated in <cit.> while Figure <ref> displays predictive diagnostics for the forest data.For the satellite data, the predictive diagnostics under the minibatch algorithms, while not the best as has been applied to this data, are comparable.For the forest data, each of the fixed batch algorithms, regardless of minibatch size, achieved predictive performance comparable to that of the nearest neighbor model.The Barker algorithm, however, had worse predictive accuracy. § CONCLUSION AND FURTHER RESEARCH In this research, we presented possible approaches for using minibatching with MCMC algorithms when fitting Gaussian processes to large spatial datasets.In terms of parameter estimation, minibatch sizes of greater than about 25% of the data resulted in comparable estimation performance to algorithms that used all of the data.While small minibatches resulted in poor estimation of parameters, such minibatches seemed to have no impact on the predictive performance.Generally, we presented two algorithms for using minibatching within MCMC algorithms: one based on an adaptive minibatch size (Barker) and one based on a fixed minibatch size.The advantage of the Barker approach is that the minibatch size needed to achieve a sufficient approximation of the acceptance rule is automatically chosen within the algorithm.In our studies, however, this advantage was offset by the computational time needed to find the appropriate batch size.Hence, based on our experience, we recommend the fixed batch approach as it it faster and still gives good posterior performance.In practice, the batch size B will be chosen based on the computational demand and in this work we evaluated the performance based on various fixed batch sizes.However, we note that the result in (<ref>) can guide the choice of minibatch size in a few different ways.First, given an approximation of, say, σ^2_q_j (perhaps obtained using the starting values of the MCMC algorithm or from the first several draws of the algorithm), B can be chosen so that 𝕍ar(nq_j,B) in (<ref>) is less than a certain threshold.This will guarantee that the minibatch meets certain approximation criteria. 
Alternatively, at any given iteration of the MCMC algorithm, an estimate of σ^2_q_j can be obtained based on an initial batch size (similar to Algorithm <ref>) and then B can be chosen to meet an approximation criteria resulting in a different batch size at each iteration.However, varying the batch size by iteration can slow computation.We note that this research focused on stationary GPs because stationarity is commonly assumed in spatial analyses.However, minibatching might impact anisotropy, nonstationarity or spatio-temporal GPs differently.For example, nonstationary models require local information because the correlation changes over the spatial domain.As such, minibatching might result in a loss of local information leading to more variability in the posterior.Future work needs to focus on the use of these methods on more general correlation structures.Our approach approximates accept-reject rules based on minibatch samples.As such, these approximations can be used for any spatial model - not just the one detailed in Equation (<ref>) here. For example, similar approaches to minibatching could be used to fit non-Gaussian spatial linear models.We plan to investigate this in future work.ba
http://arxiv.org/abs/2310.17766v1
{ "authors": [ "Matthew J Heaton", "Jacob A. Johnson" ], "categories": [ "stat.CO", "stat.ME" ], "primary_category": "stat.CO", "published": "20231026202029", "title": "Minibatch Markov chain Monte Carlo Algorithms for Fitting Gaussian Processes" }
http://arxiv.org/abs/2310.17731v1
{ "authors": [ "Jordan Wilson-Gerow" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20231026184200", "title": "Conservative Scattering of Reissner-Nordström Black Holes at Third Post-Minkowskian Order" }
Text2Bundle: Towards Personalized Query-based Bundle Generation Zhihua Wei January 14, 2024 ===============================================================Metastatic spread is a crucial process in which some questions remain unanswered. In this work, we focus on tumor cells circulating in the bloodstream, so-called Circulating Tumor Cells (CTCs). We aim to characterize their trajectories under the influence of hemodynamic forces and adhesion forces resulting from interaction with an endothelial layer using in vitro measurements performed with a microfluidic device. This essential step in tumor spread precedes intravascular arrest and metastatic extravasation. Our strategy is based on a differential equation model – a Poiseuille model for the fluid velocity and an ODE system for the cell adhesion model – and allows us to separate the two phenomena underlying cell motion: transport of the cell through the fluid and adhesion to the endothelial layer. A robust calibration procedure enables us to characterize the dynamics.Our strategy reveals the expected role of the glycoprotein CD44 compared to the integrin ITGB1 in the deceleration of CTCs and quantifies the strong impact of the fluid velocity in the protein binding. [1in] 2010 Mathematics Subject Classification. 62-07; 65L09; 76Z99; 97M60;Keywords and phrases. Differential equations; Parameter estimation; Circulating tumor cells; Biological data [1in] § INTRODUCTIONOne of the most important and deadly features of solid tumors is the increased ability of cancer cells to migrate and invade other organs, which is called metastatic spread.In the last 70 years the number of cancer deaths registered with metastasis has tripled <cit.>. Different tumors have substantial incidence variation. However, metastasis is the major source of cancer-related death <cit.>. The blood and lymphatic circulations are used as a means of transport to reach distant organs. Tumor cells that have previously detached from a primary tumor can invade the surrounding extracellular matrix. Successful intravasation into the vessels means that cancer cells can now leave the original site. Inside the blood vessels, hostile conditions prevail. Circulating Tumor Cells (CTCs) are subjected to physical stresses that include hydrodynamic flow and loss of attachment to a substrate, as well as other obstacles involving the human immune system (and platelets) <cit.>. These factors lead to a significant decrease in the number of CTCs and also to their eventual clustering. The remaining single cells or small cell clusters eventually extravasate, reaching a secondary site where they either stay dormant or form a new tumor <cit.>.CTCs receive much research interest due to their therapeutic potential in liquid biopsy <cit.>. Indeed, they could allow to monitor tumor heterogeneity or response to a treatment, but also to detect the minimal residual disease, and serve as a prognosis biomarker or as a target for personalized therapies <cit.>. However, the detection, identification and characterization of CTCs present important challenges due to their heterogeneity and low abundance <cit.>. From the biological standpoint, understanding the key steps involved in CTCs arrest on the endothelial wall is crucial to explain secondary tumour locations. Indeed, the possibility of extravasating is permitted by CTCs arrest and firm adhesion to the vascular endothelium, phenomena that need further insights <cit.>.Studies previously pursued by biologists Follain et al. <cit.> and Osmani et al. 
<cit.> have deepened into the mechanical cues that promote CTCs successful arrest and extravasation. In <cit.>, they have shown that an optimal flow is required for CTCs to arrest on the endothelium of the vascular wall. Furthermore, in <cit.>, they have identified the adhesion receptors at play. Early adhesion is mediated by the glycoprotein CD44, involved in a weak form of bonds, while integrin ITGB1 favors stabilization of the adhesions. The team of biologists have performed both in vitro and in vivo experiments. In vitro experiments consist in using a microfluidic channel with controlled fluid velocity (simulating a blood vessel) into which tumor cells are injected.In vivo experiments are led on zebrafish embryos where they can followCTCs pumped by the heart along the vascular architecture. In the present work, these in vitro data will be exploited in combination with mathematical modeling. Various theoretical models of cell adhesion have been developed over time. First studies have focused on the binding dynamics of a single bond in a kinetic setting <cit.>, while bonds clusters sharing a constant or varying load have been considered in <cit.>. In the case of inflow cell dynamics, several biological questions can be addressed, such as the emergence of several cell displacement regimes (freely-flowing, rolling, slipping, stationary arrest) with possible bistability or shear-threshold effect between them. A related issue concerns the bonds response to hydrodynamic forces, with catch bonds whose lifetime increases with load, slip bonds for which it decreases exponentially with load (so-called Bell's law), or a combination of both depending on the shear rate.First computational approaches allowed to describe a hard sphere submitted to hydrodynamics forces and stochastic binding interaction with the wall <cit.>. This framework has been applied to leukocytes adhering through L-selectin ligands <cit.>. A simpler setting with slip bonds allowed to obtain analytical characterizations <cit.>. In <cit.>, the adhesion of a rolling sphere is described following the membrane approaching the wall at the front, and detaching at the rear, for catch-slip bonds. Numerical simulations illustrate the effect of shear rate on the steady state. In <cit.>, both translational and rotational motions of a spherical cell are affected by elastic bonds. This allows to explain the interplay between rolling and slipping, and provides a numerical state diagram of leukocyte motion. Theoretical models can also take ligands positions into account, thus enabling bonds tilting and subsequent cell sliding. The transition between rolling and sliding for a critical shear rate is established for a rigid cylinder in <cit.>, where elastic bonds are described by a distribution function structured by position. This continuum framework is also used in mathematical modeling approaches in deterministic <cit.> and stochastic <cit.> settings. In the same spirit but in the absence of space structure, stochastic and deterministic models are developed for a particle cell in <cit.>. Mathematical analysis provides a parameter space for cell regimes together with an explicit formula for the mean arrest time. Although minimal in the hydrodynamics description, these models have less parameters and are therefore more suited to calibration with experimental data.Theoretical frameworks have been confronted to microfluidics experiments on CTCs in <cit.>. 
First, using an empirical model, the authors investigate cell detachment in response to fluid acceleration for N-cadherin based adhesion <cit.>. Motivated by CTCs isolation in liquid biopsies, they also consider cells arrest on a wall coated with EpCAM (epithelial-cell adhesion molecules) antibodies <cit.>.Finally, in <cit.>, the authors perform microfluidics experiments to study the effect of the shear rate on the dynamics of breast cancer cells interacting with an EpCAM-coated wall. Three regimes were observed (freely-flowing, firmly adhering, and rolling/slipping). Experimental data consisted in trajectories and in stopping times and lengths that were used to empirically calibrate a model based on <cit.>. More precisely, the cell-wall gap, the typical adhesion force and the spring constant were sequentially identified by numerical investigations. Then, the cell velocity during capture was well fitted by a decreasing exponential function, yielding a typical decreasing time characteristic of the cell-wall interaction. In this work, we aim to capture the role of adhesion proteins and hydrodynamic forces and to understand their interplay focusing on the first phase of CTCs interaction with the endothelial wall. We use a Poiseuille model for the fluid velocity, and weakly couple it to a modification of the model proposed in <cit.>. This modeling approach allows its rigorous calibration using the in vitro experiments carried out by Osmani and collaborators, see <cit.>. In this model, the cell velocity depends on both the fluid velocity and the bonds density, while the binding dynamics takes into account bonds formation, adhesion growth, and unbinding.The work is arranged as follows. Section <ref> contains the main information about the biological data. More specifically, the data consist of 9 videos of CTCs transported by the fluid at 3 different velocities and with 3 different cell types (control, ITGB1-depleted, and CD44-depleted cells), see Subsection <ref> for protocol details and Subsection <ref> for data presentation. Trajectories and velocities of 149 cells were extracted from these data. Section <ref> is devoted to methods. After a brief statistical analysis of the data in Subsection <ref>, which shows the statistically significant slowing behavior of CTC velocities over time in most cases, Subsections <ref> and <ref> present the mathematical modeling. In Subsection <ref>, a strategy for parameter estimation of this model is presented. The results showing the good agreement of the model with the data are presented in Section <ref>. A discussion is presented in Section <ref>, and finally, conclusions are given in Section <ref>. An important result is that the estimated values of the parameters allow the deciphering of the CTC binding. Indeed, this work demonstrates the expected role of the glycoprotein CD44 compared with the integrin ITGB1 in slowing CTCs. It also allows quantification of the strong influence of fluid velocity on protein binding.§ DATAIn this Section, we present the experimental data, beginning with their acquisition and ending with their extraction. First, in Subsection <ref> we present the experimental protocol. Then, in Subsection <ref>, we show what kind of data are obtained, and briefly present the tracking techniques used to extract the trajectory and velocity of 149 cells. We then present the resulting cell velocities.§.§ ProtocolWe consider in this work in vitro experiments. 
Human Umbilical Vein Endothelial cells (HUVEC, Promocell) were seeded at 30 000 cells per channel in a rectangular microfluidic channel (IBIDI) of length L = 1.7 × 10^-2 m, width l = 3.8 × 10^-3 m and height h = 4.0 × 10^-4 m. Medium was changed twice a day until the cells reached maximal confluency (3 to 4 days). D2A1 mouse breast carcinoma cells were transfected with siRNA using Lipofectamine RNAiMAX (Thermo Fisher) following the manufacturer's instructions. Experiments were performed between 72 and 96 hours post-transfection. Three days after siRNA transfection, D2A1 cells were resuspended at a concentration of 10^6 cells/ml in a HEPES-buffered cell culture medium and perfused into the channel using a REGLO Digital MS-2/12 peristaltic pump (Ismatec), Tygon LMT-55 3-stop tubing (IDEX), 0.5 and 1.6 mm silicon tubing and elbow Luer connectors (IBIDI). In the setup of the pump, the mean value of the entry pressure gradient is fixed and denoted by G. The generated fluid velocity – which contains oscillations due to the pump – depends on the position in the channel and is not measured. A CMOS camera (IDS) is placed to record the motion of cells located in a focal plane at a distance _f^m from the endothelial layer. The experimental data consist of timelapse movies acquired at a rate of 24 frames per second for 2 minutes, over a rectangle of width ℓ_cam = 5.63 × 10^-4 m and height h_cam = 2.99 × 10^-4 m. The setup is shown in Figure <ref>.§.§ Data availability An example of a video image is shown in Figure <ref>. The cells forming the endothelial layer are seen in the background, whereas moving CTCs that are not in the focal plane of the camera are seen in the foreground. These CTCs have a well-defined shape, so that their trajectories can easily be followed while they appear in the video. Most cells are smoothly transported through the fluid. Sometimes, a cell stops on the endothelial layer. This arrest can be stable, meaning that the cell remains attached to the endothelial layer, or unstable, i.e. the cell can detach due to a collision with other cells or under the effect of the hydrodynamic forces. However, in this work we focus only on non-arrested cells. The different experimental subgroups are summarized in Table <ref>. Experiments were performed with the peristaltic pump keeping the fluid at a controlled pressure gradient. Three values of the pressure gradient have been considered: G^(1) = 50.33, G^(2) = 100.66 and G^(3) = 201.32. For each of these cohorts, small interfering RNA (siRNA) depletion of adhesion proteins gives rise to three sub-cohorts, see Western-blot results in Supplementary materials in <cit.>: * siCTL: control group (D2A1 cells treated with a CTL siRNA); * siITGB1: depletion of integrin ITGB1 (D2A1 cells treated with a siRNA targeting ITGB1); * siCD44: depletion of CD44 (D2A1 cells treated with a siRNA targeting CD44). We collect cell trajectories from the 9 different videos (3 different pressure gradients × 3 different protein expressions). For the first two values of the pressure gradient, we use a semi-automatic tracker called Channel and Spatial Reliability Tracker (CSRT), which consists in first manually drawing a box around the cell of interest and then automatically recording the box position at each frame of the video <cit.>, see Figure <ref>. In the videos recorded with the highest pressure gradient, the mean fluid velocity is too high for the tracker to follow the CTCs automatically. For this reason, they were tracked manually.
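As an illustration of the semi-automatic tracking step, OpenCV ships a CSRT implementation that can be driven as below; the video path and the initial bounding box are user inputs, and depending on the OpenCV build the constructor lives in cv2 or cv2.legacy. This is a sketch of the procedure, not the exact pipeline used for the data.

```python
import cv2

def track_cell(video_path, init_box):
    """CSRT tracking of one cell from a user-supplied box (x, y, w, h) on the
    first frame; returns the list of box centres, one per frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = cv2.TrackerCSRT_create()   # cv2.legacy.TrackerCSRT_create() on some builds
    tracker.init(frame, init_box)
    centers = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if not found:
            break
        x, y, w, h = box
        centers.append((x + w / 2.0, y + h / 2.0))
    cap.release()
    return centers
```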
By differentiating the trajectories with a first-order scheme, we directly obtain the cell velocities. Note that the tracking procedure captures only the translational motion. Therefore, the data used in this work do not allow discussing possible CTCs rolling. Table <ref> summarizes the number of cells considered in each of the nine videos. For each velocity cohort, at least 40 cells were considered. Some cells differed markedly from the others since they had very high initial velocities. We assumed that they either collided with another cell before entering the video time-lapse, or that they were located in a different focal plane, and we excluded them. These outliers make up only a small portion of the data, since they account for at most 5 cells per velocity cohort (12 out of 149 cells, ∼ 8 %), see Table <ref> for details. The extracted velocities over time of the N_cell = 137 cells are given in Figure <ref>. Individual cell velocities are shown in transparent color and the weighted mean across cells is shown in solid color. The choice of weighted means is motivated by a better visualisation of the results, since the noise is reduced. The strategy to obtain these weighted means is detailed in Subsection <ref>. The red curves correspond to siCTL cells, green to siITGB1 cells, and blue to siCD44 cells. For each figure, the straight lines correspond to the linear regressions of the weighted means of velocities after adjustment of the initial time in order to synchronize the oscillations. The data are normalized by 100, 200 and 400 μm/s, one value per pressure-gradient cohort, in order to facilitate the comparison between the different cohorts. The values considered will be explained later in Section <ref>. However, it should be noted that the ratio between the selected velocities is equal to the ratio of the pressure gradients.§ METHODS This study aims at deciphering the influence of hydrodynamic and adhesion forces on the dynamics of CTCs moving in interaction with the wall of a microfluidic device. First, we perform a brief statistical examination of the tracked cell velocities in Subsection <ref>. Second, a fluid velocity model is presented in Subsection <ref>. Third, a model for the CTC velocity under both hydrodynamic transport and adhesion to the wall is derived in Subsection <ref>. Finally, in Subsection <ref> we fit the model to the experimental data using a well-designed parameter estimation technique. All statistical analyses were performed with . For the t-tests, we use the function of the library . §.§ Quick statistical analysis of the data We perform a preliminary statistical analysis of the velocity data shown in Figure <ref>. The raw data are preprocessed as follows. First, the velocities are normalized with respect to the fluid pressure cohort, to remove the linear dependence on the fluid velocity. Each velocity in transparent color is corrected for phase shift to synchronize the oscillations. To do so, we use the estimated cell velocities, aligning their maximum points over two periods of oscillation. Finally, spurious data are filtered, to deal with the additional noise brought by the first-order derivation of velocities from positions. Indeed, some velocity values can be artificially large or low and perturb the analysis. To deal with this difficulty, we compute weighted means assigning a null weight to values above 0.85 and below 0.35. We then perform a quick statistical analysis of these velocities between the different cohorts and subcohorts.
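A sketch of this preprocessing (finite-difference velocities, cohort normalization, and weighted means with null weight outside [0.35, 0.85]) is given below; the pixel size, the frame rate conversion and the NaN handling are assumptions about how the tracked positions are turned into velocities.

```python
import numpy as np

FPS = 24.0  # frames per second

def velocities_from_positions(x, pixel_size):
    """First-order finite differences of the tracked positions, converted to m/s."""
    return np.diff(np.asarray(x) * pixel_size) * FPS

def weighted_mean_velocity(norm_velocities, lo=0.35, hi=0.85):
    """Weighted mean across cells at each time point; normalized values outside
    [lo, hi] (spurious derivation noise) receive zero weight.
    norm_velocities has shape (n_cells, n_times) and may contain NaNs."""
    v = np.asarray(norm_velocities, dtype=float)
    w = ((v >= lo) & (v <= hi) & np.isfinite(v)).astype(float)
    v = np.where(np.isfinite(v), v, 0.0)
    return (w * v).sum(axis=0) / np.maximum(w.sum(axis=0), 1.0)
```

The t-tests and linear regressions described next operate on these weighted means.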
First, we run t-tests of the mean velocity values to determine if the differences between cohorts and subcohorts are significant. Second, we run linear regressions on the velocity values and use them to determine for which cohort and subcohort the observed decreases are significant. §.§ Fluid velocity modelingIn this subsection, we derive a model for the fluid dynamics in the microfluidic device. When the viscous effects of the fluid prevail over convection, the Navier-Stokes equations can be reduced to a Poiseuille equation. In that case, the flow shows a parabolic profile at each time, with a maximal velocity in the center of the channel decreasing to zero at the walls. In case of a time-independent pressure gradient, the Poiseuille regime is valid when the fluid verifies the following properties : (1) it is incompressible and Newtonian ; (2) the gravitational effect on the fluid is negligible;(3) its flow is laminar ; (4) and its velocity profile does not evolve over the pipe's length denoted by L in what follows.Conditions 1 and 2 are allowed when working with a microfluidic device where the fluid is mainly comparable to water. Condition 3, can be checked by calculating the Reynolds number given by Re= ρQ D_h/μS,where ρ is the fluid density, Q the volumetric flow rate, μ the dynamic viscosity, D_h = 2(l× h)/l+h the hydraulic diameter of a fully submerged rectangular channel and S = l× h the cross-section surface. The density can be taken as ρ= 1.00d3. Based on the experiments and procedure of Osmani and coworkers in <cit.>, we have D_h = 7.24d-4, S= 1.52d-6□ and Q≤5.67d-9.For the dynamic viscosity, one may refer to <cit.> (Table 2) to obtain a close approximation of the value (different medium, but similar composition when no FBS is added). The corresponding value is μ = 7.31d-4. It follows that Re ≤ 3.75. This value is much smaller than the critical Reynolds number for the transition from a laminar to a turbulent state, that is equal to 2600 in the case of a rectangular tube with a width eight times larger than the height, see <cit.>.Finally, in order to verify Condition 4, we must determine the hydrodynamic entrance length of our microfluidic device (for more details, see <cit.>, Chapter 8, Section 8.1). For rectangular channels at laminar flow, a formula has been derived in <cit.>. The hydrodynamic entrance ℓ (in meters) is then given as a non-linear function of the aspect ratio AR =h/l and the Reynolds number Re. Using the upper bound of Re determined previously, a quick computation gives ℓ≤1.53d-3.We can thus consider the Poiseuille flow to be fully developed (e.g. independent of the length L) if we observe at least 1.53d-3 away from the pipe inlet. According to the protocol given in <cit.> the data was collected as close as possible to the center of the device lengthwise, which is at about 8.50d-3 meters away from the inlet. In our case, however, the hypothesis of a time-independent pressure gradient is not valid. In fact, the fluid dynamics is affected by the angular velocity of the pump rotor, leading to a time-dependent oscillatory perturbation term to the pressure gradient term, which we model as followsG (1 + ξ_f cos(ω_f t + φ)),where ξ_f is the multiplicative correction amplitude, ω_fthe angular velocity, and φ the cell-dependent phase shift. When working with an oscillating pressure gradient, the condition for the establishment of a parabolic velocity profile is strongly tied to the frequency of the oscillation relative to the viscosity of the fluid. 
Such relation is given through a dimensionless coefficient 𝐖𝐨 = h/2√(ω_f ρμ) introduced by Womersley in <cit.>,which has to be inferior to 1 when the mean value of the pressure gradient is zero. A non-zero mean value will however relax such constraint, and using the results from <cit.> along with supplementary observations, one can readily show a parabolic profile is obtained in our case, see Remark <ref> and Supplementary Material <ref> for more details. The fluid velocity can therefore be written as G/2νρ_f^m (-_f^m)+ξ_f G/ρω_f i (sinh(Z _f^m)+sinh(Z(-_f^m))/sinh(Z) - 1) e^i ( ω_f t +φ),where i is the imaginary unit and Z= (1+i) √(ω_f/2ν) with ν=μ/ρ the kinematic viscosity.We recall that _f^m is the distance between the wall and cells in the focal plane and = 4.00d-4 is the channel height. Taking the real part of this solution and performing computations (based on the linearity of the system and the principle of superposition), the fluid velocity in our context writes u_f(t) =u̅_f(_f^m) + i_f(_f^m,ξ_f,ω_f) cos(ω_f t +φ) + r_f(_f^m,ξ_f,ω_f) sin(ω_f t+φ),where u̅_f(_f^m) = G/2νρ_f^m(-_f^m), r_f(_f^m,ξ_f,ω_f)= ξ_f G/ω_fρRe( 1- sinh(Z_f^m)+sinh(Z(-_f^m))/sinh(Z)), i_f(_f^m,ξ_f,ω_f)= ξ_f G/ω_fρIm( 1- sinh(Z_f^m)+sinh(Z(-_f^m))/sinh(Z)).Note that u̅_f is the mean fluid velocity, while the unknown parameters are _f^m, ξ_f, ω_f and φ.Numerical approximations for the full device were also performed to test the hypothesis and investigate its limitations. These results confirming our fluid modelling can be found in the Supplementary Materials <ref>. More specifically, we begin by verifying the value of the hydrodynamic entrance length, see Subsection <ref>, and we follow with validation of the fluid expression in Subsection <ref>. A final subsection <ref> focuses on how the non-zero mean value of the pressure gradient allows us to work with a Poiseuille velocity profile. §.§ Cells velocity modelingIn this subsection, we define a deterministic model for cell motion based on a coupling between fluid velocity and adhesion dynamics, following previous studies <cit.>.The interest is in capturing the different behaviours induced by varying fluid velocity and the number of expressed proteins. Both changes have an impact on cells velocity.By denoting N the bonds density and u_c the cell velocity, the model writes ∀ t > 0,{[N'(t) = c + (r - d)N,; u_c(t) = u_f(t) - B( u_f(t),u_c(t) ) N(t), ].together with the initial condition N(0)=0. In System (<ref>), c, r, d are given inand stand respectively for the global binding rate, the growth rate and the unbinding rate. The function B accounts for the velocity decrease arising from a unit adhesion density. All parameters are nonnegative.We assume that the adhesion parameters are time-independent and depend only on the mean fluid velocity:c=c(u̅_f), r=r(u̅_f),andd=d(u̅_f).This amounts to neglecting the effects of fluid velocity oscillations on the binding dynamics. The function B can depend either on fluid velocity or on the cell one. Three models are considered: * Constant force model:B( u_f(t),u_c(t) )= b ,where b (in ) quantifies the absolute velocity decrease induced by each unit of bonds density.* Fluid-dependent force model: B( u_f(t),u_c(t) )= bu_f(t) , with b is the dimensionless proportion of velocity decrease induced by each unit of bonds density.* Cell-dependent force model:B(u_f(t), u_c(t)) = b u_c(t) , where b is the dimensionless parameter for the friction ratio between bonds stiffness and fluid viscosity. 
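The laminarity and pulsatility checks, and the resulting velocity expression for u_f, can be reproduced numerically as below. Two reading assumptions are made explicit: the channel height h is taken to complete the sinh terms where symbols were lost in extraction (i.e. sinh(Zh) in the denominator), and the Womersley number is computed with the standard definition (h/2)√(ω_f ρ/μ).

```python
import numpy as np

rho, mu, h, l = 1.00e3, 7.31e-4, 4.0e-4, 3.8e-3   # SI units, values from the text
nu = mu / rho

# Condition 3: laminar flow (Re well below the critical value of 2600)
Q_max = 5.67e-9
S, D_h = l * h, 2.0 * l * h / (l + h)
Re = rho * Q_max * D_h / (mu * S)                  # about 3.7

# Pulsatility: Womersley number for the first cohort's prior pulsation (12 rad/s)
omega_1 = 12.0
Wo = (h / 2.0) * np.sqrt(omega_1 * rho / mu)       # about 0.8

def fluid_velocity(t, G, y, xi_f, omega_f, phi):
    """Velocity at height y above the wall for the oscillating pressure gradient."""
    Z = (1.0 + 1.0j) * np.sqrt(omega_f / (2.0 * nu))
    u_bar = G / (2.0 * nu * rho) * y * (h - y)
    A = 1.0 - (np.sinh(Z * y) + np.sinh(Z * (h - y))) / np.sinh(Z * h)
    r_f = xi_f * G / (omega_f * rho) * A.real
    i_f = xi_f * G / (omega_f * rho) * A.imag
    return u_bar + i_f * np.cos(omega_f * t + phi) + r_f * np.sin(omega_f * t + phi)

# Example: fluid_velocity(0.0, 50.33, 7.5e-6, 0.3, 12.0, np.pi) is of the order
# of 1e-4 m/s, i.e. roughly 100 micrometres per second at the focal plane prior.
```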
We consider the cell-dependent model for the cell velocity equation: B(u_f(t), u_c(t)) = b u_c(t). Under Assumptions (<ref>)-(<ref>), System (<ref>) has an explicit solution. If d-r ≠ 0, we obtain for t>0
N(t) = c/(d-r) ( 1-e^-(d-r) t ), and then u_c(t) = u_f(t) / ( 1 - (b c/(d-r)) (e^-(d-r)t-1) ).
Note that u_c(0) = u_f(0), so that the cell has not formed adhesion bonds at the initial time. On the other hand, experimental observations may start only after the cell has initiated an adhesive interaction with the wall. This is why we introduce an additional parameter τ ≥ 0 that stands for the observation time lag. A cell with a small value of τ is observed with an initial velocity approaching the fluid velocity, while a cell with a large value of τ enters the observation zone with a lower velocity. Finally, the relative decrease between the cell and the fluid velocities at time t ≥ 0 is given by the quantity 1-u_c(t)/u_f(t). Its limit as t → ∞ then quantifies the asymptotic cell regime, and is given by d_% := bc/(d-r+bc). To conclude, our coupled model, of parameters h_f^m, ξ_f, ω_f, φ, τ, b, c, r and d, reads
u_f(t) = u̅_f(h_f^m) + i_f(h_f^m, ξ_f, ω_f) cos(ω_f t+φ) + r_f(h_f^m, ξ_f, ω_f) sin(ω_f t +φ),
u_c(t) = u_f(t) / ( 1 - (b c/(d-r)) (e^-(d-r)(t+τ)-1) ).

§.§ Parameters estimation

We calibrate the model using a well-adapted estimation procedure. The main difficulties in fitting our model to the data are (a) the data noise (see Figure <ref>), (b) the little information available on the fluid velocity, and (c) the fact that the adhesion parameters are strongly correlated with the fluid parameters. Therefore, we choose a mixed-effects parameter estimation procedure. The nonlinear mixed-effects model consists in pooling all subjects in a population and estimating a global distribution of uncertainties in the population to compensate for identifiability problems <cit.>. For example, the parameters of each cell i can be divided into two types of uncertainties: a first part that is the same for all cells (denoted by θ_pop for the parameter θ^i), corresponding to the fixed effect, and a second part that represents individual variability (denoted by θ_ind^i), corresponding to random effects, i.e. θ^i = θ_pop+θ_ind^i. Different covariates can also be added, e.g., for different cohorts of the population. A nonlinear mixed-effects estimation algorithm – the stochastic approximation expectation maximization (SAEM) algorithm <cit.> – is implemented in the software used here <cit.>. Thanks to the associated R package, we could easily run the estimation from R. The code and extracted cell velocities are available at https://plmlab.math.cnrs.fr/gciavolella/ctc_adhesion_microfluidic (a recent version of the software is required).

Fluid parameters. The only known fluid parameter is the mean pressure gradient, given by G^(j) = 2^j-1G^(1), where (·)^(j) denotes here and in the following the velocity cohort, for j∈{1, 2, 3}, and G^(1)=50.33. Then, for most of the fluid parameters, individual variability is not considered. In addition, again to avoid identification problems, we strongly incorporate the information we have between the three cohorts by considering h_f^m^(j) = h_f^m^(1) m, ω_f^(j) = 2^j-1 ω_f^(1) rad.s^-1, ξ_f^(j) = 2^1-j ξ_f^(1), for j∈{1,2,3}. The hypothesis on ω_f is obvious, and the hypothesis on ξ_f implies that the oscillation amplitude of the pressure gradient is constant, as G^(j)ξ_f^(j) = G^(1)ξ_f^(1). Only the phase shift φ depends on the considered cell.
This implies that to represent the fluid velocity of the 3 cohorts, 142 parameters must be estimated: 3 fixed effects for _f^m^(1), ω_f^(1), ξ_f^(1), 2 fixed effects for φ (mean and standard deviation) and N_cells randoms effects for φ. Adhesion parameters Since b, c, r, and d are strongly paired, they can not be identified independently from the observations. We will then estimate only bc and d-r.The fact that we link all cohorts has the great advantage that we can constrain the values of the fluid parameters, but it introduces a bias in the estimation of the individual variability of the parameters, since the fixed effects are the same for the 3 cohorts. For this reason, we consider cohort covariates for bc: log(bc^i) = log(bc_pop) + β^(2)_bc[if =2] + β^(3)_bc[if =3] + bc^i_ind. For the adhesion parameters bc, d-r and τ, the behaviour of the individual cell is integrated meaning that 420 parameters must be estimated: 2× 3 fixed effects, 3 covariates for bc and 3 N_cells random effects.Distribution lawsFor parameter distributions, we consider logitnormal distributions for _f^m, ξ_f^m, φ and τ to keep them respectively in 01.5 d-5, 01, [parse-numbers = false]02π, and010.We consider lognormal distributions for the other parameters. Furthermore, whenever mixed effects are estimated, prior values and prior standard deviations are given to the SAEM algorithm. We will denote by the superscript (·)^* the priors and by (·)^s the prior standard deviations. We consider(_f^m^(1))^* = 7.5d-6, (ω_f^(1))^* = 12 rad.s^-1, (ξ_f^(1))^* = 0.3,(φ)^* = π, (φ)^s = 1, (bc)^* = 0.5 s^-1, (bc)^s = 1, (d-r)^* = 0.5 s^-1, (d-r)^s = 1, (τ)^* = 0.5s, (τ)^s = 1.Model errorWe consider a constant error model and we estimate theerror model standard deviation starting from the prior value 25. Adding this value to the estimation of the fluid and adhesion parameters, 563 parameters are estimated during the procedure.§ RESULTS§.§ Quick statistical analysis of the dataThe p-values resulting from the t-tests are given in Table <ref>. The diagonal blocks correspond to comparison between different protein modifications at the same fluid pressure, while the upper diagonal blocks account for comparison of subcohorts having the same protein modification and different fluid pressures. Significant differences can be seen between different protein modifications at the same fluid pressure and between the same protein modifications at different fluid pressure. Indeed, p-values are smaller than 2 × 10^-3, except between siITGB1^(2) and siCD44^(2) (p = 0.15) and between siITGB1^(3) and siCD44^(3) (p = 0.1). Thus, at high fluid pressure gradients, the differences depleting the first or second adhesion protein are not relevant, but at smaller pressure gradients they are informative. [table]labelfont=bf,textfont=it [table]labelfont=bf,textfont=it The outputs of the linear regressions can be found in Table <ref>.The intercepts – corresponding to the value of cell velocity at time t=0 – increase with fluid pressure for the same protein modification, and for a given fluid pressure it increases with respect to protein modifications or it remains stable. 
Anyways, intercepts are always smaller than 1, value corresponding to the normalisation by 100;200;400.The slope estimate and its p-value – related to the adhesions effects during the observation duration – show a significant decrease from the intercept value for almost all cases except for siCD44^(2) and siCTL^(3).§.§ Parameters estimationIn what follows, when the parameters depend only on the velocity cohort, the 3 estimated values are given in a vector where the ^th value corresponds to the ^th cohort for ∈{1,2,3}.We estimate the following values for the fluid parameters:_f^m^(1) = 7.2d-6, ξ_f^() =(0.27, 0.13, 0.07),ω_f^() = (12.2,24.4 ,48.8) rad.s^-1 , logit(φ) ∼𝒩(logit(0.41), 1.63).Using these estimated values, the mean velocity values are computedu̅_f^() = (99.8, 199.6, 399.3). These estimated mean velocity values justify the normalization considered in Figure <ref>. For the adhesion parameters,Figure <ref> shows the individual estimated values. It corresponds to the box plots of the estimated parameters bc (top) and d-r (middle) and the resulting percentage of decrease in cell velocity d_% (bottom) for the 3 cohorts (left: pressure gradient fixed at G^(1) middle: at G^(2), right: at G^(3)) and for all considered cells (red: siCTL, green: siITGB1, blue: siCD44). We also add p-values ranges of the t-tests between the estimated values of parameters for different protein modifications at the same fluid pressure. To facilitate the comparison between the same protein modifications at different fluid pressure, Figure <ref> shows the same estimated values but sorted by protein modifications instead of fluid pressure gradient. The p-values ranges of the t-tests between the estimated values of parameters at different fluid pressure gradients for the same proteins are shown. Table <ref> summarises the mean and standard deviations by cohorts and subcohorts. To facilitate the reading of this table, the mean values and the associated standard deviations are plotted in Figure <ref>.To conclude this section, Figure <ref> shows numerical fits compared to the experimental data. Independently of the fluid cohort, two typical behaviours are observed: the CTCs velocity either remains stationary or decreases. Therefore, we show examples of these behaviours for each velocity cohort in the siCTL case only. Velocity values are normalised by 2^-1×100 for each ∈{1,2,3}.[table]labelfont=bf,textfont=it§ DISCUSSION §.§ Data extraction and statistical analysis One difficulty in detecting cells was to select the correct cells. Indeed, among the cells with an apparently free trajectory, we had to pick out the CTCs without collisions, without arrests, or without problems in tracking, e.g., due to proximity to other cells. This selection obviously has an impact on the results. This selection could be automated by a more efficient tracker, especially a fully automatic tracker even for large velocities.In addition, both the tracking method and the experimental setup only resulted in the measurement of translational cell velocities, which did not provide any information about possible rolling of the cells.Concerning the statistical analysis of velocity data, significant differences between mean velocities in almost all of the cohorts and subcohorts (p-values smaller than 2× 10^-3, see Table <ref>) illustrate the importance of the fluid velocity and of the CD44 and ITGB1 proteins in the adhesion phenomenon. 
The mean velocities were not significantly different between siITGB1^(2) and siCD44^(2) (p = 0.15) and between siITGB1^(3) and siCD44^(3) (p = 0.1), probably due to the higher fluid velocity impeding adhesion. This is also supported by the linear regressions on the velocities, see Table <ref>). In each cohort and subcohort, the intercept smaller than 1 indicates the presence of adhesion, higher for lower fluid velocities and for siCTL cases. The slopes show a decelerating dynamics for 7 out of 9 subcohorts (p-values less than 2× 10^-2). These values are negative but close to zero, indicating the presence of stationary velocity profiles, see Figure <ref>. The goal of the mathematical model – presented in Subsections <ref> and <ref> – was to understand and quantify these initial observations. §.§ Fluid modelingAs for the fluid modeling, the choice of the pressure gradient under the cosinusoidal form has an important impact, since it can be shown that in our context the imaginary part i_f is larger than the real part r_f, which means that most of the oscillations of the cell velocities are under the cosinusoidal form. To make more complex assumptions, a better knowledge of the pump is needed.As for the estimation of fluid parameters, the strong correlation between fluid and adhesion parameters leads to identification problems. For example, we observed that multiple values of the parameter pair (bc,u̅_f(h^m_f)) resulted in similar fits. We then decided to constrain the fluid parameters using priors from the literature to allow more variation in the adhesion parameters. The prior value of h_f^m (only parameter appearing in the mean velocity values u̅_f) has a strong impact on the results. The value has been selected to be coherent with the velocity measures with single-particles performed in <cit.> (see Figure 6-C). Its estimation led to mean velocity values very close to the estimated ones in <cit.>, which are 100;200;400. The prior of the angular velocity ω_f and its dependence on the pressure gradient can be derived directly from Figure <ref>-Top-Left (10 oscillations 5 gives ∼ 10/5× 2π∼12.6). As for the correction amplitude ξ_f, which appears in the amplitudes r_f and i_f in Equation (<ref>), we have halved its value in each successive cohort. This is confirmed by the amplitudes observed in Figures <ref> and <ref>. This last hypothesis was also confirmed by looking at the AIC for the case where ξ_f was constant (AIC=130019.6) as opposed to ξ_f depending on fluid pressure (AIC=129908.7). Of course, more complex hypotheses could be tested, but this would also require a better understanding of how the pump works. §.§ Adhesion modeling ModelingThe adhesion dynamics is described by an ODE on the adhesion density. The modeling choice is built on Assumptions (<ref>)-(<ref>). Assumption (<ref>) states constant binding rates in each velocity cohort. By doing so, the effects of velocity oscillations on the adhesion dynamics are neglected. Several biophysical studies investigated the relation between the load applied on a cell and its binding dynamics, and introduced catch bonds or slip bonds (see e.g <cit.>). These studies focus mainly on L-selectin bonds for leukocyte dynamics. For CTCs, it is known that CD44 mediating transient adhesion might bind glycocalyx, glycoproteins such as E-selectin or endothelial CD44, or fibronectin at the surface of endothelial cells. 
Stable adhesions are mediated by alpha5beta1 integrins that also bind fibronectin <cit.>.However, fluid velocities being quite high, capturing the binding response to variations of u_f seemed out-of-reach. Furthermore, the identification issues we had to deal with convinced us that the data were not well-suited for investigating this question.Assumption (<ref>) then enables to derive an explicit solution for the bonds density over time given in Equation (<ref>). It can be noted that this equation makes biological sense only when d-r > 0, which is indeed found during the estimation of parameters. In this case, the bonds density increases exponentially and then saturates at c/(d-r). When d-r > 0, the asymptotic cell regime d_% cannot reach 1, preventing cell arrest. This shows that our modeling is not suitable to account for arrested cells.In our model, cell velocity is given by the difference between the fluid velocity and an adhesion term. This formulation is classically found in other modeling approaches, see e.g <cit.>. In a macroscopic setting, the adhesion term is proportional to the closed bonds density, and involves both geometric features and forces and torques exerted by the bonds. Rather than making this term explicit, we kept a minimal framework and compared three expressions.The constant force model, given by u_c(t) = u_f(t) - b N(t), accounts for constant binding forces at the cell scale, see <cit.>. The fluid-dependent force model writes u_c(t) = u_f(t) (1 - b N(t)) and can be seen as a simplification of a model for elastic bonds given in <cit.>.Finally, the cell-dependent force model writes u_c(t) = u_f(t) - b u_c(t) N(t), leading to Equation (<ref>). This framework can be seen as a macroscopic viewpoint for the average force exerted by an elastic bond over its lifetime. Several studies show matching microscopic viewpoints involving structured bonds density capturing bonds elongation. In <cit.>, the membrane at the cell rear moves away from the wall at a normal velocity proportional to u_c. In <cit.>, elongation is the product of the bond's age and the cell velocity. Furthermore, in <cit.>, a scaling limit in fast bonds turnover and rigid forces allowed to justify rigorously a macroscopic adhesion term proportional to u_c(t). Assumption (<ref>) consists in the choice of the cell-dependent force model over the others. The fluid-dependent force model (AIC=129910.6) slightly differs from the cell-dependent one (AIC=129908.7) in AIC. Using the constant force model strongly degrades the AIC (AIC=130066.8) showing that this assumption should be excluded. Parameters estimation Concerning the adhesion parameters, we could only estimate bc, d-r and τ. In contrast with the fluid whose behaviour is the same in all experiments, cell parameters vary from cell to cell. They were taken as the sum of a fixed population effect and of an individual random term. Several attempts have been realised to obtain the optimal parameters from which we could interpret the biological phenomenon at study. In particular, we have considered a covariate model for bc in order to integrate the cohorts effects.One could also add covariates with respect to subcohorts, but this seems only to complexify the definition of bcwithout improving the AIC. The same procedure can be applied to d-r.Again we have tried this strategy adding covariates respect to cohorts, subcohorts and adhesion group and both fixing d-r in the population and considering individual variability. 
We got either similar or higher AICs (superior to 129920) in estimated parameters. Consequently, we considered only individual variability of d-r. These choices aim to reduce the number of estimated parameters, while making sure to have enough decelerating cells in each estimation group. Interpretation of the estimated adhesion parametersFigures <ref> and <ref> show the estimated value of the parameter bc – accounting for elastic bonds and fluid friction forces and the binding rate – of the parameter d-r – related to bonds instability, since 1/(d-r) is a typical adhesion lifetime at the macroscopic scale – and also the percentage of velocity decrease d_%, given by a combination of the both of them. We have considered as significant p-values lower than 10% as it was a very natural threshold with only 6 values between 5% and 10% of which half of them below 6%.Figure <ref> is sorted by fluid velocity. High fluid velocity is characterised by non-significant differences among protein modification experiments which means that adhesion is more difficult to establish. For low fluid velocity, both bc and d_% are decreasing with respect to the protein modification, whereas d-r is increasing.Consequently, we deduce that the control case is the one where adhesion is more efficient with larger binding dynamics and more stable bonds. On the other hand, depleting CD44 (siCD44) impedes the adhesion the most. Finally, the intermediate fluid velocity shows only significant differences between the control case and the modified conditions.Figure <ref> is sorted by protein modification. Both bc and d_% are decreasing with respect to the fluid velocity, whereas d-r is increasing, but the results are not significant in the case siCD44 and for d-r in the case siITGB1.With these exceptions, we deduce that fluid velocity has a significant impact on adhesion and the lower the velocity, the higher the possibility of observing this phenomenon.The fact that for d-r we can not observe significant differences could also be related to the available data set and on the bias in the observation of cells deceleration. §.§ Comparison with the literature In the literature, the adhesive dynamics of MDA-MB-231 cells in a microfluidic device interacting with anti-EpCAM ligands-coated wall has been investigated in <cit.>. Sequential fitting of the computational model of <cit.> on mean translational velocities for several shear rates allowed to identify the average cell height, binding force, and bonds spring constant. Normalized cell velocities for all shear rates were successfully fitted by a generic exponentially decreasing curve, suggesting a strong dependence of the cell velocity magnitude on the fluid velocity, whereas it was not the case for the typical decay time. Several differences exist between our frameworks. From the biological viewpoint, the wall in <cit.> is passive, while ours is a monolayer of endothelial cells, whose flow-driven active behaviour in cell arrest has been observed <cit.>. Together with lower fluid velocities, it may explain their measures of smooth velocity decays until cell arrests, that were not observed in our case. Moreover, we worked with partial observations since cell velocities were not measured from their entrance in the experimental setting. On the other hand, we considered the time-oscillating fluid velocity that had to be reconstructed, and developed a mixed-effects calibration strategy able to deal with the individual cell velocities over time. 
In this setting, we obtained insights on the role of the fluid velocity that are consistent with previous observations <cit.>. Furthermore, our original and robust approach allowed to investigate the respective roles of ITGB1 and CD44 proteins in the cell dynamics, which affect both the magnitude of velocity decrease and the typical velocity decay time.§ CONCLUSION AND PERSPECTIVESIn this work, we have attempted to characterise CTCs in the flow and their interaction with the vessel wall, relying on the in vitro experiments performed by Osmani and collaborators in <cit.>. Whereas previous analyses focused on cell arrest, the use of the CSRT tracker allowed us to record trajectories and velocities of individual cells. We were able to analyse different cell cohorts with respect to three different values of fluid pressure gradient (below the threshold for efficient CTC adhesion found by Osmani and collaborators in <cit.>) and three different protein expressions (siCTL, the control case; siITGB1, depletion of ITGB1, integrin that promotes adhesion stabilisation; siCD44, depletion of CD44, protein involved in early adhesion). Statistical analysis of the mean of the extracted cell velocities and linear regression allowed the observation of a slowing behaviour over time, see Tables <ref> and <ref>. This shows that adhesion is a continuous-time phenomenon involving CTCs in a fluid with a velocity below the threshold of 400.Since the fluid velocity was not measured directly, we only knew the values of the pressure gradient generated by the peristaltic pump that made up the device. This lack of data was compounded by our lack of knowledge about the pump. However, we were able to establish a Poiseuille regime and describe the fluid velocity as a combination of oscillatory functions induced by the pump and evident in the tracked cell velocity in Figure <ref>. We then focus on the modeling of the cell velocity. The oscillating Poiseuille flow was weakly coupled to a simple ODE model for cell adhesion that describes the cell velocity as the fluid velocity affected by bond formation and disruption.Optimal parameters for our model were not easy to find. Indeed, there are practical problems with identifiability, mainly due to data noise and little information about the fluid parameters. Our strategy to overcome this problem is based on a mixed-effects model and careful selection of fluid parameter priors.The well-designed parameter estimation has led to very attractive results, also from a biological point of view. Indeed, it turns out that a low fluid velocity favours a decrease of cell velocity and the formation of bonds. In contrast, a high fluid velocity makes it difficult to observe this adhesion phenomenon, even when both adhesion proteins are expressed. At the same time, we can demonstrate the role of CD44 and ITGB1 proteins in adhesion. Without the expression of CD44 (case siCD44), CTCs do not show a favourable deceleration behaviour, preventing the formation of the first (albeit weak) interactions with the wall. In the absence of ITGB1 expression (case siITGB1), the slowdown is less important than in the control case, but still present. Both pieces of information indicate that the ITGB1 protein, in contrast to the CD44 protein, does not promote early cell adhesion to the vessel wall, even if the combination of both leads to better adhesion.These conclusions are reported in the in vivo experiments, whereas they could not be extracted from the in vitro experiments before our work. 
This highlights the quality of the strategy – based on mathematical modeling and data assimilation – we have developed. This work confirms that efficient CTC arrest relies on a 2-step mechanism: (1) an early step which requires a low energy but fast to engage CD44-dependent adhesion promotes the early arrest of CTCs in flow and (2) a high energy but slow to engage integrin beta1-dependent adhesioncounteracts shear-ripping flow forces on arrested CTCs. This second step requires an early but transient arrest of the circulating cell.As for the perspectives, the first one concerns the improvement of the CTC tracker from the experimental videos, since it is not fully automatic and has a great need of optimization. The second is the development of a mathematical model adapted to the in vivo experiments. These data should allow us to incorporate cell arrest into our model. The more complex geometry will require a more complex model of blood circulation. Finally, from a biological standpoint, it is possible to use the model presented to study and predict additional molecular modes involved in the arrest of CTCs at the vascular wall.*Author contributions AC and CE designed the study based on the biological data generated by NO in the team of JGG. GC and JG analyzed the data. AC and GC implemented the software code. AC, CE and GC interpreted the results. AC, CE and GC wrote the manuscript and JG the supplementary materials.§ ACKNOWLEDGEMENTSThe work of JGG and NO has been funded by Plan Cancer 2014-2019 (OptoMetaTrap), CNRS IMAG’IN (to S.H. and J.G.G.) and by institutional funds from INSERM and the University of Strasbourg.§ DECLARATION OF INTERESTThe authors declare no competing interests.unsrtnat Supplementary Materials of << Deciphering circulating tumor cells binding in a microfluidic system thanks to aparameterized mathematical model>> Giorgia CiavolellaJulien GranetJacky G. GoetzNaël Osmani Christèle EtchegarayAnnabelle Collin § NUMERICAL MODELING OF THE FLUID §.§ Validation of the hydrodynamic entrance length ℓ Let us define the domain Ω := ( 0, L ) × (0, h ) × ( 0, l ),∂Ω its boundary and ∂Ω_in := {0 }× (0,h) × (0,l) the boundary corresponding to the channel inlet. We would like to numerically establish the distance ℓ (along the x-axis) at which a uniform velocity profile from ∂Ω_in fully develops into a Poiseuille profile. To do so, we compare the solution of the two following problems: (i) a Poiseuille profile is already developed over the entire domain Ω, (ii) boundary layer equations are used to take into account the entrance effect near the inlet. Let u_P be the fluid velocity anywhere in Ω according to the Poiseuille equations1exρ∂ u_P/∂ t - μΔ u_P= G (1 + ξ_f cos(ω_f t + φ))t>0,Ω,u_P(t,·) = 0 t>0,∂Ω,u_P(0,·) = 0 Ω, and u_B be the fluid velocity anywhere in Ω according to the boundary layer equations1exρ(∂ u_B/∂ t + u_B ∇· u_B ) - μΔ u_B= G (1 + ξ_f cos(ω_f t + φ))t>0,Ω,u_B(t,·) = 0 t>0, ∂Ω\∂Ω_in,u_B(t,·) = max_t>0 u_Pt>0, ∂Ω_in,u_B(0,·) = 0 Ω.The third equation of Problem (<ref>) corresponds to a uniform velocity profile at the inlet of the channel, which is here equal to the maximum velocity of the fluid with a fully developed Poiseuille profile. This choice was made to be consistent with the steady state hypothesis. Problems (<ref>) and (<ref>) are numerically solved using  <cit.>. The domain is discretized on a 120 × 10 × 20 mesh, using P1 Lagrange elements. The timestep Δ t is taken to be 0.5 and the total time of the simulation is 15. 
For the parameters, we work with the worst case scenario, e.g. the highest Reynolds number possible, with Re = 3.75. We recall that L = 1.70d-2, h =4.00d-4, l = 3.80d-3, ρ = 1.00d3, μ = 7.20d-4, G = 201.32, ω_f = 4.88d1, and φ = 0, see Subsections <ref>, <ref> and <ref> for details. By convention, a flow can be considered fully developed when its velocity profile matches the asymptotic one with an error margin of less than a percent. Assuming u_B(t,·)0, for all t >0, the relative error in percent iscomputed as follows ε_1 (·):= max_t>0 ( |u_B(t,·) - u_P(t,·) | |u_B(t,·) | ) × 100, Ω.Figure <ref>-Middle shows the relative error ε_1 over the longitudinal cross-section (0, L) × (0, h) ×{l/2}.One may remark the hydrodynamic entrance length ℓ – displayed along the cross-section – seemingly over-predicts the distance at which ε_1 drops below one percent. The numerical entrance length at which ε_1 ≤ 1% is given by ℓ_num=5.12d-4. Numerical tests including spatial and temporal convergence wereperformed to validate the value of ℓ_num.From Subsection <ref>, we know the data was obtained L/2=8.50d-3 away from the inlet, which is a full order of magnitude above the upper bound ℓ of the hydrodynamic entrance length. Under those circumstances, the velocity profile of u_B closely matches the velocity profile of u_P and is considered as a fully developed Poiseuille flow.The value ℓ = 1.53d-3 given in Subsection <ref> and coming from <cit.> is superior to our numerical evaluation of the hydrodynamic entrance length ℓ_num. Furthermore, additional hydrodynamic entrance length comparisons were performed with other formulas available in the literature (see <cit.>, Table 1), and all of them were found to overpredict ℓ_num. This difference is explained by the relation between the velocity of the fluid far from the inlet and its velocity at the inlet boundary. As shown in <cit.>, their study was performed for a ratio whose maximum is between 1.5 and 2, while in our study we set this maximum at 1. Therefore, their entry velocity is much higher than the asymptotic velocity of the fluid, which is reflected in the reported value of ℓ.§.§ Validation of the fluid expression In Subsection <ref>, we have shown the device was long enough for a Poiseuille flow to become fully developed and therefore independent on the x-axis. While this reduces Problem (<ref>) into a 2D one, we can simplify it even further by showing the channel is wide enough that we can neglect a change in the z-axis as well. We recall that aspect ratio AR = h/l equals to 1.05d-1, which tells us the microfluidic device is roughly 10 times wider than it is tall.To show that 1D reduced model (in y-axis direction) is reasonable, we compare the fluid velocity u_P to the expression u_f, which must verify1exρ∂ u_f/∂ t - μ∂^2 u_f/∂ y^2= G (1 + ξ_f cos(ω_f t + φ))t>0, (0,h),u_f(t,·) = 0 t>0, (0,h),u_f(0,·) = 0(0,h). Using the same parameter values and mesh as in Subsection <ref>, we solve Problem (<ref>) with . The same error threshold of one percent is taken, with ε_2 (·):= max_t>0 ( |u_P(t,·) - u_f(t,·) | |u_P(t,·) | ) × 100,(0,h) × ( 0, l ). Figure <ref>-Bottom-Left gives ε_2 on the lateral cross-section {L/2}× (0,h) × (0,l) corresponding to the section from which the data were obtained.For z∈ [ 6.46d-4, 3.15d-3 ] we have, ε_2(·) ≤ 1 %. Figure <ref>-Bottom-Right shows the relative error ε_2 at the camera's focal plane, using the value of h_f^m found in Subsection <ref>. 
As expected from the observation made in Figure <ref>-Bottom-Left, the distance between the two inner ticks does not increase nor decrease in any significant way. The sub-1% relative error area thus spans most of the microfluidic device, with {L/2}× (0,h) × (6.46d-4,3.15d-3). Upon closer inspection of the data provided in <cit.>, we know the data was obtained at the middle of the device lengthwise, but also widthwise, as the lateral sides of the device are not visible in the videos. Given the width of the sub-1% area, u_P can therefore be represented by u_f.

§.§ Validation of the parabolic shape for the velocity profile

For a purely oscillating pressure gradient, a parabolic velocity profile can be considered for a 𝐖𝐨 up to 1, after which the profile rapidly evolves into a plug-like shape <cit.>. Counting the number of oscillations in Figure <ref>, it can be seen that ω_f depends on the cohort. The highest value is reached in the third cohort and is close to 50 rad.s^-1 (∼12 oscillations over 1.5 s give ∼ 12/1.5× 2π∼50), resulting in a 𝐖𝐨 around 1.6. However, it is shown in <cit.> that a parabolic profile can still emerge for moderately low values of 𝐖𝐨 (≈ 2) when |Re(u_f(t,y))| ≫ |Im(u_f(t,y))| for t>0, y∈ (0,h). Using the superposition principle, one can rewrite u_f as the sum of its constant and oscillating parts, which we respectively denote by u̅_f(t,y) and ũ_f(t,y). We now have to check that |Re(u̅_f(t,y))| + |Re(ũ_f(t,y))| ≫ |Im(u̅_f(t,y))| + |Im(ũ_f(t,y))| for t>0, y∈ (0,h). From Equation (<ref>), one has |Re(u̅_f(t,y))| = C̄(y) > 0 and |Im(u̅_f(t,y))| = 0, so that the condition rewrites |Re(ũ_f(t,y))| + C̄(y) ≫ |Im(ũ_f(t,y))| for t>0, y∈ (0,h), which is true for all t if C̄(y) is taken large enough. In the most pessimistic case, we have |Re(ũ_f(t,y))| = 0 and max_t>0|Im(ũ_f(t,y))| = C̃(y) > 0. Therefore, it is enough to show that C̄(y)/C̃(y) ≫ 1. Figure <ref> shows the value of C̄(y)/C̃(y) for y∈ (0,h). It can be deduced that Condition (<ref>) is always verified. This means that for u_f = u̅_f + ũ_f, the constant component is large enough to ensure that the magnitude of oscillations does not disrupt the parabolic shape of the velocity profile. This therefore validates our hypothesis.
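As a quick numerical sanity check (not part of the original text), the Womersley numbers of the three cohorts can be recomputed directly; the viscosity is the one quoted in the main text and the angular velocities are the estimated values reported in the Results section.

import numpy as np

rho, mu, h = 1.00e3, 7.31e-4, 4.00e-4          # density [kg/m^3], viscosity [Pa.s], channel height [m]
for omega in (12.2, 24.4, 48.8):               # estimated pump angular velocities [rad/s]
    Wo = (h / 2) * np.sqrt(omega * rho / mu)   # Womersley number
    print(f"omega = {omega:5.1f} rad/s  ->  Wo = {Wo:.2f}")
# Output is roughly 0.8, 1.2 and 1.6: only the fastest cohort exceeds 1, which is why
# the argument above based on the non-zero mean pressure gradient is needed.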
Induced subdivisions in K_s,s-free graphs with polynomial average degree

In this paper we prove that for every s≥ 2 and every graph H the following holds. Let G be a graph with average degree Ω_H(s^C|H|^2), for some absolute constant C>0, then G either contains a K_s,s or an induced subdivision of H. This is essentially tight and confirms a conjecture of Bonamy, Bousquet, Pilipczuk, Rzążewski, Thomassé, and Walczak in <cit.>. A slightly weaker form of this has been independently proved by Bourneuf, Bucić, Cook and Davies <cit.>. We actually prove a much more general result which implies the above (with worse dependence on |H|). We show that for every k≥ 2 there is C_k>0 such that any graph G with average degree s^C_k either contains a K_s,s or an induced subgraph G'⊆ G without C_4's and with average degree at least k. Finally, using similar methods we can prove the following. For every k,t≥ 2, every graph G with average degree at least C_tk^Ω(t) must contain either a K_k, an induced K_t,t or an induced subdivision of K_k. This is again essentially tight up to the implied constants and answers in a strong form a question of Davies.

§ INTRODUCTION

The study of the unavoidable substructures in graphs of high chromatic number is a well-known area of graph theory. A graph can have high chromatic number due to the presence of a large clique, so it is natural to ask for which families ℋ of graphs there exists a function f:ℕ→ℕ with the property that any graph with clique number at most k and chromatic number at least f(k) must contain an induced subgraph isomorphic to a graph H∈ℋ. More generally, let ℋ be a hereditary class of graphs; we say ℋ is χ-bounded if there is a function f: ℕ→ℕ such that every G∈ℋ satisfies χ(G)≤ f(ω(G)), where ω(G) is the order of the largest clique in G. Finally, we say ℋ is polynomially χ-bounded if f can be taken to be a polynomial. As usual, given a family of graphs ℱ, we say G is ℱ-free if there is no induced subgraph of G isomorphic to a graph in ℱ. A classical result of Erdős states that all finite families ℱ for which the class of ℱ-free graphs is χ-bounded must contain a tree, and it is a long-standing open conjecture due to Gyárfás <cit.> and, independently, Sumner <cit.> that the converse holds. Namely, that for every tree T, the family of T-free graphs is χ-bounded. When the family of excluded graphs is not finite the situation seems to be more delicate. A remarkable result of Scott and Seymour <cit.> (building on previous work with Chudnovsky and Spirkl <cit.>) confirms in a very strong form several conjectures of Gyárfás <cit.>. It says that for every m,n ∈ℕ the family of graphs without an induced cycle of length congruent to m (modulo n) is χ-bounded. One might then be tempted to conjecture that the class of graphs avoiding all induced subdivisions of a graph H is χ-bounded, but this is false, as shown by Pawlik, Kozik, Krawczyk, Lasoń, Micek, Trotter, and Walczak <cit.>, who constructed for every k≥ 2 a family of line segments in the plane whose intersection graph is triangle-free and has chromatic number greater than k. It is not hard to see that any proper (i.e.
all subdivided paths have length at least 2) induced subdivision of a non-planar graph cannot be represented as an intersection graph of line segments in the plane.A lovely result ofBriański, Davies, and Walczak <cit.> (extending ideas of Carbonero, Hompe, Moore, and Spirkl <cit.>) shows there are χ-bounded families of graphs for which the growth rate of f is arbitrarily high which sparks the broader question of when a χ-bounded family is polynomially χ-bounded. This question has attracted a lot of attention (see e.g. <cit.>) but a full classification seems to be out of reach.Somewhat surprisingly, we do not even know whether the family of graphs avoiding an induced fixed path of length t≥ 5 is polynomially χ-bounded (as noted by Trotignon and Pham <cit.>).A similar notion to χ-boundedness, but with average degree instead of chromatic number was recently introduced. Intuitively, while a class is χ-bounded if cliques are the only thing that can force large chromatic number, it is “degree-bounded” if, instead, balanced bicliques are the only thing that can force large average degree.Formally, we say that a hereditary family ℱ is degree-bounded if there exists a function g: ℕ→ℕ such that for every G∈ℱ, we have d(G) ≤ g(τ(G)), where we write d(G) for the average degree of G and τ(G) for the biclique number of G, which is the largest integer s so that G has a (not necessarily induced) copy of K_s,s. Any such function g is called a degree-bounding function for the class.An important result in this area due to Kühn and Osthus <cit.> states that for every graph H, the class of graphs which do not contain an induced subdivision of H is degree-bounded. For every graph H and integer s, there is an integer p(s,H) such that every graph G without a K_s,s and with average degree at least p(s,H) contains an induced subdivision of H. Their bounds for p(s,H) are roughly triply exponential in s, for fixed H. A natural conjecture raised by Bonamy et al. <cit.> asserts that actually p(s,H) could be taken to be a polynomial in s. Some partial results to this conjecture are known. Indeed, Scott, Seymour, and Spirkl <cit.> established a quantitative strengthening of a result of Kierstead and Penrice <cit.>, by showing that the class of T-induced-free graphs is polynomially degree bounded. This in turn generalized another theorem of Bonamy, Bousquet, Pilipczuk, Rzążewski, Thomassé and Walczak <cit.>, who proved the same result when T is a path.Our first theorem confirms this conjecture, in a very strong form. For each integer h, and all large s the following holds. Let G be a graph with d(G)≥ s^500h^2. Then G either contains a (not necessarily induced) K_s,s or an induced proper subdivision of K_h. In fact, we ensure this subdivision is balanced (meaning all each edge of K_h is replaced by a path of some common length, ℓ).By taking a random graph G ∼ G(N,p) on N:=s^h^2/100 vertices (for[Occasionally we will write `x y' to informally say that x is sufficiently big compared to y.] s h) with p=1-h^2log s/s≥ 1/2, we have with positive probability that d(G)≥ s^h^2/100/4 and G does not contain a K_s, s nor an independent set of size h^2/4 (which implies there is no proper induced subdivision of a K_h). This shows the theorem is tight as a function of s. It is also clear that any proper induced subdivision of a K_h contains an induced subdivision of every graph H with |H|≤ h. 
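For completeness, here is a back-of-the-envelope first-moment computation behind the tightness example above (a sketch only; constants are not optimised). With N := s^h^2/100 and p = 1-h^2log s/s, the expected number of copies of K_s,s in G(N,p) is at most N^2s p^s^2 ≤ s^(sh^2/50) e^(-sh^2 log s) ≤ s^(sh^2/50) s^(-sh^2) = o(1), and the expected number of independent sets of size h^2/4 is at most N^(h^2/4) (1-p)^(h^4/40) ≤ s^(h^4/400) (h^2 log s/s)^(h^4/40) ≤ s^(h^4/400 - h^4/80) = o(1), once s is sufficiently large compared to h. Hence with positive probability G(N,p) contains neither, while its average degree is at least p(N-1) ≥ s^(h^2/100)/4.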
In other words, Theorem <ref> says that the class of graphs without an induced subdivision of a fixed graph H is degree-bounded with a polynomial degree-bounding function. It is therefore natural to ask whether the same phenomenon holds for every such class i.e. whether every degree-bounded class of graphs has a polynomial degree-bounding function. In a very recent paper <cit.> the authors together with Du, McCarty and Scott proved that every degree-bounded family of graphs is essentially exponentially degree-bounded very much in contrast to the χ-boundedness case.More precisely, it was shown that for every hereditary degree-bounded class of graphs ℱ, there exists a constant C_ℱ so that (C_ℱ)^s^3 is a degree-bounding function for ℱ. Our strongest result establishes a polynomial bound for the degree-bounding function of every degree-bounded hereditary class of graphs which follows straightforwardly from the following stronger statement. Fix any k≥ 2 and suppose s is sufficiently large. Any graph G with average degree ≥ s^5000k^4 either contains a K_s,s (not necessarily induced) or an induced subgraph G'⊆ G with no C_4 and d(G)≥ k.The exponent `5000k^4' is of roughly the right shape (being polynomial in k). Indeed, by considering G ∼ G(s^k^2/100,1-1/s^9/10) for large enough s, one gets that the exponent must be at least k^2/100. We turn now to a recent question asked by James Davies <cit.> on χ-boundedness. For t ≥ 1, letbe the family of graphs of without an induced subdivision of K_2,t (not necessarily proper). Is this family polynomially χ-bounded? An affirmative answer follows as a simple corollary of Theorem <ref>. Indeed, let ωω(G). We shall show that for every G∈, d(G)=(tω)^O(t^2). Observe that if G contains a K_ω+1, (tω(G))^t then by Ramsey' Theorem, we must have an independent edge joined to an independent set of size t which forms an induced K_2,t. If not, applying Theorem <ref> with H=K_2,t and s=(tω)^t, we must have an induced subdivision of K_2,t, as required. We can actually show a much stronger result which answers Problem <ref> with essentially tight bounds. As before, we say a subdivision is proper if every edge is subdivided at least once and further we say a subdivision is balanced if all paths have the same length. For every s,t ∈ℕ with s≤ t, the following holds for all large k. Let G be a graph with d(G)≥ k^4000t, then either: * G contains a k-clique;* G contains an induced proper balanced subdivision of K_h, with h≥ d(G)^1/(10^9s);* G contains an induced K_s,t.We did not try to optimize the constants `4000' and `1/10^9', here (nor the constants `500' and `5000' from our last two theorems). We note that this result is essentially sharp (up to the constant in the exponent) when k t. Indeed, if G is a graph without k-cliques or independent sets of size t, then it will also lack an induced proper subdivision of K_t; and a simple probabilistic construction shows that there exist such G with average degree ≥ k^Ω(t) (cf. <cit.>). Meanwhile, if G does not contain K_s,s as a subgraph, then it is K_2s-free and K_s,s-free, and if it lacks independent sets of size h it will lack a proper subdivision of K_h; another probabilistic construction shows that there are such graphs with average degree ≥ h^Ω(s) (cf. <cit.>).Also, the above result becomes false if we remove either of the first two bullets (by either taking G to be an arbitrarily large clique, or a graph with no C_4 of arbitrarily large degree). 
Moreover, if we removed the last bullet, then we know that G could be triangle-free with arbitrarily high average degree and without an induced proper subdivision of K_5 (due to <cit.>).A weaker version of Theorem <ref> was proved independently by Bourneuf, Bucić, Cook and Davies <cit.>. The same authors also show a version of Theorem <ref> which is quantitatively far from optimal.In independent contemporaneous work <cit.>, Bourneouf, Bucić, Cook and Davies have obtained some results that overlap with those proved here. Firstly, they proved a version of Theorem <ref> (cf. <cit.>), but the dependence on H is not established. Secondly, they showed a reduction (cf. <cit.>) which made some partial progress towards our main theorem (Theorem <ref>). Lastly, Lemma <ref>, a key lemma of ours, is a quantitatively explicit version of <cit.>. While both this paper and <cit.> build upon previous techniques of <cit.>, the intermediate steps are quite different. § PRELIMINARIES §.§ NotationThroughout this paper, for positive integer n we write [n]:={1,2,…,n}. Given a multigraph G, we denote d(G) to be the average degree of G (i.e. d(G)2|E(G)|/|G|). Similarly for s-uniform multihypergraphswe write d():= s|E(G)|/|V()|.For a family of graphs ℱ, we say G is ℱ-free if for all F∈ℱ, G does not contain any copy of F as an induced subgraph.As usual, we denote ω(G) to be the size of the largest clique in G and e(G):= |E(G)|. Moreover, for a subset B⊂ V(G), we letd_B(x)=|N_B(x)| where N_B(x)={ y∈ B: y∈ N(x)}. Finally, we denote G[A,B] to be the bipartite graph induced between A and B.§.§ Tools We shall need the classical Ramsey's Theorem. R(s,k)≤s+ks=O_s(k^s). Next we recall a standard consequence of dependent random choice.Let G be graph with n vertices, and consider some integer s≥ 1. If #(x∈ V(G):d(x)≥ n^1-1/100s) ≥√(n), then G has a set S of n^1/3-1 vertices so that any s-subset S'⊂ S has a common neighborhood of size at least n^0.9. Let S_* := {x∈ V(G): d(x)≥ n^1-1/100s}. Now pick vertices v_1,…,v_10s∼ V(G) uniformly at random (independently).Take S_0 := ⋂_i=1^10s N(v_i). For x∈ S_*, we have that (x∈ S_0)= (d(x)/n)^10s≥ n^-1/10, thus [|S_0|] ≥ |S_*|n^-1/10≥ n^1/3. Meanwhile, we have that[#(x_1,…,x_s∈ S_0^s: |N(x_1)∩…∩ N(x_s)|≤ n^0.9)] ≤ n^s(n^0.9/n)^10s = 1. Deleting a vertex from such bad s-tuple gives the result.Putting these together yields the following supersaturation result.Let t≥ s ≥ 2. For all sufficiently large k with respect to s,t the following holds. Let G be a graph on n vertices which is K_s,t-free and contains no K_k. Then, G has at least Ω_s(n^s) independent sets of size s, provided n≥ k^100t.If e(G)/n2≤ 1/s^2, then sampling a s-subset S∈V(G)s uniformly at random, we will have that[e(G[S])] ≤s2/s^2<1/2. Thus, with probability ≥ 1/2, S is an independent set. This implies that there are at least 1/2ns≥n^s/3(s!) independent sets (assuming n is sufficiently large).Now suppose this was not the case. Then, for sufficiently large n, we must have that e(G)≥ 2n^2-1/100s. This easily implies that #(x∈ V(G): d(x)≥ n^1-1/100s)≥√(n). Applying Lemma <ref>, we can find S⊂ V(G) with |S|≥ n^1/4, so that every s-subset S'⊂ S has a common neighborhood of size at least √(n). Meanwhile, by Ramsey's Theorem, and the assumption n≥ k^100t, we have that every set of n^1/4 vertices either has a k-clique or independent set of size t. Thus, we can find an independent set I⊂ S of size s≤ t (since G is assumed to be K_k-free). Then, |N(I)|≥√(n), so we can find an independent set J⊂ N(I) of size t. 
But then G[I∪ J] ≅ K_s,t, contradicting the assumption that G was K_s,t-free.

We can also deduce the following handy result. Let G be an n-vertex graph with d(G)≥ n^1-1/(100s) and n≥ k^100t. Then G must contain either a K_k or an induced K_s,t. The above follows from the second case in our proof of Corollary <ref>; we omit the details.

A classical result of Bollobás and Thomason <cit.>, shown independently by Komlós and Szemerédi <cit.>, gives the correct bounds for the extremal numbers of a subdivision of a complete graph. Let G be a graph with average degree at least 100k^2. Then G contains a subdivision of a complete graph on k vertices.

We shall use a recent result of Gil Fernández, Hyde, Liu, Pikhurko, and Wu <cit.>, which is a quantitative strengthening of a breakthrough result of Liu and Montgomery <cit.>. Let G be a graph with average degree d. Then, G contains a balanced subdivision of K_h, where h = Ω(√(d)).

§ OUTLINE

§.§ Rough strategy

Suppose we are given some graph G with average degree at least d. To prove Theorems <ref> and <ref>, we may assume that G has certain structural properties (namely, it belongs to a certain hereditary family), and wish to deduce that G must have a K_s,s-subgraph for an appropriately large value of s. The story for Theorem <ref> is similar, only now we want to find one of two large things. Each of our arguments breaks into two parts. In the first phase, we make no additional assumptions about G, and iteratively “clean” it. We initialize with G_0 := G, and pass to induced subgraphs G_0⊃ G_1 ⊃…⊃ G_τ where d(G_t+1)≥ d(G_t)^Ω(1) for t<τ, and τ is some stopping time bounded by 3. Next, we have a “clean” graph G^*:= G_τ, which falls into one of several structured classes. Here, we finally use some assumptions about G. Namely, we can now assume that G^* does not contain a K_s,s-subgraph (or a K_k, and potentially that G^* does not contain some specific induced subgraph).

§ AUXILIARY CLEANING LEMMAS

§.§ Key dichotomy

We require a lemma from <cit.>. We recreate their proof for completeness. We say a bipartite graph Γ = (A,B,E) is L-almost-biregular if: d_Γ(a)≤ L|E|/|A| for all a∈ A, and d_Γ(b)≤ L|E|/|B| for all b∈ B. Let Γ = (A,B,E) be L-almost-biregular. Then Γ has an induced subgraph Γ' with d(Γ')≥ d(Γ)/4 and Δ(Γ')≤ 24Ld(Γ').

If E = ∅, there is nothing to prove (we may take Γ' = Γ). Supposing otherwise, we now have that L|E|/|A|,L|E|/|B|≥ 1. We may assume |A|≤ |B|. Let p = |A|/|B|. We take a random subset B'⊂ B by adding each b∈ B independently with probability p. We take A'⊂ A to be the set of a∈ A where |N_Γ(a)∩ B'|≤ 1+2p(d_Γ(a)-1). We shall take Γ' = Γ[A',B'], and show this works with positive probability. By construction, we have Δ(Γ')≤ 1+2L|E|/|B|≤ 3L|E|/|B|. Indeed, each vertex a∈ A' has degree at most 1+2pL|E|/|A|≤ 3L|E|/|B|, and each vertex b∈ B' has degree at most d_Γ(b)≤ L|E|/|B|. Consider any e= ab∈ E, and let U = (N_Γ(a)∖{b}) ∩ B'. Applying Markov's inequality, we see that P(a∈ A'|b∈ B') = P(|U| ≤ 2𝔼[|U|])> 1/2. It follows that 𝔼[e(Γ')] = ∑_e=ab∈ E P(b∈ B')P(a∈ A'|b∈ B') > p|E|/2. Also, it is obvious that 𝔼[|A'|+|B'|] ≤ |A|+𝔼[|B'|]=2|A|. Thus, we have 𝔼[4e(Γ') - (|E|/|B|)(|A'|+|B'|)]> 4(p|E|/2) - (|E|/|B|)· 2|A| =0.
Consequently, we can choose A',B' such that the LHS is positive, and so Γ' is non-empty withd(Γ')≥ 2(|E|/4|B|) ≥ d(Γ)/4, and, as Δ(Γ')≤ 3L|E|/|B|,12Le(Γ')≥Δ(Γ')(|A'|+|B'|)which implies that Δ(Γ')≤ 24L d(Γ')as desired. Lemma <ref> allows us to `bootstrap' almost-regular graphs as follows.Let L≥ 2 be a constant and let G be a n-vertex graph with average degree d(G)≥ d. Furthermore, suppose that Δ(G)≤ Ld. Then G has an induced subgraph H⊂ G where Δ(H)= O(d(H)·log^2 (L)) and d(H)=Ω(d/log^2(L)). We may and will assume L is large enough so that 2log(L) ≥log(L)+loglog(L)+7 (recall that all logarithms are in base 2). This is simply a straightforward modification of an argument that appeared in the proof of <cit.>. We need a slightly broader range of parameters for the current paper, so we have presented them here. This result will be black-boxed to prove our key “dichotomy result” (Lemma <ref>), which is a minor generalization of <cit.>. Slightly sharper versions of Lemma <ref> may be possible by using more involved choices of parameters, in which case we hope the more streamlined presentation will be helpful in future applications.First, we may and will assume δ(G)≥ d/2 and that G is d-degenerate, otherwise we may pass to a subgraph with higher average degree. Write V:= V(G).We now split the vertices in V into V_i={x∈ V: 2^i-2d≤ d(x)<2^i-1d }, for i∈ [log(L)+1].By pigeonhole principle, there is some i∈ [log(L)+1], for which∑_x∈ V_i d(x)≥dn/log(L)+1≥dn/2log(L).We fix one such i. For technical reasons, we shall take a random subset of V_i uniformly at random. Let F⊂ V_i be such a random set.Moreover, let F'⊂ F be the set of vertices x∈ F which send at least d(x)/2 edges to V∖ F. It is easy to see that ℙ[x ∈ F']≥ℙ[x∈ F]/2= 1/4. Hence, we may fix an outcome where |F'| ≥𝔼[|F'|]≥ |V_i|/4 and therefore e(F', V ∖ F')≥∑_x∈ V_i d(x)/2· 4≥nd/16log(L). Now, because G is d-degenerate, we can find some F”⊂ F' such that Δ(G[F”]) ≤ 4d and |F”|≥ |F'|/2 (indeed, G[F”] must have at most d|F'| edges, thus |{x∈ F':|N(x)∩ F'|>4d}|≤ |F'|/2). We note thate(F”,V∖ F”) ≥e(F',V∖ F')/6≥nd/100log(L)since 6|N(x)∩ (V∖ F)|≥ d(x)+2^i-1≥ d(x)+d(x') for any x,x'∈ F'. Finally, we can move on. We now shall consider a partition of V∖ F” according to the degree of these vertices into F”. Specifically, let U_j={x∈ V ∖ F”: d2^j-1/100log(L)≤ d_F”(x)<d2^j/100log(L)} for j=0,…, ℓ:= log(L)+loglog(L)+7. Note this might not be a partition of all the vertices in V∖ F”, however all but at most nd/200log(L) edges belong to G[F”,⋃_j=0^ℓU_j].Again by pigeonhole principle there some j∈{0,1,…,ℓ}, for which e(G[F”,U_j])≥e(F”,⋃_j=0^ℓ U_j)/2log(L)≥nd/200log^2(L).By d-degeneracy, there are at most |U_j|/2 vertices in U_j which send more than 4d edges inside U_j. Jettison those vertices and denote the leftover vertices by U_j'. This leaves us with e(G[F”,U_j']) ≥ e(G[F',U_j])/3.Now, observe that G[F”,U'_j] is a M-almost-regular bipartite graph where M=1200log(L) since every vertex in U'_j sends roughly (up to a factor of two) the same number of edges to F” and the maximum degree of vertices in F” is at most 2^i+1d and e(G[F”,U'_j])≥2^id|F”|/200 · 3log(L). We may now apply Lemma <ref> to G[F',U'_j] with L:=M, to find an induced subgraph H⊂ G[F”,U'_j] with d(H)≥ d(G[F',U'_j])/4 and Δ(H)=O(log^2(L)d(H)) and so H is the required induced subgraph. There might be some vertices in H which send 4d edges within V(H)∩ U'_j or within V(H)∩ F' but this is fine as d(H)=Ω(d/log(L)^2).The proof of our main result shall be split into two cases. 
Either we can find an induced subgraph of G which is almost regular and which still has many edges or we can pass to an induced bipartite subgraph H consisting of two parts A and B where |A| |B| also preserving many edges. This dichotomy is shown in the next lemma. Let L, d≥ 16 be positive integers. Consider an n-vertex graph G with average degree d(G)≥ d, which is d-degenerate. Then, one of the following holds.* There is a partition of V(G)=A∪ B where |A|≥L|B|/2 and e(G[A,B])≥nd/8; * There is an induced subgraph H⊂ G where Δ(H)= O(d(G[H])·log^2(L)) and d(G[H])=Ω(d/log^2(L)). Let A_heavy={x∈ V(G)| d(x)≥ Ld} and[Perhaps a more apt term is `not-too-abnormally-heavy', rather than `light', but we opt for the latter term for the sake of brevity.] A_light=V(G)∖ A_heavy. Note that by assumption on the degeneracy, we must have that |A_heavy|≤ 2n/L. Now first suppose that e(G[A_light,A_heavy]) ≥ nd/8. Then, taking A:=A_light, B:= A_heavy forms the required partition.If not, then since G is d-degenerate and |A_heavy|≤ 2n/L, we have that e(G[A_heavy])≤ 2nd/L≤ nd/8.Hence,e(G[A_light])= e(G)-e(G[A_light,A_heavy])-e(G[A_heavy])≥ nd(G)/2-nd/8-nd/8≥ nd/4.We are now done by applying Lemma <ref> to G[A_light] with d:= d/2,L:= 2L. §.§ The subdichotomy Now, in the case where we are almost-regular we show how to further clean our graph.Fix a graph F and ,δ>0 with <δ/2. Suppose that d is sufficiently large with respect to ,δ.Let G be an n-vertex graph with d(G) ≥ d and Δ(G)≤ d^1+. Furthermore, suppose that G has less than nd^|V(F)|-1-δ copies of F (not necessarily induced). Then G contains an induced subgraph G' which does not contain F as a subgraph, such that d(G')≥ d^δ/(10|V(F)|). Letdenote the set of subgraphs F'⊂ G with F'≅ F. Also for convenience write v_0:= |V(F)|.Let 𝒮 be the set of pairs (e,F') such that F'∈ℱ and e∈ E(G) is an edge with exactly one vertex in V(F'). Clearly, we have |𝒮|≤ v_0d^1+|𝒞|<v_0nd^v_0-δ/2. Now set δ' := δ/5v_0, and consider a random set V'⊂ V(G) where each vertex is included with probability p := d^δ'-1 (independently). We let ' be the set of F∈ where V(F)⊂ V'. Finally we considerX:= [e(G[V'])],Y:=[|'|], Z:= [#{(e,F')∈𝒮:e∈ E(G[V']) andF'∈'}] .Let G' be the induced subgraph on vertex set V' ∖⋃_F∈'V(F). It is clear that e(G')≥ X-v_02Y-Z, and that G' will not have F as a subgraph.Now, we simply observe that[X] ≥ n(d/2)p^2= 1/2d^δ'(np),[Y]=||p^v_0≤ d^v_0δ'-δ(np)and [Z]= |𝒮|p^v_0+1≤ v_0d^(v_0+1)δ'-δ/2(np) . Thus (noting that (v_0+1)δ'<δ/3, and assuming d is sufficiently large), Y,Z become lower order terms and we get[X-v_02Y-Z] ≥ d^δ'/2(np)= d^δ'/2[|V'|], whence there is some outcome where e(G')≥ d^δ'/2|V(G')|, giving us our desired subgraph. § USING THE SUBDICHOTOMY In this short section, we collect some useful corollaries of our cleaning lemmas. These will allow us to pass to a favorable outcome if the edges are noticeably concentrated in some local area of G. Fix ε∈ (0,1/10) and assume d is sufficiently large with respect to ε. Let G be an n-vertex graph with d(G)≥ d and Δ(G)≤ d^1+ε. Then, we can either find an induced subgraph G'⊂ G on n' vertices where n'≥ d^1-3/2 and with d(G')≥ (n')^1-5, or an induced subgraph G”⊂ G with d(G”)≥ d^ε/20 which does not have any C_4. Let _4 be the set of 4-cycles C⊂ G. If |_4|≤ nd^3-2, then we can find G” by applying Lemma <ref> (with F:= C_4).Otherwise, there must be some vertex x∈ V(G) which belongs to at least d^3-2 cycles of length 4. 
Next, since |N(x)| ≤ d^1+, there must be some y∈ N(x) such that {x,y} is an edge which belongs to at least d^2-3 cycles of length 4. But the number of such cycles exactly counts the number of choices of x'∈ N(x)∖{y}, y'∈ N(y)∖{x} where {x',y'}∈ E(G). Thus taking G'= G[N(x)∪ N(y)] gives a graph with ≥ d^2-3/2 edges and m:=|V(G')|≤ 2d^1+ vertices. This implies that d(G')≥ d^1-4/4≥ m^1-5 (using that <1/10 and d is sufficiently large). Fix ∈ (0,1/10), and suppose n is sufficiently large with respect to ε. Let G be an n-vertex graph, without an independent set of size √(n). Then, G either has an induced subgraph G' on n'≥ (n')^1/4 vertices where d(G')≥ (n')^1-5, or an induced subgraph G”⊂ G with d(G”)≥ n^/100 which does not contain any C_4. By Túran's Theorem, we may assume d_0:=d(G)≥√(n)/2, otherwise we have an independent set of size √(n). Applying the dichotomy from Lemma <ref> with L = 2^d_0^/12≥ 2^n^/25 and assuming n is sufficiently large, we have that L/2>n, whence the first outcome cannot happen, which means we can pass to an induced subgraph G^* of G with d(G^*)≥Ω(d_0/d_0^ε/12)≥Ω(n^1/2-/6) and Δ(G^*)≤ O(d(G^*)n^/6).Again assuming n is sufficiently large, we have that Δ(G^*) ≤ d(G^*)^1+, so we can now apply Lemma <ref> to get the result (taking d:= d(G^*)≥ n^1/3).I don't know where this should go now.Obviously this implies that χ(G)=C(ℓ, t)k^O(t).In the concluding remarks we shall explain how it follows easily from a result of Chudnovski, Seymour, Scott and Spirkl an extension of Theorem <ref>, namely that families of graphs avoiding k vertex disjoint and pairwise non-adjacent t-theta graphs are polynomially χ-bounded.§ FINDING AND USING SUBDIVISIONS§.§ A reductionGiven a multihypergraph = (V,E), we define the 1-subdivision ofto be the bipartite graph Γ with bipartition A,B, having bijections ϕ_A:A→ E,ϕ_B:B→ V, so thatN_Γ(a) = {b∈ B: ϕ_B(b)∈ϕ_A(a)} for each a∈ A.By definition, it is obvious that the following property holds. Let G be the 1-subdivision of a multihypergraph ℋ. Suppose ℋ' is a subhypergraph of ℋ (obtained by removing some edges and vertices), then the 1-subdivision of ℋ' is an induced subgraph of G.This is quite helpful, since it allows us to reduce problems about finding induced subgraphs in G into problems about finding (not necessarily induced) subgraphs inside the auxiliary graph H (which are understood much better).We show now how to find a 1-subdivision of a simple graph with many edges from a 1-subdivision of an s-uniform hypergraph of high enough average degree, provided the hyperedges do not clump too much.Let s,t≥ 2 and H be a 1-subdivision of a s-uniform multihypergraph , with d() ≥ d. Suppose H is K_s,t-free, then H contains an induced subgraph H' which is a 1-subdivision of a simple graph G with average degree Ω_s,t(d^1/(s-1)). We will first remove some hyperedges from , to obtain a “cleaner” hypergraph ^*. This corresponds to deleting vertices from H.First, since H is K_s,t-free, no hyperedge e∈ E() can have multiplicity greater than t-1. Thus, by removing some hyperedges, we can obtain a simple hypergraph ^*⊂ with d(^*)≥ d/t. Next, by randomly partitioning the vertices of ^* into s parts, we can find parts V_1,…,V_s, so that the s-partite subhypergraph ^** := ^*[V_1,…,V_s] has d(^**) ≥ d(^*)s!/s^s= Ω_s,t(d).Finally, pass to a choice of V' where ^***:= ^**[V'] has maximal average degree, and write V_i':= V'∩ V_i for i∈ [s]. We have that d_^***(v)≥ d':= d(^**)/s= Ω_s,t(d) for each v∈ V'.We may and will assume |V_s'|≥ |V_i'|, for all i∈ [s-1]. 
For each v∈ V_s' and i∈ [s-1], writeN_i(v) := V_i' ∩⋃_e∈ E(^***):v∈ ee. Clearly, we have that ∏_i=1^s-1|N_i(v)|≥ d_^***(v)≥ d'. Thus for each v∈ V_s', there is some choice of i=i_v so that |N_i(v)|≥ d'^1/(s-1). By pigeonhole, we can find some choice of i∈ [s-1] so that ∑_v∈ V_s'|N_i(v)| ≥d'^1/(s-1)/s-1|V_s'| = Ω_s,t(d^1/(s-1))|V_s'∪ V_i'| (recall |V_s'|≥ |V_i'|). Consider the graph G with vertex set V(G) V'_s∪ V'_i where (x_s,x_i)∈ E(G), if x_s∈ V'_s and x_i∈ V'_i and {x_s,x_i}⊂ e, for some e∈ℋ^***.By the above d(G)=Ω_s,t(d^1/(s-1)). Finally, note that the 1-subdivison of G is an induced subgraph of H. Combining Proposition <ref> with Theorem <ref> and Observation <ref>, we immediately get the following (since the 1-subdivision of a balanced subdivision of K_h is still a balanced subdivision of K_h).Fix t≥ s≥ 2. Let G be a K_s,t-free graph and suppose it contains an induced 1-subdivision of an s-uniform multihypergraph . Then, G contains an induced proper balanced subdivision of K_h, where h = Ω_s,t(d()^1/2(s-1)). Applying Proposition <ref>, G contains an induced subgraph G', which is the 1-subdivision of some simple graph H with d(H) = Ω_s,t(d()^1/(s-1)). Then by Theorem <ref>, H contains a balanced subdivision of K_h for some h= Ω(d(H)^1/2) as a (not necessarily induced) subgraph. Finally, Observation <ref> will tell us that G' contains the 1-subdivision of this balanced subdivision of K_h, as an induced subgraph. This gives us our desired induced subgraph H'.§.§ Finding subdivisions of multihypergraphsLet k,s,d≥ 1 be integers so that the following estimates hold:d≥ (2(10+s))^s, and d/20>s+ks . Let G be a d-degenerate graph with a bipartition V(G)=A ∪ B with e(G[A,B])≥d|A|/10 and |A|> 40d^s+2|B|. If ω(G)≤ k, then G contain an induced 1-subdivision of a s-uniform hypergraphwith d()≥ d.First, since G is d-degenerate, one can prove the following claim.There exists some A'⊂ A with |A'|≥ |A|/40d, so that G[A'] = ∅ and d_B(x) ∈ [d/20,10d] for each x∈ A'.We shall delay this proof for the time being. Instead, let us fix such a choice of A' and focus on G[A'∪ B]. We have that |A'|> d^s+1|B|. By d-degeneracy we may order the vertices in B, say b_1,… b_|B|, such that for all i∈ [|B|], b_i sends at most d edges to b_1,b_2,…, b_i-1. We now take a random subset of B where each vertex is chosen with probability p=1/2(10+s)d independently. Let B_p be the random subset.Let x∈ A' and X(x) be a random variable which is 1 if x has exactly s non-neighbours b_1,b_2,…,b_s ∈ B_p such that each of them send no edges to its left in the ordering, and 0 otherwise. We will now compute 𝔼[X(x)]. Let N_B(x)={b_i_1,…, b_i_ℓ}, for some ℓ∈ [d/20,10d], with i_1<i_2<… <i_ℓ.Note that since d/20> s+kk, Ramsey's Theorem guarantees an independent set of size s inside N_B(x). Thus,𝔼[X(x)]=ℙ[X(x)=1]≥ p^s (1-p)^sd+ℓ> p^s(1-(10+s)dp)≥ d^-(s+1)/2.Finally,𝔼 [∑_x∈ A'X(x)-d|B_p| ]≥|A'|/2d^s+1-1/2(10+s)|B|>0Fix a choice for which the above is positive. We have thus constructed a subset A”⊂ A' where every vertex in A” sends exactly a pair of edges to B_p, G[B_p]=∅ and |A”|≥ d|B_p|. Since each a∈ A” has exactly s neighbors in B_p, we see that G' :=G[A”∪ B_p] is the 1-subdivision of an s-uniform multihypergraph , moreover we have that d() = s|A”|/|B_p|≥ d as desired. First, write A_- := {x∈ A: d_B(x) <d/20}. Noting that e(G[A∖ A_-,B])≥d|A|/20, we must have |A∖ A_-|≥ |A|/10, otherwise we will have d(G[A∖ A_-,B])>d, contradicting that G is d-degenerate.Next, write A_+:= {x∈ A: d_B(x)>10d}. 
We must have |A_+|≤ |B|, otherwise d(G[A_+,B])>d (again contradicting d-degeneracy).So now write A_0 := {x∈ A: d_B(x)∈ [d/20,10d]}. We have that |A_0|≥ |A|/10-|B|≥ |A|/20. Finally, since G is d-degenerate, we can partition A_0 into d+1≤ 2d independent sets. Take A' to be the largest of these sets which will have size at least |A|/40 d. On the other hand, if we are almost-regular, then we have the following result.Fix t≥ s≥ 2, and set _0 = 1/(50000s^2). The following holds for all sufficiently large k with respect to s and t.Let G_0 be a graph with ω(G_0)≤ k and d(G_0)= d_0≥ k^1000t. Suppose furthermore that Δ(G_0) ≤ d_0^1+ and G_0 is K_s,t-free. Then G_0 contains an induced 1-subdivision of an s-uniform multihypergraphwith d()≥Ω_s,t(d_0^1/(40000s)). The exponents are sharp (up to the constants `1000' and `1/40000') for a few reasons. By considering G∼ G(2k^t/1000,1-1/k^1/2) for k≫ t, we can get graphs with average degree k^t/1000 that lack independent sets of size t (so it is K_s,t-free and has no 1-subdivision on >2t vertices), and no cliques of size k. Meanwhile, if we consider G∼ G(n,d/n), then some calculations involving Chernoff bounds (cf. https://arxiv.org/pdf/2207.02170.pdf, Theorem 1.1)tell us there exist G with average degree >d/2, no 4-cycles and no 1-subdivisions of s-uniform multigraphswith d() > d^1/(s-1)+ (for fixed s≥ 2,>0, and all large d). Combined with Corollary <ref>, Lemma <ref> would allow us to find a balanced subdivision of K_h' for some h' = Ω_s,t(d_0^1/(1000000s^2)) inside G_0. Thus this will not give the sharp exponent of Ω(1/s) in the second bullet from Theorem <ref>. Instead, to get the Ω(1/s) bound, it will suffice to establish a polynomial bound when s=t=2, and then bootstrap that. But we saw no reason not to prove the result for general s,t here. Write d:= d_0^1/10,:= 20_0. We shall use the following cleaning result.Let G_0 be as above. Then we can find an induced subgraph G⊂ G_0 with d(G)≥ d/10, Δ(G')≤ d^1+, where |N_G'(x)∩ N_G'(y)|≤ 3d^1-1/(1000s) for all y∈ N_G'(x).We shall prove Claim <ref> by sampling vertices uniformly at random with probability d_0^-9/10 and using the fact that G_0 is K_k-free and K_s,t-free. We delay the details until the end of the proof, and now show how to find the 1-subdivision of some appropriate hypergraph .Without loss generality, we may assume that δ(G)≥ d/20, else we can pass to an induced subgraph of G with higher average degree.Let η=1/2000 (chosen so that η/(2s)>(2s+3) and η<1/1000), and write δ := η/s. We assume k is sufficiently large so that s d^-1/1000 < 1/2.Take p := d^-1-δ, and let B_0 be random subset of V(G) where each vertex is included with probability p. We then define A_0 to be the set of vertices with exactly s neighbors inside B_0. Then define B to be the set of vertices b∈ B_0 with |N(b)∩ B_0| = 0. Finally, we define A to be the set of vertices in a∈ A_0 where N(a)∩ B = N(a) ∩ B_0 (in particular, this implies a∉B_0, since a is adjacent to vertices in B).We will now analyze the expected number of vertices inside A and B, and the number of edges within G[A]. Set n:= |V(G)|.For B, it suffices to note that(x∈ B) ≤(x∈ B_0) =p for x∈ V(G).Understanding A and e(G[A]) is more sensitive. For x∈ V(G), set P_x := (d(x)p)^s. Assuming k is sufficiently large, we shall establish for every x∈ V(G) and y∈ N(y) that(x∈ A) = Θ_s(P_x) (x∈ A andy∈ A) ≤ O_s(P_xP_y).From here, we get that [|A|] ≥ n Ω_s(d^-η) (since G having minimum degree ≥ d/20 implies P_x≥ (d^-δ/20)^s). 
Meanwhile, [e(G[A])] ≤ e(G)max_x≠ y{P_xP_y}≤ O_s(nd^1-2η+(2s+1)).So now, let A' be a random subset of A where each vertex is kept with probability p':= d^-(1-η+(2s+2)) = O_s(d^-[Y]/[X]). We have that [|A'|-e(G[A'])]= p'[|A|]-p'^2[e(G[A])]>(p'/2)[|A|] > nd^-1-(2s+3) for all large d. Thus, we can find some outcome where[|A'|-e(G[A'])-d^-δ/2|B|] >0.By deleting a vertex from each edge in A', we obtain independent sets A”,B with |A”|>d^-δ/2|B| and d_B(x) = s for each x∈ A”. This gives our desired subdivision. We now prove Equations <ref> and <ref>. Here, we shall frequently use the inequalities sd(x)p<1/2 and d(x)p< d^-δd^-1/(1000s) for any x∈ V(G), which are guaranteed by our assumptions. So fix some x, and consider N(x).Letbe the collection of independent sets I⊂ N(x) of size s. By Corollary <ref>, we get that ||= Ω_s(d(x)^s) since d(x)≥ d/20≥ k^100t. And obviously ||≤d(x)s≤ d(x)^s. Thus, we have that(x∈ A_0)= ∑_I∈(B_0∩ N(x) = I) = ||p^s(1-p)^d(x)-s≥Θ_s(d(x)^s)(p^s/2)= Θ_s(P_x).Furthermore, for I∈ we have (x∈ A|B_0∩ N(x) = I) =(B_0 ∩⋃_y∈ I N(y) = ∅|B_0∩ N(x) = I) = (1-p)^|⋃_y∈ IN(y)|≥ 1-sd^1+p ≥ 1/2. So we see (x∈ A) ≥(x∈ A_0)/2, whence (x∈ A) = Θ_s(P_x) (since A⊂ A_0). This establishes Equation <ref>.Finally, we bound (y∈ A_0| x∈ A_0) for y∈ N_G(x). Let _x be the event that |B_0∩ N(x)| = s. Again by Corollary <ref>, we see that (x∈ A_0|_x) = Ω_s(1). So now, for any vertex y∈ V(G), we have that(y∈ A_0 |x∈ A_0) ≤(y∈ A_0| _x)/(x∈ A_0|_x) by Bayes' theorem, and(y∈ A_0|_x) = ∑_i=0^s(|B_0∩ N(x)∩ N(y)| =i|_x)(|B_0∩ (N(y)∖ N(x))| = s-i)≤∑_i=0^s O_s(|N(x)∩ N(y)|/d(x))^i (d(y)p)^s-i≤ O_s( max{|N(x)∩ N(y)|/d,d(y)p}^s). Thus, for y∈ N(x), we have (y∈ A_0|x∈ A_0)≤ O_s(d^-1/1000+P_y) = O_s(P_y) (since P_y =Ω_s(d^-η)). This establishes Equation <ref>.For x∈ V(G), write Y_heavy(x)⊂ N(x) to be the set of y∈ N(x) with |N(y) ∩ N(x)|≥ d_0^1-1/(1000s). Noting d_0^1-1/1000s≥ |N(x)|^1-1/100s (recall |N(x)|≤ d_0^1+_0 and = 1/(5000s^2)), we must have that |Y_heavy,x|≤√(|N(x)|)≤ d_0^2/3 by Lemma <ref> (since G_0 has not K_k and is K_s,t-free).Now W sample vertices with probability d_0^-9/10, and letZ_1 := {x∈ W: d_W(x) > 1+ 10d_0^1/10+_0} Z_2 := {x∈ W: |N(x)∩ N(y)∩ W|> 1+2d_0^1/10-1/(1000s) for some y∈ N(x) ∩ W}.Now take W' := W ∖ (Z_1∪ Z_2) and G := G_0[W']. Deterministically we have Δ(G)≤ 11d_0^1/10+_0 (which is less than d_0^1/10(1+20_0) for large d_0) and that |N_G(x)∩ N_G(y)|≤ 3d_0^1/10(1-1/(1000s)) for adjacent x,y∈ W'. So we just seek an outcome with the right average degree.Clearly, we have [|W'|]≤[|W|] = nd_0^-9/10. The result will now follow if we can get a good lower bound on [e(G)].Let E_light be the set of edges e = {x,y} where |N(x)∩ N(y)|< d_0^1-1/(1000s). It is not hard to see that |E_light|≥ e(G_0)≥ nd_0/4 (since each x belongs to exactly |Y_heavy,x|≤ d_0^2/3 edges not in E_light). For every e= {x,y}∈ E_light, we claim that({x,y}⊂ W' | {x,y}⊂ W)≥ 1/2. This will imply that [e(G)] ≥(d_0^-9/10)^2/2|E_light| > d_0^1/10/10[|W'|], so there is an outcome of G with average degree ≥ d_0^1/10/10, as desired. To establish Equation <ref>, we just consider e={x,y}∈ E_light. Conditioning on {x,y}⊂ W, we will control the probability that {x,y}∩ (Z_1∪ Z_2) is non-empty. Controlling Z_1 is easy. By Markov's inequality (in a similar fashion to in Lemma <ref>), we have that (x∈ Z_1|{x,y}∈ W)≤[|W∩ (N_G_0(x)∖{y})|]/10d_0^1/10+_0<1/10.Meanwhile for Z_2, we have that(x∈ Z_2|{x,y}⊂ W)≤∑_y' ∈ N_G_0(x)(y'∈ W)(|(N(x)∖{y})∩ N(y')∩ W|>2d_0^1/10-1/(1000s)). 
Since |Y_heavy(x)|≤ d_0^2/3, and the contribution from those vertices in the above sum is at most d_0^2/3d_0^-9/10<1/20. Meanwhile, for any y'∈ N_G_0(x)∖ Y_heavy(x), we have that (|(N(x)∖{y})∩ N(y')∩ W|≥ 2d_0^1/10-1/(1000s)) ≤(Bin(m,p)≥ 2mp) with m:= d_0^1-1/(1000s),p:= d_0^-9/10. By a Chernoff bound, we have that (Bin(m,p)≥ 2mp)≤exp(-mp/3) for arbitrary m≥ 1,p∈ (0,1). So for large d_0, we have that exp(-d_0^1/10-1/(1000s))<d_0^-(1+)/20. Whence, the contribution from the other terms is at most 1/20.So altogether (x∈ Z_1∪ Z_2|{x,y}⊂ W)≤ 1/5. And by symmetry the same should hold for y. Thus ({x,y}⊂ W'|{x,y}⊂ W) ≥ 1-({x,y}∩ (Z_1∪ Z_2)|{x,y}⊂ W)≥ 3/5≥ 1/2. This completes the proof.§.§ Remove 4-cyclesLet ε≤ 1/18.Let H be a graph with clique number k and δ(G)=d≥ C(t)k^100t. Suppose furthermore that Δ(H)≤ d^1+ε and H does not contain an induced K_2,t. Then, H contains a C_4-free induced subgraph H' with d(H')≥ d^1/10. Let us count the number of C_4's in H. We defines_1=#((x,y,z,w): xy, yz,zw,wx∈ E(H)andG[{x,w,z,w}]≠ K_4)and define s_K_4 to be the number of K_4's in H. A simple double counting shows that the total number of ordered C_4's is at most 4!(s_1+s_K_4).s_1≥s_K_4/k-d^1+2εk n.Every K_4 can be counted in the following way. First, we fix a vertex x∈ V(G), for which there are n ways. Now, we choose another vertex y∈ N(x), for which there are at most d^1+ε ways. Finally, it remains to count the number of edges in G[N(x)∩ N(y)]. Let |N(x)∩ N(y)|=m. Observe that since G is K_k+1-free, we have by Turán's Theoreme(G[N(x)∩ N(y)])≤ (1-1/2k) m2 and therefore the number of non-edges within N(x)∩ N(y) is at least 1/2k-proportion of the number of edges and hence s_1≥s_K_4/k-d^1+2εk^2 n.Let us assume for now that the number of ordered C_4's is at least d^43/18n. It follows from the above claim that s_1≥ d^42/18 (using that k≤ d^1/18).As there are at most nd^2+2ε non-edges with a common neighbour, a double counting shows that there is a non-edge (x,y) for which |N(x)∩ N(y)|≥ d^6/18-2ε . As ε≤ 1/16,|N(x)∩ N(y)|≥ k^5t and so by Ramsey's Theorem N(x)∩ N(y) must span an independent on at least t vertices which an induced K_2,t.Therefore, we may assume that the number of ordered C_4's is at most d^43/18n. Now, let p=d^-7/8 and G_p be a random induced sub-graph of G where each vertex is chosen with probability p independently. Consider the following random variables.Let X=V(G_p), Y=e(G_p), Z= #{edges forming a C_4's in G_p} and W=#{(C,e): C is a C_4 and e is an edge incident with a vertex in V(C) and the other in V∖ V(C)}.An easy calculation shows that 𝔼[X]=pn, 𝔼[Y]≥ dp^2n, 𝔼[Z]≤ 4d^43/18p^4 and finally 𝔼[W]≤ 4d^43/18d^1+εp^5. Therefore,𝔼[Y-Z-W-d^1/10X]≥ dp^2n-4d^43/18p^4-4d^43/18d^1+εp^5 - d^1/10np>0.Fix a choice for which the above is positive.Hence, by deleting a vertex incident with every C_4 in G_p we have that the remaining graph still has d^1/10|V(G_p)| edges and hence average degree at least d^1/10 and no C_4's. I guess the important lemma is the following.Let d,k be positive integers and s ≥ t where d≥ C(s)k^100t. Let G be a graph on d ≤ n≤ d^1+1/100t vertices. Moreover, suppose that G has no clique of size k and no induced K_s,t. Then, e(G)≤ nd^2-1/10t. Therefore, G contains an independent of size d^1/20t. For general graphs H, we cannot hope to find an induced copy of K_s,t. Indeed, we can have sparse graphs of large average degree, that do not even any C_4's as subgraphs!But we can use the following trick. 
If a graph does have multiple triangles containing a shared edge, then there do now exist C_4's (meaning we can't be totally sparse, like in the last example). Thus, we shall establish the following dichotomy: either H does not have too many triangles, in which case we can pass to a truly sparse triangle-free graph, or there are many triangles, which will allow us to find an induced K_s,tLet s≥ 2, and take ≤ 1/20s. Furthermore, consider t≥ 2s^2 and k so that k+ss≤ k^s/6 and Let H be a graph with no k-clique and no induced K_s,t, such thatδ(H) =d ≥ C k^100tandΔ(H)≤ d^1+.Then H contains a C_4-free induced subgraph H' with d(H')≥ d^1/10s.For r≥ 1, let _r⊂V(H)r denote the set of r-subsets, X, whose common neighborhood N(X):=⋂_x∈ X N(x) is non-empty. Also, let's further write _r⊂_r to denote the set of X∈_r where H[X] = ∅ (meaning X is an independent set of size r). Let _4 be the set of 4-cycles C⊂ H. We shall consider two cases.Write δ := 1/20s.Case 1 (|_4|≥ nd^3-δ):Note that nd^3-δ≤ |_4|≤∑_X∈_2|N(X)|^2. Now, we have that |_2|≤ nd^2(1+) and |N(X)|≤ d^1+ for each X∈_2. Thus, writing _2' ⊂_2 to denote the 2-subsets X with |N(X)| ≥ d^1-(δ+2)/2, we get that∑_X∈_2' |N(X)|^2≥ nd^3-δ/2. By convexity, we have that∑_X∈ X_2' |N(X)|^s≥ nd^2(1+) (d^1-(δ+2))^s-12^-(s+1)≥ nd^s+1-s(δ+2)2^-(s+1). Applying Corollary <ref>, for each X∈_2' we get that#(I∈_s: I⊂ N(X))/|N(X)|^s≥1/k+ss^s≥ 2^s+1d^-1/2 (since our assumptions on d,s,t imply d^1/2≥ 2^s+1k+ss^s). By double counting, we get that∑_I∈_s |N(I)|^2 ≥∑_X∈_2#(I∈_r: I⊂ N(X)) ≥ nd^s+1-s(δ+2)-1/2. Next, we have that |_s|≤ |_s| ≤ nd^s(1+). So by pigeonhole, we have that|N(I)|^2 ≥ d^1/2-s(δ+3)≥ d^1/2-1/5= d^3/10 for some I∈_s. So, we get that |N(I)| ≥ d^3/20≥k+tt, implying H[N(I)] contains an independent set J of size t. Whence, H[I∪ J] ≅ K_s,t. This contradicts the initial assumption.Case 2 (|𝒯| <nd^2-δ): Let 𝒮 be the set of pairs (e,C) such that C∈𝒞_4 and e is an edge which intersects V(T).Clearly, we have |𝒮|≤ 4d^1+|𝒞_4|. *just do a standard deletion argument*For general graphs H, we cannot hope to find an induced copy of K_s,t. Indeed, we can have sparse graphs of large average degree, that do not even any C_4's as subgraphs!But we can use the following trick. If a graph does have C_4's containing a shared edge, then there do now exist C_4's (meaning we can't be totally sparse, like in the last example). Thus, we shall establish the following dichotomy: either H does not have too many triangles, in which case we can pass to a truly sparse triangle-free graph, or there are many triangles, which will allow us to find an induced K_s,t Let s≥ 2, and take ≤ 1/40s. Furthermore, consider t≥ s and k so that k+ss≤ k^s/6 and Let H be a graph with no k-clique and no induced K_s,t, such thatδ(H) =d ≥ C k^100tandΔ(H)≤ d^1+.Then H contains a C_4-free induced subgraph H' with d(H')≥ d^1/10s.For r≥ 1, let _r⊂V(H)r denote the set of r-subsets, X, whose common neighborhood N(X):=⋂_x∈ X N(x) is non-empty. Also, let's further write _r⊂_r to denote the set of X∈_r where H[X] = ∅ (meaning X is an independent set of size r). Let _4 be the set of 4-cycles C⊂ H. We shall consider two cases.Write δ := 1/20s.Case 1 (|_4|≥ nd^3-δ):Note that nd^3-δ≤ |_4|≤∑_X∈_2|N(X)|^2. Now, we have that |_2|≤ nd^2(1+) and |N(X)|≤ d^1+ for each X∈_2. Thus, writing _2' ⊂_2 to denote the 2-subsets X with |N(X)| ≥ d^1-(δ+2)/2, we get that∑_X∈_2' |N(X)|^2≥ nd^3-δ/2. By convexity, we have that∑_X∈ X_2' |N(X)|^s≥ nd^2(1+) (d^1-(δ+2))^s-12^-(s+1)≥ nd^s+1-s(δ+2)2^-(s+1). 
Applying Corollary <ref>, for each X∈_2' we get that#(I∈_s: I⊂ N(X))/|N(X)|^s≥Ω_s(1)≥ 2^s+1d^-1/2 (since our assumptions on d,s,t imply |N(X)|≥ k^100tand d^1/2 is arbitrarily large). By double counting, we get that∑_I∈_s |N(I)|^2 ≥∑_X∈_2#(I∈_r: I⊂ N(X)) ≥ nd^s+1-s(δ+2)-1/2. Next, we have that |_s|≤ |_s| ≤ nd^s(1+). So by pigeonhole, we have that|N(I)|^2 ≥ d^1/2-s(δ+3)≥ d^1/2-1/8= d^3/8 for some I∈_s. So, we get that |N(I)| ≥ d^3/16≥k+tt, implying H[N(I)] contains an independent set J of size t. Whence, H[I∪ J] ≅ K_s,t. This contradicts the initial assumption.Case 2 (|_4| <nd^3-δ): Let 𝒮 be the set of pairs (e,C) such that C∈𝒞_4 and e is an edge which intersects V(T).Clearly, we have |𝒮|≤ 4d^1+|𝒞_4|<4nd^4-1/40s. Now a random set V'⊂ V(G) where each vertex is included with probability p = d^1/100s-1 (independently). We let ' be the set of C∈_4 where V(C)⊂ V'. Finally we considerY:= [e(G[V'])],Y:=[|'|], Z:= [#((e,C)∈𝒮:e∈ E(G[V']) andC∈') .Let H' be the induced subgraph on vertex set V' ∖⋃_C∈'V(C). It is clear that e(H')≥ X-6Y-Z and d(H'). Now, we simply calculate that[X] = n(d/2)p^2= 1/2d^1/100s(np),[Y]=|_4|p^4≤ d^3/100s-1/20s(np),[Z]= |𝒮|p^5≤ d^4/100s-1/40s(np) . Thus (assuming d is sufficiently large), we clearly have[X-6Y-Z] ≥ d^1/200s(np)= d^1/200s[|V'|], whence there is some outcome where e(H')≥ d^1/200s|V(H')|, giving us out C_4-free graph with average degree d^1/200s.§ PROOF OF THEOREM <REF>§.§ Avoiding K_2,2 We first go through the proof for the case of s=t=2 (i.e. G has no induced K_2,2). This result will allow us to finish off our proof of the general case by passing to some induced G' that contains no K_2,2. For all sufficiently large k, the following holds. Let G be a C_4-free graph with average degree d:=d(G)≥ k^2500. Furthermore, suppose G has no K_k. Then, G contains an induced (proper) balanced subdivision of K_h, where h≥ d^1/50000.By our reductions (specifically Corollary <ref>), it suffices to find an induced subgraph G'⊂ G, which is the 1-subdivision of some multihypergraph ℋ with average degree d(ℋ)>Ch^2 (here C is some absolute constant).We may assume G is d-degenerate, otherwise we can pass to some subgraph with higher average degree. We now invoke our dichotomy result (Lemma <ref>) with L:= 80d^4. Suppose the first outcome holds. Then, there exist disjoint sets A,B with |A|>40d^4|B| and e(G[A,B])≥ e(G)/4. We may now finish off by invokingLemma <ref> and obtain the desired G'. Otherwise, the second outcome holds. We may then pass to an induced subgraph G^* with d^*:=d(G^*)≥Ω(d/log(d^4)) and Δ(G^*)=O(d^*log(d^4)). Assuming k (and thus d) is sufficiently large, we get that d^*≥ k^2000 and Δ(G^*)≤ (d^*)^1+1/200000. Finally, we apply Lemma <ref> to find our G' (since we may assume d^*≥ d^4/5).There is actually quite a lot of flexibility with how one chooses L. As long as L<2^d^o(1), in the second outcome of Lemma <ref> we are sufficiently close to being regular for the same proof to follow through. §.§ General case We now seek to prove the general case. Here, we need an argument to rule out the case when G is `polynomially dense'. This will be handled by invoking Lemma <ref>. Write d := d(G)≥ k^4000t.Our goal will be to find an induced subgraph G'⊂ G where either: * G' is the 1-subdivision of an s-uniform multihypergraphwith d()≥ d/100;* G' is C_4-free with d(G')≥ d(G)^1/20000s.If the first bullet happens, then we are done by Corollary <ref>. 
And if the second bullet happens, then we are done by Proposition <ref>.Again, we can assume that G is d-degenerate, otherwise we could pass to a subgraph of higher average degree. We then apply our dichotomy (namely Lemma <ref>) with Ld^s+3.Suppose the first case of the dichotomy holds. Then, we can obtain a G' satisfying the first bullet by applying Lemma <ref>.Meanwhile, if the second case of the dichotomy holds, we have found an induced subgraph G^*⊂ G with d^*:= d(G^*)≥ d^1/2 and Δ(G^*)≤ (d^*)^1+1/(500s). Now applying Lemma <ref> to G^* (with := 1/500s), we either get an induced G'⊂ G^* satisfying the second bullet or an induced subgraph G^** on n^**≥ (d^*)^1/2 vertices with d^**≥ (n^**)^1-1/(100s). But this latter outcome is impossible by Lemma <ref> (as we are assuming G is K_s,t-free and has no K_k). Thus we see that we must find one of the desired G', completing the proof. § THEOREM <REF> AND THEOREM <REF>§.§ Bounded induced subdivisionsWe proceed as in the proof of Theorem <ref> except that now things are a bit more involved. First, we need an analogue of Lemma <ref>,which will again allow us to “win” if we can pass to an induced subgraph G' with n'≥ d^Ω(1) vertices where d(G')≥ (n')^1- for an appropriately small choice of .Fix a bipartite graph H = (A,B,E). Write _H:= 1/100 Δ(H) and C^*_H := 100|A|+10|A|^2+2|B|. Then the following holds for all sufficiently large s with respect to |H|. Let G be a graph with n≥ s^C^*_H,d(G)≥ n^1-_H, which has no K_s,s (as a subgraph). Then G contains an induced copy of H. This result can be viewed as a one-sided Erdős-Hajnal result (in the same spirit as a result of Fox and Sudakov <cit.>). Our proof gives better bounds (<cit.> required n>Ω_H(s^Ω(|H|^3|))) and has a weaker assumption (they require that G lacks independent sets of size n^Ω(1), which we have replaced by an average degree condition).okay? or is this rude? Write a:= |A|, and Δ:= Δ(H). First, note that we can find A' ⊂ V(G) with |A'| = √(n) so that every x∈ A' has d_G(x)≥ n^1-2ε_H. By Lemma <ref>, we must have a set A^*⊂ A' where every Δ-subset inside A^* has at least n^9/10 common neighbors inside V(G) and |A^*|≥ n^1/3. Set B^*:= V(G)∖ A^*. We need now a simple observation.For every set W⊂ V(G) with |W|≥ 2s, we have that #{x∈ V(G): |W∖ N(x)|≤ |W|/2s} <s.This clearly holds as we do not have any K_s,s. Now pick x_1,…,x_a∈ A^* independently and uniformly at random. For S ⊂ [a], letY_S := {y∈ V(G): x_i∈ N(y)i∈ Sfor i∈ [a]}. Whenever |S|≤Δ, we will argue that with very high probability, |Y_S|≥ n^9/10/(2s)^a-|S|, for every S⊂ [a]. To start, observe that Y_S has the same distribution as Y_{1,…,|S|} for every S⊂ [a]. Now fix 0≤ℓ≤Δ. For t=1,2,…,a, defineY^(t) =Y_[ℓ]^(t) := {y ∈ V(G): y ∈ N(x_i)i≤ℓfor i∈ [t]}.By assumption on A^*, we have |Y^(ℓ)| ≥ n^9/10 with probability 1. Next, for all t∈{ℓ,…, a-1}, conditioned on the event _t that |Y^(t)|≥ 2s, we have that (|Y^(t+1)|< |Y^(t)|/2s ) < s/|A^*| ≤ n^-1/4 by Observation <ref>. So iterating through each ℓ≤ t<a, we see that (|Y^(a)|≥ n^9/10/(2s)^a-ℓ) ≥ (1-n^-1/4)^a-ℓ≥ 1-a n^-1/4.So by a union bound, we get that(|Y_S|< n^4/5for some S⊂ [a] with |S|≤Δ )≤ 2^a a n^-1/4≤ n^-1/3 (here the estimates n^4/5≤ n^9/10/(2s)^a and 2^a n^-1/4≤ n^-1/3 follow from the assumption that n≥ s^100a).Meanwhile, again by Observation <ref>, we have that {x_1,…,x_a} is an independent set of size a with probability greater than ∏_i=1^a [1/|A^*|(|A^*|/(2s)^i-1-s-(i-1))]≥1/(4s)^a^2 (assuming |A^*| = n^1/3≥ (2s)^a). 
Thus, since n≥ s^10a^2, we can find some independent set I⊂ A^* of size a, so that |Y_S|≥ n^4/5 for all S⊂ [a] with |S|≤Δ. We may now find the required H.Fix some ι:A→ [a], ψ:B→ [|B|]. For each j∈ [|B|], set S_j := ι(N(ψ(j))), and pick some vertex y_j∈ Y_S_j uniformly at random (and independently of the other y_j's). Similar to before, we have that {y_1,…,y_|B|} is an independent set of size |B| with probability at least ∏_j=1^|B|1/|S_j|(|S_j|/(2s)^j-1-(|B|-(j-1))s-(j-1)-a)>0 (assuming n^4/5> (|A|+|B|)(2s)^|B|, which certainly holds for large enough s). This yields our copy of H. We now require a “polychotomy” variant of Lemma <ref> in order to handle unbalanced graphs. Here, either something “abnormal” happens (one of the three bullets listed below), or a fourth “expected” outcome must hold; in each case we shall pass to some induced G' which will lead us to victory. Let h≥ 2 and suppose d is sufficiently large with respect to h.Let G be an n-vertex d-degenerate graph with disjoint vertex sets A_0,B so that |A_0|> d^6|B| and e(G[A_0,B])≥nd/10. Assume G does not have an induced subgraph G' satisfying any of the following:* G' is the 1-subdivision of K_h;* G' has n'≥ d^1/5 vertices with d(G')≥ (n')^1-1/(100h);* G' has no C_4 and d(G')≥ d^1/(50000h). Then we can find some induced subgraph G' which is the 1-subdivision of a simple graph H with average degree d(H)≥ d. We note that in the statement of Proposition <ref> (and Proposition <ref>, stated later), we do not have any assumptions about lacking K_s,s-subgraphs. These are general results which are true for arbitrary unbalanced graphs G. To finish our arguments, we will simply use that any such G' that lacks K_s,s-subgraphs will have a desirable induced subgraph. We delay the proof of Proposition <ref> to the next section. We are now ready to prove our main theorem.Fix h≥ 2 and consider s sufficiently large. Let G have no K_s,s-subgraphs with d:=d(G)≥ s^500h^2. By the largeness of s, we may assume d is sufficiently large. We want to find an induced proper balanced subdivision of K_h. We shall assume that G does not contain an induced copy of the 1-subdivision of K_h, as otherwise we are done. We will argue that we can always pass to an induced subgraph G' satisfying one of the four outcomes: * G' is the 1-subdivision of K_h;* G' has n'≥ d^1/5 vertices with d(G') ≥ (n')^1-1/(100h);* G' has no C_4 and d(G')≥ d^1/(62500h);* G' is the 1-subdivision of a (simple) graph H with d(H)≥ d. If the first bullet holds, we obtain a contradiction. On the other hand, if the second bullet holds, we apply Lemma <ref> to G' (using the facts that G has no K_s,s, n'≥ d^1/5≥ s^C^*_F and that ε_F≥ 1/(100h), where F is the 1-subdivision of K_h) which brings us back to the first bullet, a contradiction.Meanwhile, if the third bullet holds, then we find a balanced subdivision of K_h' with h' ≥ d^1/(250000000h) by Proposition <ref>, which is far greater than h for large d. Finally, if the fourth bullet holds, we may find an induced balanced subdivision of K_h'⊂ G' with h' = Ω(d^1/2) by Corollary <ref> (taking s=t=2 and using the fact H is simple and therefore G' has no C_4).We are going to show now that one of the four bullets must necessarily hold. The result will then follow, as explained above.We may and will assume as well that G is d-degenerate for some d≥ s^500h^2. First, we apply Lemma <ref> (with Ld^6) to G. 
If the first case in Lemma <ref> holds, then we have the exact requirements to apply Proposition <ref> and therefore find some induced G' satisfying one of the first three bullets with the same d.Otherwise, the second case happens and we can pass to some induced subgraph G^* which is almost-regular (with d^*:=d(G^*) ≥ d^1/2 and Δ(G^*)≤ (d^*)^1+1/(500h^2), as d^* is sufficiently large). Here, we apply Lemma <ref> to find some G' satisfying either the second or third bullet.This completes the proof of Theorem <ref>.§.§ General degree-bounded classesWe have now reached the final frontier. In the previous subsection, one of our conditions for success was when G contained the 1-subdivision of K_h as an induced subgraph. Hence we could assume throughout our analysis that G did not contain such an induced subgraph, allowing us to apply Lemma <ref>. Here, we will replace the role of the 1-subdivision of K_h by another graph H_k which has no C_4 and average degree k, so that we may assume throughout G is H_k-free (or else we will be immediately done).For each k≥ 2, there exists a bipartite graph H_k on ≤ 8k^2 vertices with Δ(H_k)≤ 2k, so that d(H_k)≥ k which contains no C_4. Additionally H_k is an induced subgraph of the 1-subdivision of K_8k^2^(2k) (the complete 2k-uniform hypergraph on 8k^2 vertices). Whenever k is a prime-power, one can construct a bipartite k-regular graph H_k on 2k^2 vertices without a C_4 (see e.g. <cit.>). The result now follows for general k because we can always find a power of 2, q= 2^t, where k≤ q< 2k.Now for n,s, write Γ_n,s for the 1-subdivision of K_n^(s). It remains to show that H_k is an induced subgraphΓ_8k^2,2k. This follows from two observations, which can be easily verified. Γ_n,s contains Γ_n-1,s-1 and Γ_n-1,s as induced subgraphs. Let H be an s-regular bipartite graph with bipartition A,B, where N(y)≠ N(y') for distinct y,y'∈ B. Then H is an induced subgraph of Γ_|A|,s.Now because H_k is q-regular for some q∈ [k,2k), it must have distinct neighborhoods (else it would have a 4-cycle). Also H_k has 2q^2≤ 8k^2-(2k-q) vertices (since q<2k), so we can apply the above observations. We now state our modification of Proposition <ref>. Let D,k ≥ 1, and suppose d is sufficiently large with respect to D and k. Let G be an n-vertex d-degenerate graph with disjoint vertex sets A_0,B such that |A_0|>d^D+3|B| ande(G[A_0,B]) ≥nd/10. Moreover, supposeG does not have an induced subgraph G' satisfying any of the following:* G' is the 1-subdivision of K_8k^2^(2k);* G' has n'≥ d^1/5 vertices and d(G')≥ (n')^1-1/(200k);* G' has no C_4 and d(G')≥ d^1/(125000 k). Then, G has an induced subgraph G' with d(G')≥ D that has no K_2k,2k-subgraphs.This fourth outcome will be combined with the following result the authors together with Du, McCarty and Scott showed in <cit.>. Let k,s≥ 2 and let G be a graph with no K_s,s. Then, G has an induced subgraph G'⊂ G with d(G')≥ k, provided d(G)≥ k^Cs^3, for some absolute constant C>0.It would be enough to have a much weaker quantitative version of the above which is easily deduced from two results proved by McCarty <cit.> and Letzter, Kwan, Sudakov, and Tran <cit.>. Fix k≥ 2, and set Dk^Ck^3 as in Theorem <ref>. Consider s large. Now let G be a graph with average degree d:=d(G)≥ s^5000k^4 that contains no K_s,s-subgraphs. We shall deduce that G has an induced subgraph G”⊂ G that has no 4-cycles with d(G”)≥ k. We may and will assume that G does not contain the graph H_k from Lemma <ref> (else we are immediately done). 
We now shall show that we can pass to an induced subgraph G' satisfying one of five outcomes: * G' is H_k;* G' is the subdivision of K_(8k)^2k;* G' has n'≥ d^1/5 vertices with d(G')≥ (n')^1-1/(200k); * G' has no C_4 and d(G')≥ d^1/(125000k);* G' has no K_2k,2k (as a subgraph) and d(G')≥ D.Clearly, the first outcome would be a contradiction, and the second outcome implies the first (by Lemma <ref>). Moreover, the third outcome also implies the first (via Lemma <ref> with H:= H_k, noting that ε_H_k≥ 1/(200k) and n'≥ d^1/5≥ s^1000k^4≥ s^C^*_H_k). Now, the fourth outcome is exactly what we want with room to spare, since d^1/(125000k)>k. Finally, if the fifth outcome happens, then by definition of D, we can pass to some induced subgraph G”⊂ G' that lacks C_4's with d(G”)≥ k, as we wanted to show. Therefore, all that is left to do is to show that one of the five bullets must hold.As before, we may assume G is d-degenerate. Next, we apply Lemma <ref> to G with L 4d^D+3. If the first case holds, then we have some heavily unbalanced partition of G satisfying the requirements as in Proposition <ref> and therefore one of the last four bullets must happen. Otherwise, the second case holds and we can pass to some induced subgraph G^* which is almost-regular (with d^*:= d(G^*)=Ω(d/log(L)^2) ≥ d^1/2 and Δ(G^*)≤ (d^*)^1+1/(1000k), since d is large enough). We may then finish off by invoking Lemma <ref> (taking := 1/(1000k)) to find some G' satisfying either the third or fourth bullet. This completes the proof.§ PROOF OF THE POLYCHOTOMY PROPOSITIONS §.§ Big picture The purpose of this section is to establish Propositions <ref> and <ref>. They will both follow from a single general argument.We first need a “model result”, in the same vein as to Lemma <ref>. This is how we intend to proceed in both of the aforementioned results when none of the “forbidden bullets” occur. Let d,D,k≥ 2 and suppose d is sufficiently large with respect to D and k.Let G be a graph which is d-degenerate with a bi-partition into disjoint vertex sets A,B so that the following holds: * |A|>d^D+1|B|;* G[A] = ∅;* d(x)≤ 100 d for each x∈ A.Furthermore, suppose that for every x∈ A, there is an independent set of size D, I_x:={y_1,…,y_D}∈N(x)D, so that |I_x∩ N(x')|<k for all x'∈ A∖{x}. Then, G contains an induced subgraph G' which is the 1-subdivision H of some D-uniform multihypergraphwith d()≥ d and with the property that H has no K_k,k. The first three bullets are essentially what we get after applying Claim <ref> (in the proof of Lemma <ref>). The crucial condition is the existence of the independent sets I_x, which guarantee us that G' does not have any copy of K_k,k. Let I_x be as in the statement above. Since G is d-degenerate, we can direct the edges of G[B] to obtain a digraph Q with maximum out-degree at most d.Now, let B' be a random subset of B where each vertex is included in B' with probability p:= 1/(1000Dd) (independently). Let A' be the set of x∈ A for which N(x)∩ B' = I_x and B” be the set of y∈ B' with |N_Q^+(y)∩ B'| = 0. Finally, let take A”:= {x∈ A': N(x)∩ B” = N(x)∩ B'}.It is clear that [|B”|]≤[|B'|] = |B|p. Meanwhile, for a fixed a∈ A, we have that (a∈ A') = p^D(1-p)^d(x)-D≥ p^D(1-p(100d))≥ p^D/2 and furthermore(x∈ A”|x∈ A') ≥ 1-∑_y∈ I_x(y∉B”|x∈ A') ≥ 1-Ddp≥ 1/2. Using that G[I_x]=∅. Therefore, we have [|A”|] ≥ |A|p^D/4≥ |B| (using that d is sufficiently large). Whence [|A”|-d|B”|]≥ 0 and so there is some outcome with |A”|≥ d|B”|.Taking G' := G[A”∪ B”] gives the desired graph. 
Indeed, clearly this is the 1-subdivision of some s-uniform hypergraphwith d()≥ d. Observe that by assumption we have |N_G'(x)∩ N_G'(x')|≤ |I_x∩ N(x')|<s for distinct x,x'∈ A” which will imply G' cannot have a K_s,s, as we wanted to show. To finish, we show that we can always reduce to this case, unless one of the three forbidden bullets happens. Let h,k,D ≥ 2 along with an ∈ (0,1/10), and suppose d is sufficiently large with respect to h,k and D.Let G be an n-vertex d-degenerate graph with disjoint vertex sets A_0,B so that |A_0|> 100d^D+3|B| and e(G[A_0,B])≥nd/10. Suppose G does not have an induced subgraph G' satisfying any of the following:* G' is the 1-subdivision of K_h^(k); * G' has n'≥ d^1/5 vertices with d(G')≥ (n')^1-5;* G' has no C_4 and d(G')≥ d^/125. Then, we can find A⊂ A_0 so that A,B satisfies all the conditions of Lemma <ref> (with the same choice of D,k).We delay the details to the upcoming subsection (Subsection <ref>). We now quickly discuss how to deduce our polychotomies propositions.Apply Propositions <ref> and <ref> with h:=h, k:=2,D:=2,:= 1/500h. If none of the three forbidden bullets hold then we may pass to an induced subgraph G'⊂ G, which lacks 4-cycles and is the 1-subdivision of a multihypergraph H with d(H)≥ d. Since G' has no C_4, we have that H must be a simple graph (since if we had multiple edges in H, then this would create a C_4 inside G'). Thus, we may now apply Corollary <ref> to find a balanced subdivision of K_h' inside G' where h'≥Ω(√(d)). Assuming d is sufficiently large, we get h'≥ h, giving the desired result.Exactly as above, apply Propositions <ref> and <ref>to G with h:=8k^2, k:=2k,D:=D, := 1/1000k. If none of the three bullets hold, then we may as before pass to an induced subgraph G'⊂ G which has no K_k,k and which is the 1-subdivision of a D-uniform hypergraphwith d()≥ d.Since d() is larger than s, it is not hard to see that d(G')≥ D ( will have more edges than vertices, so G' is a bipartite graph where all vertices in the larger part have degree D).§.§ Establishing the last propositionThis section is devoted to proving Proposition <ref>. After appealing to results like Claim <ref> and Lemma <ref>, we will morally reduce to the following problem. Consider x∈ A_0 with N(x)⊂ B being an independent set of size d, and with |N(x)∩ N(x')| not being “too large” for any other x'∈ A_0∖{x}. Here, we would like to find some I_x∈N(x)D where |I_x∩ N(x')|<k for each x'∈ A_0∖{x'}, or otherwise find an induced subgraph G' isomorphic to the 1-subdivision of K_h^(s). We handle the task of finding I_x or a copy of K_h^(s) via a “shattering lemma” (Lemma <ref>), which is inspired by the notion of VC-dimension. Although a more quantitative shattering result follows by applying the well-known Sauer-Shelah lemma, we choose to give a simple self-contained proof in the spirit of Lemma <ref>. Given a family ℱ⊂𝒫([n]), we say that ℱ k-shatters a set R⊂ [n] if for every S ∈Rk, there is F∈ℱ such that R∩ F=S. We start with an easy preliminary observation.Let r≥ k≥ 2 and n ≥ 10 r^2, and suppose ⊂([n]) has the property that for every S∈[n]k, there exists some F∈ with F⊃ S. Furthermore, suppose that |F|≤ n/r^k+1 for every F∈. Then, there is some r-subset R⊂ [n] which is k-shattered by F.For each subset S∈[n]k, we assign some F_S ∈ so that S⊂ F_S (if there are multiple choices, pick one arbitrarily). Now, let x_1,…,x_r∈ [n] be chosen uniformly at random (with repetitions allowed). We have that (x_1,…,x_r are all distinct)≥ 1- r21/n≥ 9/10. 
Next, for e∈[r]k, we bound the probability of the event, _e, that (x_i)_i∈ e are all distinct and x_ℓ∈ F_{x_i:i∈ e} for some other ℓ∈ [r]∖ e. Since |F_{x_i:i∈ e}|≤ n/r^k+1, and each x_i is chosen uniformly at random, a union bound gives (_e) ≤ (r-k)/r^k+1≤ 1/r^k. Summing over the e, we have that (_e holds for some e)≤rk1/r^k≤ 1/2. Thus, with positive probability x_1,…,x_r are all distinct, and none of the events _e happen. Which means that R:={x_1,…,x_r} is a set of size r which is s-shattered by . We now simply apply the hypergraph Ramsey theorem (which says that for every k,h,D, there exists some finite N so that every 2-coloring of [N]k either has a h-set R where Rk is monochromatic in the first color, or a D-set B where Bk is monochromatic in the second color) to finish. Let D,h,k≥ 2. Then, there exists δ:= δ(D,h,s)>0 so that the following holds for all sufficiently large n.Let ⊂([n]) where F<δ n for each F∈, and which does not k-shatter a set of size h. Then there exists a set I⊂ [n] of size D where |I ∩ F|<k for all F∈.Let N be the 2-color Ramsey number of K_h^(k) and K_D^(k) and let δ := 1/N^k+1. Now consider a family ⊂([n]) where |F|<δ n. Define ⊂[n]k to be the set of all S∈[n]k such that S⊄F for all F∈. As n is large, we have that ∪ satisfies the assumptions of Lemma <ref> (with k:=k,r:=N), thus we can find some set X of size N which is k-shattered by ∪. For each k-subset S∈Xk, we color it red if S = F∩ X for some F∈, and otherwise we color it blue. Note that if S is colored blue, we must have that S∈. By our choice of N, we can either find a set R∈Xh where all k-subsets are red (which implies thats-shatters a set of size h), or a set B∈XD where all k-subsets are blue. By assumption R does not exist, so we are done taking I:= B (we cannot have |I∩ F|≥ k for any F∈, as this would mean B contains a non-blue s-subset).Let r≥ 2 and n≥ 10r^2 and suppose we have a family of subsets ℱ⊂𝒫([n]) with the following two properties.* For all F∈ℱ, |F|≤ n/r^3. * For every x,y∈ [n], there is F∈ℱ, with {x,y}⊂ S. Then, ℱ 2-shatters a set of size r.For each pair e∈[n]2, we assign some F_e ∈ so that e⊂ F (if there are multiple choices, pick one arbitrarily). Now, let x_1,…,x_r∈ [n] be chosen uniformly at random (with repetitions allowed). We have that (x_1,…,x_r are all distinct)≥ 1- r21/n≥ 9/10. Next, for i,j∈ [r], we bound the probability _i,j, that x_i≠ x_j and x_k∈ F_{x_i,x_j} for some other k∈ [r]∖{i,j}. Since |F_{x_i,x_j}|≤ n/r^3, and each x_k is chosen uniformly at random, a union bound gives (_i,j) ≤ (r-2)/r^3≤ 1/r^2. Summing over the i,j, we have that (_i,j holds for some i,j)≤r21/r^2≤ 1/2. Thus ({x_1,…,x_r} is a set of size r which is 2-shattered by )≥ 4/10>0.OLD PROOF BELOW:Delete sets from ℱ maintaining property (2). Let ℱ' be this new family. Observe that for every S∈ℱ', there must exist some x≠ y∈ S where no other S'≠ S contains both x,y. Consider an auxiliary graph G on [n] where we add an edge (x,y) if that is the private pair of vertices of a set S ∈ℱ'. Now, there must exist at least n2/s2 such edges since each S contributes with at most s2 private edges and hence there is a vertex v_1 with degree d_1≥ n/2s^2. Let N_G(v_1)={x_1,…, x_d_1} and let the sets S_1,…, S_d_1 be so that (v_1,x_i) is a private edge of S_i.We now construct a nice neighbourhood N_1⊂ N_G(v_1) of v_1. Indeed, choose x_1∈ V_1 and delete from V_1 all other vertices of S_1 and add x_1 to N_1. Note that we delete at most s-1 vertices from V_1. 
We choose now a leftover vertex x_2 (without loss of generality) add it to N_1 and again delete from V_1 all other vertices of S_2, we do the same until there are no vertices left. It is clear at the end of the process ℓ |N_1|≥ d_1/s≥ n/2s^3≥ n/s^6. Let N_1={x_i_1,…, x_i_ℓ}, note by construction we have that S_x_j∩ N_1={x_i_j} and no S'∈ℱ' distinct from all S_x_i_j contain v_1, x_i_f, for any f∈ [ℓ]. Now, look at the induced graph G[N_1] and precede as above i.e. pick a vertex v_2 of degree |N_1|/s^2 in G and find a subset N_2⊂ N_G(v_2)∩ N_1 as above of order at least |N_1|/s^6. It is clear that after r steps the set {v_1,… v_r} is 2-shattered.Finally we deduce Proposition <ref>. We apply first Claim <ref> so we may find some A'⊂ A_0 with |A'|>3 d^D+2|B|, so that G[A'] = ∅ and d_B(x) ∈ [d/20,10d] for each x∈ A'. Thus, the requirements in the bullet points in order to apply Lemma <ref> are satisfied. All that is left is to find the appropriate choices for I_x. Actually, we will have to pass to a subset A⊂ A' with |A|≥ |A'|/(3d) and find the I_x, for all x∈ A. To find the I_x, we shall use the following result.Let x∈ A' be any vertex. Suppose we cannot find an induced subgraph G' satisfying any of the three bullets (in Proposition <ref>), then there must exist some independent set I_x = {y_1,…,y_D}∈N_B(x)D where #{x'∈ A' ∖{x}: |N(x')∩ I_x|≥ s} <d.We shall prove the above claim in a moment. But first, we quickly show how to deduce our result using this claim. For each x∈ A', we can fix some I_x given by Claim <ref>. Now create an auxiliary graph H on A', where x∼ x' if |I_x∩ N(x')|≥ s and/or |I_x'∩ N(x)|≥ s. We have that Δ(H)<2d, so we can greedily find an independent set A⊂ V(H) = A' with |A|≥ |A'|/(2d+1)≥ |A'|/(3d). This choice of A has all the desired properties. Indeed, all the bulleted assumptions from Proposition <ref> still hold. Meanwhile for x∈ A with I_x = {y_1,…,y_D}, we have {x'∈ A∖{x}: |N(x')∩ I_x|≥ s}⊂ N_H(x)∩ A =∅ since H[A] = ∅ (so the required conditions about the I_x are satisfied). It remains to prove Claim <ref>.Fix x∈ A', and assume none of the aforementioned G' exist. By construction of A', we have that d_B(x)≥ d/20.First, we claim there must be an independent set J=J_x⊂ N(x) with |J|=√(d/20). Assuming otherwise, then applying Lemma <ref> to G[N(x)] with :=, we get some induced subgraph G'⊂ G[N(x)] satisfying either the second or third forbidden bullet (since d_B(x)≥ d/20> d^4/5 assuming d is large).We now seek to find I_x within J. Let S=S_x be the set of vertices x'∈ A_0∖{x} with d_J(x')≥ |J|^1-, whose neighborhood correlates heavily with J. We must have |S|<|J|, otherwise we can pass to an induced subgraph G'⊂ G[J∪ S] on 2|J|≥ d^1/5 vertices with average degree ≥ |J|^1-/2, which satisfies the second forbidden bullet (assuming 2≤ n^/10, which holds when d is large).We shall now find some I_x∈JD⊂N(x)D where {x'∈ A': |N(x')∩ I_x|≥ s}⊂{x}∪ S. Since |S|<|J|<d (and recalling J is an independent set), this will complete the proof.Write := {N(x')∩ N(x): x'∈ A'∖ ({x}∪ S)}. Since d is sufficiently large, we have that |F|<δ |J| for every F∈ (where δ = δ(h,D,s)>0 is the constant from Lemma <ref>). Now assuming (for the sake of contradiction) that no such I_x exists, then by Lemma <ref>,will s-shatter a set of size h (assuming d is sufficiently large so that |J| is sufficiently large). This implies that there are distinct vertices y_1,…,y_h∈ J so that for each e∈[h]s, there exists some x_e ∈ A' where N(x_e) ∩{y_1,…,y_h} = {y_i:i∈ e}. 
Taking G' := G[{y_1,…,y_h}∪{x_e:e∈[h]s}], we have that G' is the 1-subdivision of K_h^(s) (since J,A' are both independent sets within G). So this s-shattered set would mean we have a G' satisfying the first forbidden bullet, which can't happen. So the choice of I_x must exist.§ PROOF OF THEOREM <REF> Let G be a graph with d(G)≥ C(t)k^100t and ω(G)≤ k. Since we may pass to a subgraph of G of highest average degree we may and will assume that G is d-degenerate and has d(G)≥ d,δ(G)≥ d/2.Applying Lemma <ref> to G with L=10^10d^4, we either find an induced subgraph G'⊂ G with d(G')=Ω(d/log^2(d)) and Δ(G')=O(log^2(d)d(G')) or a partition V(G)=A∪ B, where |A|≥L|B|/2 and e(G[A,B])≥nd/4.Suppose the second case holds. Then, applying Lemma <ref> to G and the partition A,B, using the same choice of d, we obtain an induced t-theta graph. We may then assume the first case holds.Let G' be such an induced subgraph. We shall apply Lemma <ref> to G' with ε=1/20. Note that Δ(G')=O(log^2(d)d(G'))≤ (d)^1+ε provided d is large enough (which we assume by taking C(t) sufficiently large). Furthermore, due to Theorem <ref>, one can find an induced subdivision of K_t+2 inside Hby taking C(t) sufficiently large (which contains an induced subdivision of K_2,t, which is a t-theta graph). This already suffices to answer Question <ref> in the affirmative.We shall now focus on establishing Theorem <ref>. We require a more quantitative result to replace Theorem <ref>. In <cit.>, it is proven that C(k) can be taken to be k^O(1) in Theorem <ref>, which immediately implies Theorem <ref>. However, that proof relies upon many results proved throughout <cit.> (some of these being a bit complicated). We shall show how to deduce our main theorem with a simpler argument. We need the following:<cit.> Let G be a {C_3,C_4}-free graph with average degree d = d(G)≥Δ(G)^1-1/100. Then G contains an induced bipartite subdivision of K_k with k= Ω(d^1/5). The proof of Proposition <ref> given in <cit.> is a self-contained probabilistic sampling trick, combined with Theorem <ref>. Essentially it is just a variant of the sampling trick which we applied to G[A'∪ B] in the proof of Lemma <ref>.The plan is now to repeatedly apply the dichotomy from Lemma <ref>, until we can apply Proposition <ref>.Let H⊂ G' be a C_4-free induced subgraph where d_1 d(H)≥ d^1/10≥ d^1/12. Let H'⊂ H be a subgraph of H with highest possible average degree.Hence, d_2 d(H')≥ d^1/12, δ(H')≥ d_2/2 and H is d_2-degenerate. We apply once again Lemma <ref> to H' with L=10^10d_2^4. If the second case holds exactly as before we would obtain an induced t-theta graph. Otherwise, we find an induced subgraph H”⊂ H' withd_3 d(H”)=Ω(d_2/log^2(d_2) )≥ d^1/15 and Δ(H”)=O(log^2(d_3)d_3)≤d_3^(1+1/100).We are going to show that H” must contain an induced subdivision of K_2,t which concludes our proof.It is easy to see that as H” is C_4-free it contains at most 2nd_3^1+1/100 triangles as the neighborhood of every vertex contains at most a matching. The same argument as in the proof of Lemma <ref> allow us to pass to an induced subgraph of H” with still high average degree and no triangles. We may then without loss of generality assume that H” contains no triangles. For clarity we state here the properties of our graph H”.* d(H”) d_3≥ d^1/15;* Δ(H”)≤ d_3^(1+1/100);* girth of H” is greater or equal to 5; As of our final step we have to find the required induced t-theta graph. 
We observe that, as H” is C_4-free, its number of vertices n := |V(H”)| satisfies n ≥ c d_3^2 (since ex(n, C_4) = O(n^3/2)).

§ NEW BOUNDS FOR TRIANGLE-FREE GRAPHS

Let G be a graph with no induced copy of K_s,t; alternatively, let G be a triangle-free graph with no copy of K_s,t. Fix ε small with respect to s and t. Assume δ(G) ≥ d and Δ(G) ≤ d^1+ε. Then we can pass to a C_4-free subgraph of polynomial average degree.

§ CONCLUDING REMARKS

We have shown that every degree-bounded class of graphs is polynomially degree-bounded. It could be interesting to classify all degree-bounded classes with a degree-bounding function which grows linearly.

Regarding χ-bounded classes of graphs, our knowledge is much more limited. Recall that we do not even know whether, for every tree T, the class of T-free graphs is χ-bounded. This has only been verified for very specific trees (see e.g. <cit.>), which include all paths. We conjecture that, in fact, for every path P the class of P-free graphs is polynomially χ-bounded: for every k, there is C_k>0 such that every graph G with χ(G) ≥ ω(G)^C_k contains an induced path of length k. Scott, Seymour and Spirkl <cit.> showed recently that the class of P_5-free graphs is χ-bounded, with χ-binding function f(x) = x^log_2(x).

Recall that Theorem <ref> states that if the average degree of a graph G is sufficiently large compared with its clique number, then G must either contain an induced complete bipartite graph or an induced subdivision of every small graph, but we cannot ensure either of the two structures by itself. What happens if we replace average degree with chromatic number? We observe again that neither of these structures could individually be forced, as there are graphs of arbitrarily high chromatic number and girth at least 5, and triangle-free graphs with arbitrarily high chromatic number and no induced subdivision of the 1-subdivision of K_5 <cit.>.

Scott and Seymour <cit.> showed that the family of graphs without an induced subdivision of an (ℓ,t)-theta graph, i.e. a graph consisting of two vertices joined by t vertex-disjoint paths of length at least ℓ, is χ-bounded. We conjecture that avoiding the family of long balanced subdivisions of K_2,t is already enough to guarantee χ-boundedness: for every ℓ and t, there is f_ℓ,t: ℕ→ℕ such that every graph G without an induced copy of a t-theta graph in which all paths have the same length, at least ℓ, satisfies χ(G) ≤ f_ℓ,t(ω(G)). Perhaps f_ℓ,t could even be taken to be polynomial, but this is not known even for cycles (the case t=2), since a positive answer would resolve Conjecture <ref>.

§.§ Acknowledgements

The authors would like to thank Alex Scott for helpful remarks.

§.§ Further questions

Let H be a bipartite graph. Does there exist some C_H such that the following holds for all s ≥ 1: for every H-free graph G with d(G) ≥ s^C_H, we either have
* G contains a K_s,s;
* or there is an induced subgraph G'⊂ G with d(G') ≥ s that contains no 4-cycles.
Write τ(G) to denote the maximal s so that G contains a K_s,s, and κ(G) to denote the maximal k so that there is an induced subgraph G'⊂ G with d(G') ≥ k where G' has no 4-cycles. The above asks whether every H-free graph G with d(G) ≥ d satisfies max{τ(G),κ(G)} ≥ d^Ω_H(1). With some minor adjustments, the methods from our paper easily confirm this conjecture when H = K_s,t or H is the 1-subdivision of K_h. By more aggressively applying our dichotomy result (Lemma <ref>), our methods show that we always have either τ(G) ≥ d^Ω_H(1) or κ(G) ≥ ω_d→∞(1), although the second guarantee is much weaker.
http://arxiv.org/abs/2310.18452v2
{ "authors": [ "António Girão", "Zach Hunter" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20231027195255", "title": "Induced subdivisions in $K_{s,s}$-free graphs with polynomial average degree" }
http://arxiv.org/abs/2310.18126v1
{ "authors": [ "Dmytro Kolisnyk", "Friedemann Queisser", "Gernot Schaller", "Ralf Schützhold" ], "categories": [ "quant-ph", "cond-mat.stat-mech", "physics.app-ph" ], "primary_category": "quant-ph", "published": "20231027131558", "title": "Floquet analysis of a superradiant many-qutrit refrigerator" }
Copyright 2023 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CHR 2023: Computational Humanities Research Conference, December 6 – 8, 2023, Paris, France

Rebecca M. M. Hicke, Department of Computer Science, Cornell University, USA (ORCID 0009-0006-2074-8376; [email protected]; https://rmatouschekh.github.io). Corresponding author.
David Mimno, Department of Information Science, Cornell University, USA (ORCID 0000-0001-7510-9404; [email protected]; https://mimno.infosci.cornell.edu).

Large language models have shown breakthrough potential in many NLP domains. Here we consider their use for stylometry, specifically authorship identification in Early Modern English drama. We find both promising and concerning results; LLMs are able to accurately predict the author of surprisingly short passages but are also prone to confidently misattribute texts to specific authors. A fine-tuned T5 model outperforms all tested baselines, including logistic regression, SVM with a linear kernel, and cosine delta, at attributing small passages. However, we see indications that the presence of certain authors in the model's pre-training data affects predictive results in ways that are difficult to assess.

Keywords: stylometry, large language models, Early Modern English drama

T5 meets Tybalt: Author Attribution in Early Modern English Drama Using Large Language Models
January 14, 2024

§ INTRODUCTION

Stylometry is a key tool for computational humanities research. Author identification provides a clear test case for methods that seek to identify “style,” which in turn can be used to answer many questions of interest to humanists. However, current attribution methods require substantial amounts of known-author text for training as well as large amounts of text for identification. Large language models (LLMs) are powerful and now widely used. They develop a statistical model of language through training on a large, unorganized corpus. By encoding information from large amounts of contextual data in their parameters, they are often able to extract subtle, complex patterns from relatively short text segments. LLMs have proven useful for tasks such as detecting scenes in German dime novels <cit.>, predicting TEI/XML annotations for plain-text editions of plays <cit.>, and understanding ancient Korean documents <cit.>.

In this work we consider whether LLMs can be applied to authorship identification and whether they might allow us to stretch the boundaries of stylometry to increasingly short passages. To evaluate these questions, we consider a deliberately difficult setting: Early Modern English drama. The language of Early Modern drama is sufficiently far from contemporary English that it may be challenging for LLMs primarily trained on modern text to parse. Additionally, the culture of co-authorship and collaboration among writers during the Early Modern era often makes it difficult to distinguish stylistic delineations between individuals. Despite its challenges, the attribution of Early Modern drama is a well-studied field, and techniques like cosine delta <cit.> achieve high accuracy at identifying plays. Yet, these methods still struggle to attribute short passages of text. We are specifically interested in determining whether fine-tuned LLMs can improve performance in this area.
To this end, we provide the LLM with 5 to 450 word speaker utterances for both fine-tuning and testing. The average length of utterances in our test dataset is only 28.2 words. We have three primary findings. First, for short texts the fine-tuned LLM outperforms all tested baselines, including logistic regression, a support vector machine (SVM) with a linear kernel, and cosine delta.Accuracy varies by author and is not fully explained by the number of plays by the author in the fine-tuning set. Second, LLMs are more prone than cosine delta to confidently misattribute texts to specific authors. These “scapegoat” authors often have large vocabularies and word use similar to the corpus average. Third, trained LLMs may be able to quantify “style”.When we apply the model trained on Early Modern drama to “attribute” excerpts of plays written between the 1500s and 1900s, we see an increasing proportion attributed to Shakespeare, possibly suggesting a quantification of his lasting influence.§ RELATED WORK Many different methods have been used to perform authorship attribution tasks with Early Modern drama. These include function word adjacency networks <cit.>, multi-view learning <cit.>, clustering algorithms <cit.>, and SVMs with rolling attribution <cit.>. All of these studies attempt to attribute complete plays except <cit.>, which attributes scenes with more than 100 lines. We are not aware of any use of large language models for Early Modern attribution.Attempts to attribute shorter passages in Early Modern drama have been controversial. These studies include the attribution of 63 words from Macbeth <cit.> and samples of 173 words from Henry VI, Part 1 <cit.>. They have been critiqued <cit.> in part because the sections of text studied were so short. While we attempt short text attribution, we select samples broadly from many plays rather than focusing on specific passages.Work has also been done on the attribution of short texts in different fields. Cosine similarity is effective at attributing 500 word excerpts from blogs <cit.>. Similarly, topic models are able to attribute email and blog snippets with average length 39 and 57 words <cit.> and the Source Code Authorship Profile (SCAP) method attributes tweets of 140 characters or shorter with high accuracy <cit.>. None of these studies use LLMs, and all use modern datasets.Some researchers have begun testing the feasibility of using LLMs for attribution. These studies used the embedding output of LLMs to train custom attribution models using LSTMs <cit.> or CNNs <cit.>. Our work uses a simpler LLM method, in which we fine-tune the original model to directly generate author names, without the need for any additional coding or customization. In addition, we use a corpus with less clear delineation.§ DATA & METHODS We use a collection of Early Modern English drama—plays written in the 1500s and 1600s—gathered from two sources: the Folger Digital Anthology of Early Modern English Drama (EMED) <cit.> and the Shakespeare His Contemporaries corpus (SHC) <cit.>. We first gathered 367 plays from the EMED corpus and then added the 181 remaining plays from SHC.[Because the original Shakespeare His Contemporaries corpus is no longer publicly available, we have drawn these sources from a port of the original Github linked in the citation.] In order to remove features that may distinguish files from different corpora, we stripped all non-accent non-ASCII characters from the play texts and replaced them with standardized alternatives where appropriate. 
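As a small illustration of this character cleanup, the replacement step could look something like the sketch below. The specific replacement table and the decision to keep accented letters via their Unicode category are assumptions for illustration; the paper does not specify its exact mapping.

```python
# Illustrative character normalization (the exact replacement table used in the
# paper is not specified; this mapping is an assumption).
import unicodedata

REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",    # curly single quotes -> ASCII apostrophe
    "\u201c": '"', "\u201d": '"',    # curly double quotes -> ASCII quote
    "\u2013": "-", "\u2014": "--",   # en/em dashes
    "\u2026": "...",                 # ellipsis
}

def clean_text(text: str) -> str:
    text = "".join(REPLACEMENTS.get(ch, ch) for ch in text)
    # Keep ASCII characters and accented letters; drop other non-ASCII symbols.
    return "".join(ch for ch in text
                   if ord(ch) < 128 or unicodedata.category(ch).startswith("L"))
```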
Each XML file offered regularized spellings of non-standard words in the play. In creating our corpus, we used the regularized spellings from the EMED corpus that agreed with the greatest number of other sources when possible and the SHC regularizations otherwise. We chose to use the regularized text for two reasons. First, we did not want the model to be able to distinguish between authors based on spelling choices. Although differences in spelling may help the model identify authors, they are not indicative of the kinds of stylistic difference we are interested in studying. Second, we hypothesized that standardizing the play texts would make them appear more similar to modern text and thus improve the model's ability to accurately tokenize the input. Finally, we removed all line breaks from the texts as the different corpora do not consistently mark them.

We then split each play into speaker utterances to create a challenging but coherent identification problem. We separated any utterance longer than 450 words into multiple samples by splitting directly after every 450th word, regardless of sentence or line breaks. We then removed utterances with fewer than 5 words. Because authors sometimes develop distinctive speaker voices within a play, we hypothesize that separating the texts by speaker utterance adds an extra layer of difficulty to the attribution task.

We further reduced the training and testing corpora to maximize validity and statistical reliability. We removed all plays with fewer than 300 remaining utterances, plays by multiple authors, and plays by authors with fewer than three works in the corpus. Plays that were mislabeled as by a single author, but were actually of disputed (co-)authorship, were placed into a separate subcorpus. We were thus left with 253 plays by 23 authors in the primary corpus and 23 plays in the subcorpus. Further details about the corpora are listed in the appendix.

We used these corpora to assess the capability of several different authorship attribution methods to label short texts. Specifically, we tested logistic regression, SVMs with a linear kernel, cosine delta <cit.>, Pythia <cit.>, Falcon <cit.>, and several fine-tuned T5 models <cit.> of varying sizes. T5 is a generative large language model, and the pre-trained T5 models are optimized with a masked language modeling objective. Thus, in order to fine-tune T5 to perform authorship attribution, we created a series of input and output pairs where the inputs are formatted as an utterance with the author's name masked and the corresponding outputs are the same utterances with the author's name revealed (see Table <ref>). The masking tag was chosen because it follows the format of sentinel tags used during T5's pre-training regime. Initial experimentation found that using this tag provided good accuracy. It is important to note that the model could emit any string, but in practice the fine-tuned model only generated author names present in our corpus, except during a later application to a comparative dataset (Section <ref>).

[Figure: The size of each dataset used in the experiment by number of samples.]

One play from each author in our corpus was withheld from the training dataset. From the remaining n-1 plays by each author, we included 235 random samples in the training dataset and 15 samples in the validation dataset used for parameter tuning. We included another 50 samples from each of these plays in the final test dataset for which we report results.
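To make the utterance splitting and the masked-author input/output format described above concrete, a minimal sketch follows. The sentinel tag, prompt wording, and helper names are illustrative assumptions rather than the authors' exact formatting.

```python
# Illustrative data preparation (prompt wording and tag choice are assumptions).
MASK_TAG = "<extra_id_0>"   # assumed sentinel-style tag matching T5's pre-training format
MAX_WORDS, MIN_WORDS = 450, 5

def split_utterance(utterance: str) -> list:
    """Split an utterance after every 450th word and drop chunks under 5 words."""
    words = utterance.split()
    chunks = [" ".join(words[i:i + MAX_WORDS]) for i in range(0, len(words), MAX_WORDS)]
    return [c for c in chunks if len(c.split()) >= MIN_WORDS]

def make_pair(sample: str, author: str) -> tuple:
    """Input masks the author's name; output repeats the utterance with the name revealed."""
    prompt = f"{sample} This text was written by {MASK_TAG}."
    target = f"{sample} This text was written by {author}."
    return prompt, target

# Example usage with a hypothetical utterance.
pairs = [make_pair(chunk, "William Shakespeare")
         for chunk in split_utterance("But soft, what light through yonder window breaks? "
                                      "It is the east, and Juliet is the sun.")]
```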
Thus, we draw 300 distinct samples from each play that contributes to the training dataset. To the test dataset, we added 200 randomly selected samples from each of the plays withheld from training. We then created a separate test dataset containing 200 samples from each of the 23 plays in the disputed authorship corpus. Figure <ref> shows the relative size of the training, validation, and test datasets, as well as the held-out disputed set and a set of post-Early Modern plays used in Section <ref>.

We fine-tuned a small, base, and large version of T5 on the train and validation datasets, using batch sizes of 16, 8, and 4 respectively and running for 10 epochs. The additional fine-tuning hyperparameters are reported in Section A of the appendix. We then asked each model to predict labels for every sample in the primary test dataset. Finally, we used the best performing model, the fine-tuned large T5, to predict labels for the test dataset of disputed authorship plays.

We also experimented with fine-tuning two comparable decoder-only generative LLMs: Pythia with 1 billion parameters and Falcon with 1 billion parameters. The input and output strings described above were edited for these experiments so that the masking tag was placed at the end of each string. However, both models hallucinated extensively; Pythia produced 10,221 unique strings as author names and Falcon produced 9,180. Even when the first two words of each produced string, stripped of punctuation, were used as the predicted author name, Pythia and Falcon still performed considerably worse than the fine-tuned T5 models. We thus omit a further analysis of these models from the paper.

For our baseline comparisons we used only the original quotation without the added prompt prefix and T5 tags. The correct authors were included as labels. We ran two logistic regression models and two SVM models with linear kernels: one version of each used TF-IDF weighted word counts as features and the other used plain word counts. Each of these baselines was implemented using an off-the-shelf machine learning package. Cosine delta <cit.> is a popular improvement on Burrows delta <cit.> that represents texts using z-score weighted word frequencies for the n most frequent words and compares sample texts to the training corpus using cosine similarity. To run cosine delta, we used an adapted version of the package from <cit.> with a vocabulary size of 5,000 unigrams. We chose a vocabulary size of 5,000 because we found it optimized performance on the plays in the training data without over-fitting and decreasing performance on the withheld plays. Each sample was assigned to the author with the highest cosine similarity value. All baseline models were evaluated on the same test/train splits as the T5 models, and the TF-IDF, z-score, and word count values were fit on only the training dataset. For every model, we experimented with using combinations of unigrams, bigrams, and trigrams but found that using only unigrams resulted in the highest performance.

§ COMPARING MODELS

Results are shown in Table <ref> for the per-sample accuracy of each attribution method and the accuracy of the “majority vote” predicted author of each play. In order to display the effect of play-specific language such as character names and settings, we show predictive results for both held-out sections of plays and fully held-out plays. We begin by establishing that accurate author attribution is possible for this dataset using only the available information. It is known that authorship attribution is more reliable for longer samples.
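The cosine delta baseline described above can be sketched compactly: represent each text by z-scored relative frequencies of the most frequent training words, and attribute a sample to the author whose training profile is most similar under cosine similarity. This is a simplified re-implementation for illustration, not the adapted package used in the paper; the author-centroid profile construction and the smoothing constants are assumptions.

```python
# Simplified cosine delta (illustrative; not the adapted package cited above).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def fit_delta(train_texts, train_authors, n_mfw=5000):
    vec = CountVectorizer(max_features=n_mfw)
    counts = vec.fit_transform(train_texts).toarray().astype(float)
    rel = counts / counts.sum(axis=1, keepdims=True)          # relative word frequencies
    mu, sigma = rel.mean(axis=0), rel.std(axis=0) + 1e-12     # corpus mean / std per word
    z = (rel - mu) / sigma                                    # z-scored document vectors
    authors = sorted(set(train_authors))
    profiles = np.stack([z[np.array(train_authors) == a].mean(axis=0) for a in authors])
    return vec, mu, sigma, authors, profiles

def predict_delta(texts, vec, mu, sigma, authors, profiles):
    counts = vec.transform(texts).toarray().astype(float)
    rel = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    z = (rel - mu) / sigma
    # Cosine similarity between each sample and each author profile.
    sims = (z @ profiles.T) / (np.linalg.norm(z, axis=1, keepdims=True)
                               * np.linalg.norm(profiles, axis=1) + 1e-12)
    return [authors[i] for i in sims.argmax(axis=1)]
```

Play-level attribution by majority vote then simply takes the most frequent predicted author among a play's samples.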
To establish an upper bound on expected performance, we thus apply a cosine delta model to the full held-out text of each play rather than the short samples we use for all other experiments. This setting increases the length of attributed samples by a factor of 50 for plays in the training set and 200 for fully held-out plays. Cosine delta accurately attributes 94.9% of the long samples, performing better on plays in the training set than those fully held-out.

The fine-tuned large T5 model correctly attributes more short samples than any other method tested. It accurately labels 52.7% of held-out samples from plays included in the training dataset and 33.2% of samples from plays fully withheld from training. The large model performs substantially worse on the individual sample level than the cosine delta upper bound, but it only falls seven plays short of the upper bound when attributing plays to the most-predicted author. Although it is not surprising that results are better for partially-seen plays, the accuracy of both subsets exceeded our expectations. Because the text excerpts we use are very short, they frequently contain no named entities, and we thus conclude that attribution was not performed solely using this information.

Longer samples were more accurately attributed. The average length of correctly attributed samples in our primary test dataset was 36.7 words whereas the average length of misattributed samples was 20.7 words. Figure <ref> shows the distribution of sample lengths and the accuracy for each range. Accuracy exceeds 50% with only 20 words (random is ≈5%).

Model scale also affects accuracy. The large model performed better than the smaller models we compare it to, the base and small versions. We observe that the large model does 30.3% better on samples from plays included in training, 21.4% better on samples from plays withheld from training, and 63.4% better at attributing plays by majority vote than the small model. This effect may be due to the larger model's greater capacity to fit the particulars of author-specific language in fine-tuning, a greater capacity to represent linguistic variation in pre-training, or some combination of both.

It appears that the reason for the large improvement in play attribution accuracy with model size is a significant reduction in the assignment of large numbers of samples to 2–3 specific (incorrect) authors, which we call scapegoating. The small model assigns 60.5% of misattributed segments to the top two scapegoated authors (Thomas Heywood and William Shakespeare), the base model assigns 32.8% of misattributed samples to two authors (Heywood and James Shirley), and the large model only attributes 25.6% of misattributed samples to two authors (also Heywood and Shirley). Because misattributions both occur less frequently and are spread more evenly between authors in the larger models, it is more likely that the author of a play will have the majority of samples assigned to them.

Logistic regression and linear SVM prove to be strong baselines. However, the large T5 model performs 4.7% better on samples from plays included in training and 9.9% better on samples from withheld plays than linear SVM with TF-IDF values, the highest performing of these baselines. Since these models have access to the same data, the difference must either come from the LLM's ability to use arbitrary combinations of non-contiguous words or its access to patterns from pre-training. It is important to note that we do not know what data T5 saw during pre-training. However, because the small T5 model performs worse than logistic regression, it is unlikely that this is the sole source of improvement.
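The play-level majority vote and the accuracy-by-length breakdown reported above reduce to simple aggregations over per-sample predictions; a sketch with assumed column names and toy values is shown below.

```python
# Illustrative aggregation of per-sample predictions (column names are assumptions).
import pandas as pd

df = pd.DataFrame({
    "play":        ["Hamlet", "Hamlet", "Hamlet", "Volpone"],
    "true_author": ["Shakespeare", "Shakespeare", "Shakespeare", "Jonson"],
    "predicted":   ["Shakespeare", "Heywood", "Shakespeare", "Jonson"],
    "n_words":     [12, 31, 58, 22],
})

# Majority-vote play attribution: the most frequently predicted author per play.
play_votes = df.groupby("play")["predicted"].agg(lambda s: s.mode().iloc[0])
play_truth = df.groupby("play")["true_author"].first()
play_accuracy = (play_votes == play_truth).mean()

# Per-sample accuracy binned by sample length.
df["correct"] = df["predicted"] == df["true_author"]
length_bins = pd.cut(df["n_words"], bins=[0, 20, 40, 60, 450])
accuracy_by_length = df.groupby(length_bins, observed=True)["correct"].mean()
print(play_accuracy, accuracy_by_length, sep="\n")
```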
Logistic regression and linear SVM are also prone to scapegoating: all models assign over 35% of misattributed samples to two primary authors (Shakespeare and Shirley). Linear regression with TF-IDF values is a particularly egregious scapegoater, assigning over 50% of samples to Shakespeare and Shirley, over double the number thatassigns to Shirley and Heywood.In addition to the “merged samples” upper bound, we apply cosine delta to the short samples. This approach performs worse than all methods butand the simple baselines. However, cosine delta achieves high performance at the play level. Evenonly attributes 6 more of the 253 plays in the original corpus correctly. This, again, appears to be related to scapegoating. Cosine delta assigns the samples it misattributes relatively evenly between authors, only assigning 12.4% to the two most scapegoated authors (Richard Brome and Thomas Middleton). Thus, while cosine delta may be less accurate overall than T5, the way in which it fails is less skewed. Compared to T5, SVMs, and logistic regression, it is less likely to confidently misattribute a play.§ ACCURACY BY AUTHOR For the best-performing model, , accuracy varies considerably by author for both withheld plays and those included in training (Table <ref>). Authors with more plays in the training set are more accurately predicted for the held-out set; the Pearson correlation coefficient between these values is 0.65, with p < 10^-3. Themodel performs well on samples from many of the well-represented authors in our corpus. For 9 of the 23 authors, the model accurately attributes more than 50% of samples from plays included in training, well above random. The four authors for whom the model performs best on samples from included plays are Shakespeare (79.0%), Margaret Cavendish (74.9%), Shirley (61.7%), and John Lyly (60.6%). The model also accurately attributes many samples from the withheld plays by these authors: 72.0% of samples from Shakespeare[Antony and Cleopatra] are correctly attributed, 68.5% of samples from Cavendish[The Wooers], 58.5% of samples from Shirley[The Sisters], and 60.0% of samples from Lyly[Sappho and Phao].The reasons that the model attributes samples from these four authors with such high accuracy differ. Shirley and Shakespeare are the authors with the most and second-most plays in the dataset, with 31 and 30 plays in the corpus respectively. But Cavendish (8) and Lyly (12) are close to the average. Authors comparable to Cavendish in representation, such as George Chapman (11), Philip Massinger (13), and Thomas Middleton (13), all have accuracies below 50% for plays included in training. Similarly, authors comparable to Lyly such as John Ford (7) and John Marston (7) both have accuracies below 35% on included plays. Therefore, there is likely something distinctive about these two authors that makes them easier for the model to identify. Note that Cavendish is the only female author in the corpus (we were unable to include others), so we are not able to determine if her plays are distinctive because she has an individual style or if women authors of the period wrote differently from men.To further explore the cause of Cavendish and Lyly's distinctiveness, we compare each author's usage of the 100 most frequent words in the corpus. We first calculate z-scores comparing the frequency with which an author used each word to the mean frequency of that word's usage for all authors in the dataset. 
The frequencies are normalized by author so that no single author skews the distribution and we ensure that the set of 100 most frequent words contains no named entities. We then sum the absolute values of each author's z-scores to create a `uniqueness' metric. For a further exploration and validation of this metric, please see Section B of the appendix. The summed z-scores ranged from 47.5 to 138.2. The author with the most unique usage of common words by this metric is Cavendish, with a score of 138.2. The authors with comparable play counts to Cavendish each have considerably lower scores (Chapman: 50.57, Massinger: 69.9, Middleton: 67.7). The second most distinctive author is Thomas Killigrew, with a summed z-score of 108.7. Indeed, Killigrew has a very high accuracy on samples from included plays (52.0%) considering only four of his works are in the corpus. Lyly also has a relatively high summed z-score of 99.4, which is the fourth largest in the dataset. Again, this is higher than the scores of comparably represented authors (Ford: 68.3, Marston: 70.8), but not by as much. Notably, both Shirley and Shakespeare have low uniqueness scores by this metric. Shakespeare's is the lowest (47.5) and Shirley's is the 16th lowest (67.8). In addition, both authors have large vocabularies; Shakespeare has the largest vocabulary and Shirley the third-largest of all authors in the dataset. Both of these trends are likely related to their prominence within the training dataset, but they may still be meaningful. It is possible that Shakespeare and Shirley's uniqueness comes from using words that the other authors do not, instead of using common words uniquely. Overall, it seems that an author's usage of common words does affect how well the model can identify their writing. But it does not explain all of the variation seen in the dataset. §.§ Quote Misattribution There is also considerable variation in how the fine-tunedmodel misattributes quotes (Figure <ref>). Instead of assigning the misattributed quotes to authors randomly, it scapegoats two primary authors, Heywood and Shirley, and assigns them a disproportionate number. A confusion matrix depicting who quotes are misattributed to by original author demonstrates that the scapegoating phenomenon is not caused by confusion between specific pairs of authors (Figure <ref>). Instead, the misattributions to Heywood and Shirley are spread throughout the dataset. Again, it appears that contribution to the corpus is one factor that affects who samples are misattributed to. The Pearson's R correlation between the number of plays by an author in the dataset and the percentage of misattributed samples assigned to them is 0.86 with p < 10^-6. The outliers from this relationship appear to be Heywood, Shakespeare, Cavendish, and Ben Jonson (Figure <ref>).Examining authors' scores for the summed z-score metric again provides an indication of why some are scapegoated. Cavendish's high uniqueness score likely means it is more difficult for the model to mistake a given quote for hers. In contrast, Heywood, who has the most samples misattributed to him, has the second-lowest uniqueness score in the corpus, 49.4. He also has the second-largest vocabulary. The combination of these factors may help explain why he is so frequently scapegoated. Given a random quote from the test dataset, Heywood is more likely than most authors to have all of the words in the sample in his vocabulary. 
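The summed z-score uniqueness metric described above can be computed in a few lines; the sketch below assumes a table of per-author counts over the 100 most frequent (named-entity-free) corpus words, with the table layout being an assumption.

```python
# Illustrative computation of the summed z-score uniqueness metric.
import pandas as pd

def uniqueness_scores(word_counts: pd.DataFrame) -> pd.Series:
    """word_counts: rows = authors, columns = the 100 most frequent corpus words."""
    # Normalize within each author so prolific authors do not skew the distribution.
    freqs = word_counts.div(word_counts.sum(axis=1), axis=0)
    # z-score each word's frequency against its across-author mean and standard deviation.
    z = (freqs - freqs.mean(axis=0)) / freqs.std(axis=0)
    # Sum absolute z-scores per author: higher values indicate more distinctive usage.
    return z.abs().sum(axis=1).sort_values(ascending=False)
```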
Even if he doesn't, the model could have learned that he is more likely to use a broad range of words than other authors. In addition, common word usage in the average corpus sample is likely to resemble Heywood's. Shirley, who has the second-most misattributed quotes assigned to him, has the third-largest vocabulary and the 16th lowest uniqueness score. Thus, it appears that vocabulary size and common word usage are factors that affect to whom the model's misattributes quotes.However, there are two major outliers which indicate that these three factors—number of plays, common word usage, and vocabulary size—cannot be the only ones affecting scapegoating. These are Jonson and Shakespeare. Shakespeare has both the largest vocabulary and lowest uniqueness score of any author in the corpus, and yet samples are less likely to be misattributed to him than would be expected given his contribution to the dataset. Similarly, Jonson has the fourth-largest vocabulary and the 19th lowest uniqueness score, yet he also stands out as an outlier to whom fewer samples are misattributed than expected. We hypothesize that these outliers are caused by the model's pre-training. Of the authors included in our corpus, Shakespeare and Jonson are among the best-known today. The model is likely to have seen the writing of these authors during pre-training, and may therefore be more likely to correctly label data from these authors than would be expected given only the fine-tuning process.§ ACCURACY BY PLAY Interesting patterns and outliers emerge when we examine the model's play-by-play accuracy at attributing samples. There are several authors, like Cavendish, for whom the proportion of correctly attributed samples is largely consistent across all plays and some, like Brome, for whom there is considerable variation in the play-level accuracy but for whom there are no noticeable outliers. When there is an outlier among an author's plays, there is usually (though not always) an identifiable reason for why that play stylistically differs from the rest of the author's work. A representative example of this can be seen in Ben Jonson's plays. Samples from all but two of Jonson's plays are correctly attributed more than 40% of the time, including those from the withheld play (Figure <ref>). However, only 18% of samples from Every Man Out of His Humour and 26% of samples from The Case is Altered are attributed to Jonson, causing Every Man Out of His Humour to be misattributed by the model. Both of these plays differ from Jonson's typical work. Although Every Man Out of His Humour was advertised as a sequel to the well-received Every Man in His Humour, it is very different from the original play <cit.>. It was the longest play written for a public theater performance during the Elizabethan era and was very poorly received. After its failure, Jonson began writing for private theaters instead <cit.>. Thus, it is likely that the play marks a stylistic experiment within Jonson's work. Interestingly, this play is still correctly attributed by sample-level cosine delta with 16% of samples. The Case is Altered is unique because it is the earliest surviving of Jonson's plays. Jonson excluded it from his collected works when they were first published and, even when it was eventually published in 1609, his name was only included in some copies <cit.>. The Case is Altered therefore likely represents an early work which the author was not proud of, and from whose style he matured.Another outlier is Covent Garden by Thomas Nabbes. 
Although there is some play-level variation in Nabbes' attribution accuracy, Covent Garden is the only play for which the model correctly attributes less than 20% of samples and the only one it misattributes, assigning Brome 23% of samples (Figure <ref>). While this variation may be in part because Covent Garden was withheld from training, the underlying reason for the model's confusion is likely that Nabbes' Covent Garden was written as a direct response to Richard Brome's play The Weeding of Covent Garden, which is also included in the dataset. There are likely named entities that cross over between these two works and there may even be stylistic similarities. Sample-level cosine delta correctly attributes Covent Garden to Nabbes, but with only 13% of samples. It assigns 10% of samples to Brome.

We also see that the model performs poorly on the withheld plays of almost all authors with only three works in the corpus. For four of these five authors, less than 5% of samples from withheld plays are correctly attributed. The only deviation from this pattern is Robert Wilson; 19% of samples from the withheld Wilson play are correctly attributed. However, the withheld Wilson play is a prequel to one included in the training set. Thus, the model has more knowledge of this play than it would otherwise. It appears that including two plays by an author, or 470 samples, in the training data is not sufficient for the model to learn to extrapolate an author's style to an unseen text. This suggests a boundary for how much data may be needed for LLMs to be used for authorship attribution.

§.§ Disputed and Co-Authorship

We also asked the best-performing model to predict the author of samples from 23 plays which are of disputed authorship or which are believed to be co-authored, although they were labeled as written by a single author in the corpora we drew from. We determined which plays were co-authored or of disputed authorship using the Oxford Dictionary of National Biography, which provides a detailed biography for each author in this corpus.

[Figure: Percentage of samples attributed to authors for Sir Giles Goosecap.]

Overall, we found that the model greatly struggles to make clear attributions for plays that were co-authored or of disputed authorship unless Shakespeare was a contributor. The only exception is Sir Giles Goosecap, which is hypothesized to have been written by George Chapman. The model attributes 22% of samples from Sir Giles Goosecap to Chapman (Figure <ref>). This is comparable to two other plays by Chapman in the original dataset: All Fools, which was withheld from training and from which 13% of samples are correctly attributed, and The Widow's Tears, from which 24% of samples are correctly attributed. Thus, the model results support the overall attribution of this play to Chapman. Sample-level cosine delta does not support this attribution, assigning only 5.5% of samples to Chapman.

For no other non-Shakespearian play in this subcorpus is there enough evidence to argue for an attribution or co-attribution to authors in the dataset. It is particularly difficult to make assumptions about plays that are suspected to have been written by authors for whom the model's performance on the initial corpus is low. Even if these authors are only attributed a small proportion of samples from a play, these results are often comparable to those for their plays in the original dataset, meaning no conclusion can be reached.
Co-authorship also confused the model, particularly plays that were co-written with authors outside of the original corpus. The model often attributed a large proportion of quotes from these plays to Heywood. However, since there is no evidence that Heywood helped to author these plays, it is likely that this is an artifact of scapegoating. This trend also means that it is difficult to attribute plays to Heywood. 22% of samples from The Fair Maid of the Exchange, which Heywood is suspected to have co-authored, are attributed to him. However, a comparable proportion of samples are attributed to Heywood for multiple other plays in this dataset, meaning that we cannot use this as evidence for his authorship. This is a clear example of a case in which the model's misattribution patterns detrimentally affect its usability. The results are confusing even for plays from whom all of the suspected contributors are in the dataset, such as The Laws of Candy.Thus, the results for non-Shakespearian plays provide little evidence for or against certain writers' authorship. While sample-level cosine delta appears to have no clear advantage overin attributing samples from these plays, the two methods attribute samples in very different ways. In some cases,more strongly attributes a play to its suspected author, and in others sample-level cosine delta does. Often the models attributed samples to different subsets of authors.A very interesting pattern emerges when we look at the plays co-authored by Shakespeare in this test corpus. Over 50% of samples from each of the eight plays that Shakespeare contributed to are attributed to him, with little to no samples attributed to those who he supposedly co-authored the plays with, even if they are in the dataset. The most significant indication we see of another author's contribution to one of these plays is for The Two Noble Kinsmen. Here, only 54% of samples are attributed to Shakespeare and 8.5% are attributed to Fletcher, with whom he wrote the play. However, this is still not a strong signal of Fletcher's involvement. This pattern again suggests that themodel recognizes Shakespeare from pre-training. If the model had seen these plays attributed solely to Shakespeare during pre-training, as is likely, it may help explain why it assigns them so confidently to Shakespeare despite the influence of other authors. In contrast, sample-level cosine delta never assigns more than 25% of samples from any of these plays to Shakespeare, and the presence of his theorized co-authors is much more prominent in the results.§ STYLISTIC DEVELOPMENT OVER TIME In addition to the narrower task of author attribution, a measure of stylometric similarity can also be used to quantify authors' influence. To study shifts in dramatic style over time, we created a comparative corpus of plays written between the 14th and 18th centuries. In this corpus, we included 74 plays gathered from the EMED and SHC corpora not written by authors in our training dataset. To these, we added 67 additional plays from Project Gutenberg (see Table <ref>). We performed the same utterance separation and splitting with these plays as with the original corpus and formatted the input and output pairs identically. Further details can be found in the appendix. Themodel fine-tuned on the original dataset was asked to predict authors for 200 samples from each of these plays. 
The percentages we report in Figure <ref> are averaged by the original text author instead of by play; for example, we calculate the percentage of samples attributed to Heywood from each author writing in the 1500s and then average those percentages to reach the depicted value. This was to prevent any single writer whose style may somehow mimic that of an author in our original dataset from skewing the results. In the 1500s and 1600s, the greatest proportion of samples are assigned to Thomas Heywood. This aligns with the scapegoating trends we saw in the original corpus. However, starting in the 1700s the greatest proportion of samples are assigned to Shakespeare, and this value increases in the 1800s and 1900s (Figure <ref>), for which nearly half of the samples from each author were attributed to Shakespeare. In addition, if we attribute plays to an author by majority vote, no plays in the 1500s are assigned to Shakespeare, but 97% of plays are attributed to him by the 1900s. This result does not imply that 20th century plays are similar to Shakespeare, only that of the Early Modern authors known to the model, Shakespeare is both distinct and increasingly more similar to more recent plays than any other Early Modern author.§ CONCLUSION Generative large language models provide a promising tool for stylometry. While simpler methods such as cosine delta remain more accurate for larger text segments, we find that LLMs, particularly at larger scales, are remarkably effective at predicting the author of a difficult corpus of short 5–450 word text segments, which are more aligned with LLMs' shorter input windows.In addition to quantitative power, LLM-based stylometric analysis provides evidence for a range of interpretive arguments both when it succeeds (such as with Margaret Cavendish) as well as when it fails (both in scapegoating and in the stylistic differences in the work of Ben Jonson). Because T5 demonstrates an ability to recognize style, it may prove useful in other situations where recognizing implicit signals is key such as tracking genre differences and stylistic movements.There are also substantial practical advantages to using fine-tuned LLMs: despite their complexity and computational intensity, generative LLMs provide a remarkably simple text-in/text-out user interaction that requires no specialized software.However, there are several disadvantages to using pre-trained LLMs for authorship attribution. They are more computationally intensive than more traditional methods of authorship attribution and the content and effect of pre-training corpora are difficult to assess. In addition, the ways in which the model confidently misattributes texts means that it is more likely to produce misleading results than traditional attribution methods. Given the differences that emerged between the performance of cosine delta and the fine-tuned LLM, using the two methods in conjunction may provide more accurate results than using either method separately. Due to the weaknesses we have observed, however, we recommend against using LLMs for authorship attribution in forensic or legal settings. We would like to thank Federica Bologna, Katherine Lee, Noam Ringach, Rosamond Thalken, Andrea Wang, Matthew Wilkens, and Gregory Yauney for their thoughtful feedback. 
This work was supported by the NEH project AI for Humanists and Cornell University's Hopcroft Fellowship.

§ T5 FINE-TUNING HYPERPARAMETERS

Parameter | Value
Evaluation Strategy | Epoch
Learning Rate | 2x10^-5
Weight Decay | 0.01
Save Total Limit | 3

§ EXAMINATION OF Z-SCORE UNIQUENESS

To explore the validity of our uniqueness metric, we ran 1,000 synthesized trials to examine what the expected correlation between dataset contribution and the uniqueness metric would be given randomly assigned plays. Concretely, in each trial we randomly assigned plays to synthetic authors in the same proportions they are assigned to authors in our true dataset. We then calculated the Spearman's rho correlation between number of plays and uniqueness values for each trial. We plot the binned synthetic correlations and the true correlation from our dataset in Figure <ref>. The true correlation from our dataset, depicted with the vertical red line, is 0.2 away from any value reached in our synthesized trials. Thus, it seems that there are some notable deviations from the expected trend in our dataset. To further explore this relationship, we averaged the uniqueness values for each synthetic author over all trials and plotted these values as well as the true values in Figure <ref>. It is clear that the true uniqueness values frequently deviate from the expected relationship between uniqueness and number of plays. In particular, Margaret Cavendish and John Lyly have much higher uniqueness values than expected given the number of plays they contribute to the training dataset. Because of this, we believe that this metric represents a valuable measure of uniqueness and does not simply reemphasize the impact of contribution to the training corpus.

§ ORIGINAL CORPUS CONTENTS

All plays in our original training and test corpora by author. The withheld plays are bolded and italicized.
|p0.2 | p0.75| Author Plays Richard Brome The Northern Lass, The City Wit or The Woman Wears the Breeches, The Queen's Exchange (The Royal Exchange), The Weeding of Covent Garden or The Middlesex Justice of Peace, The Novella, The Queen and Concubine, The New Academy or The New Exchange, The Sparagus Garden (Tom Hoydon o' Tanton Deane), The English Moor or The Mock Marriage, The Antipodes, The Damoiselle or The New Ordinary, A Mad Couple Well Matched, The Lovesick Court or The Ambitious Politic, The Court Beggar, A Jovial Crew or The Merry Beggars0pt1emMargaret Cavendish The Lady — Part 1, The Lady — Part 2, The Unnatural Tragedy, Wit's Cabal — Part 1, Wit's Cabal — Part 2, Love's Adventures — Part 1, Love's Adventures — Part 2, Several Wits, The Matrimonial Trouble — Part 1, The Matrimonial Trouble — Part 2, The Religious, The Wooers George Chapman The Blind Beggar of Alexandria, A Humorous Day's Mirth, All Fools, The Gentleman Usher, May Day, The Widow's Tears, Bussy D'Ambois, Monsieur D'Olive, Caesar and Pompey (The Wars of Caesar and Pompey), The Tragedy of Charles Duke of Byron, The Revenge of Bussy D'Ambois William Davenant The Cruel Brother, Albovine King of the Lombards, The Just Italian, The Wits Thomas Dekker Old Fortunatus, Satiromastix or The Untrussing of the Humorous Poet, The Honest Whore — Part 2, Match Me in London, If It Be Not Good the Devil Is in It John Fletcher The Faithful Shepherdess, The Woman's Prize or The Tamer Tamed, Bonduca, Valentinian, The Mad Lover, The Chances, The Loyal Subject, The Humorous Lieutenant (Generous Enemies, Demetrius and Enanthe), Women Pleased, The Island Princess, The Wild Goose Chase, The Pilgrim, Rule a Wife and Have a Wife, A Wife for a Month John Ford The Lover's Melancholy, The Broken Heart, 'Tis Pity She's a Whore, Love's Sacrifice, Perkin Warbeck, The Fancies Chaste and Noble Henry Glapthorne The Hollander, Ladies' Privilege, Wit in a Constable Robert Greene Friar Bacon and Friar Bongay, The Scottish History of James the Fourth, Orlando Furioso Thomas Heywood The Four Prentices of London, Edward IV — Part 1, Edward I — Part 2, The Royal King and the Loyal Subject, How a Man May Choose a Good Wife from a Bad, A Woman Killed with Kindness, If You Know Me Not You Know Nobody or The Troubles of Queen Elizabeth — Part 1, If You Know Me Not You Know Nobody or The Troubles of Queen Elizabeth — Part 2, The Fair Maid of the West or A Girl Worth Gold — Part 1, The Wise Woman of Hogsdon, The Rape of Lucrece, The Golden Age or The Lives of Jupiter and Saturn, The Brazen Age, The Iron Age — Part 1, The Iron Age — Part 2, The English Traveller, Love's Mistress, A Challenge for Beauty Ben Jonson The Case is Altered, Every Man in His Humour, Every Man Out of His Humour, Cynthia's Revels, Poetaster, Sejanus His Fall, Volpone, The Alchemist, Epicoene or The Silent Women, Catiline His Conspiracy, Bartholomew Fair, The Devil is an Ass, The Staple of News, The New Inn Thomas Killigrew The Prisoners, The Princess, The Parson's Wedding, Claricilla John Lyly Sappho and Phao, Campaspe (Alexander, Campaspe, and Diogenes), Gallathea, Endymion, Midas, Love's Metamorphosis, Mother Bombie, The Woman in the Moon [l]Christopher Marlowe Tamburlaine the Great — Part 1, Tamburlaine the Great — Part 2, The Jew of Malta, Doctor Faustus, Edward the Second, THe Massacre at Paris John Marston Antonio and Mellida, Antonio's Revenge, Jack Drum's Entertainment, What You Will, The Malcontent, Parasitaster or The Fawn, The Dutch Courtesan Philip Massinger The City Madam, The 
Duke of Milan, The Maid of Honour, The Bondman, The Unnatural Combat, The Renegado or The Gentleman of Venice, A New Way to Pay Old Debts, The Roman Actor, The Great Duke of Florence, The Picture, The Emperor of the East, The Guardian, The Bashful Lover Thomas May The Heir, Cleopatra — Queen of Egypt, Julia Agrippina — Empress of Rome Thomas Middleton The Phoenix, Michaelmas Term, A Trick to Catch the Old One, A Mad World My Masters, The Puritain or The Widow of Watling Street, Your Five Gallants, The Widow, The Mayor of Quinborough, A Chaste Maid in Cheapside, More Dissemblers Beside Women, Women Beware Women, A Game at Chess Thomas Nabbes Covent Garden, Tottenham Court, Hannibal and Scipio, The Bride, The Unfortunate Mother [l]William Shakespeare The Comedy of Errors, Richard III, The Taming of the Shrew, The Two Gentlemen of Verona, Romeo and Juliet, Richard II, King John, The Merchant of Venice, Henry IV — Part 1, Henry IV — Part 2, Much Ado About Nothing, Henry V, Julius Caesar, As You Like It, Twelfth Night, Hamlet, Merry Wives of Windsor, Troilus and Cressida, Othello, Measure for Measure, Macbeth, King Lear, Antony and Cleopatra, Coriolanus, Cymbeline, The Tempest James Shirley The School of Compliment, The Maid's Revenge, The Wedding, The Witty Fair One, The Grateful Servant, The Humorous Courtier, Love's Cruelty, The Ball, The Traitor, Hyde Park, Changes or Love in a Maze, The Bird in a Cage (The Beauties), The Young Admiral, The Gamester, The Opportunity, The Example, The Lady of Pleasure, The Coronation, The Duke's Mistress, The Royal Master, The Doubtful Heir, The Constant Maid, The Gentleman of Venice, Saint Patrick for Ireland — Part 1, The Politician, The Arcadia, The Imposter, The Sisters, The Cardinal, The Brothers, The Court Secret John Webster The White Devil (Vittoria Corombona), The Duchess of Malfi, The Devil's Law Case (When Women Go to Law the Devil is Full of Business) Robert Wilson The Three Ladies of London, The Three Ladies of London, The Cobbler's Prophecy§ DISPUTED AND CO-AUTHORED CORPUS CONTENTS All plays in the disputed and co-authored corpus by the author they were attributed to in the original corpora.|p0.2 | p0.75| Labeled Author Plays George Chapman Sir Giles Goosecap, Two Wise Men and All the Rest Fools Thomas Dekker Patient Grissel, The Wonder of a Kingdom John Ford The Laws of Candy, The Queen Henry Glapthorne Revenge for Honor (The Parricide) Robert Greene George a Green the Pinner of Wakefield Thomas Heywood The Fair Maid of the Exchange John Marston Histriomastix or The Player Whipped, The Insatiate Countess Thomas Middleton Anything for a Quiet Life, The Family of Love [l]William Shakespeare Henry VI — Part 1, Henry VI — Part 2, Henry VI — Part 3, Henry VIII, Pericles — Prince of Tyre, Timon of Athens, Titus Andronicus, The Two Noble Kinsmen John Webster Appius and Virginia, The Thracian Wonder § COMPARISON CORPUS CONTENTS All plays in the comparative corpus by author. Plays that were attributed to Shakespeare by the model are bolded and italicized.|p0.2 | p0.75| Author Plays Robert Armin The Two Maids of More-Clacke Thomas Baker The Fine Lady's Airs [l]James Nelson Barker The Indian Princess J. M. 
Barrie Dear Brutus, Peter Pan Lording Barry Ram Alley Barnabe Barnes The Devil's Charter Clifford Bax Square Pegs Francis Beaumont The Knight of the Burning Pestle Dabridgecourt Belchier Hans Beer-Pot (See Me and See Me Not) Arnold Bennett The Great Adventure William Berkeley The Lost Lady [l]Hugh Henry Brackenridge The Battle of Bunkers Hill Alexander Brome The Cunning Lovers Robert Browning A Blot in the Scutcheon Henry Burnell Landgartha Lodowick Carlell The Deserving Favorite [l]Richard Claude Carton Lady Huntworth's Experiment William Cartwright The Royal Slave William Cavendish The Country Captain, The Variety Susanna Centlivre The Busie Body, The Perjur'd Husband [l]Robert Chamberlain The Swaggering Damsel George Coleman John Bull Abraham Cowley Love's Riddle Aleister Crowley Household Gods Robert Daborne A Christian Turned Turk John Denham The Sophy Thomas Drue The Duchess of Suffolk William Dunlap Andre Lord Dusany If Nathan Field Amends for Ladies, A Woman is a WeathercockJasper Fisher Fuimus Troes (The True Trojans) Phineas Fletcher Sicelides Ralph Freeman Imperiale John Galsworthy A Bit O' Love, The Eldest Son, A Family Man, The First and the Last, The Foundations, The Fugitive, Joy, Justice, The Little Dream, The Little Man, Loyalties, The Mob, The Skin Game, Strife Thomas Godfrey The Prince of Parthia Johann Wolfgang von Goethe Faust John Gough The Strange Discovery Fulke Greville Alaham John Johns Adrasta William Kemp A Knack to Know a Knave Henry Killigrew The Conspiracy John Kirke The Seven Champions of Christendom [l]James Sheridan Knowles The Love Chase Thomas Kyd Soliman and Perseda, The Spanish Tragedy (Hieronimo is Mad Again) Maurice Kyffin Andria William Habington The Queen of Aragon Samuel Harding Sicily and Naples Joseph Harris The City Bride William Haughton Englishmen for My Money Peter Hausted The Rival Friends William Hawkins Apollo Shroving [l]Gorges Edmond Howard The Female Famester Henrik Ibsen A Doll's House, Hedda Gabler Elizabeth Inchbald Such Things Are, The Widow's Vow Jerome K. Jerome Fanny and the Servant Problem, Woodbarrow Farm [l]Henry Arthur Jones Dolly Reforming Herself, Michael and His Lost Angel D. H. Lawrencee Touch and Go John Leacock The Fall of British Tyranny Thomas Lodge The Wounds of Civil War Samuel Low The Politician Out-Witted Sir William Lower The Phoenix in Her Flames Thomas Lupton All for Money James Mabbe The Spanish Bawd (Calisto and Meliboea) Charles Macklin The Covent Garden Theatre Gervase Markham The Dumb Knight, Herod and Antipater Shakerley Marmion A Fine Companion, Holland's Leaguer John Mason The Turk Jasper Mayne The City Match Edward Moore The Gamester Thomas Morton Speed the Plough Arthur Murphy The Grecian Daughter Thomas Newman The Andrian Woman (Andria), The Eunuch Mordecai Manuel Noah She Would Be a Soldier John O'Keeffe Wild Oats Henry Nevil Payne The Fatal Jealousie Arthur Pinero The Big Drum, The 'Mind the Paint' Girl Henry Porter The Two Angry Women of Abingdon Thomas Randolph The Jealous Lovers Thomas Rawlins The Rebellion Nathaniel Richards Messalina — The Roman Empress Robert Rogers Ponteach: The Savages of America Edmond Rostand Cyrano de Bergerac Samuel Rowley The Noble Spanish Soldier (The Noble Soldier or A Contract Broken Justly Revenged), When You See Me You Know Me (Henry the Eighth) Joseph Rutter The Shepherds' Holiday S. S. The Honest Lawyer W. S. 
Thomas Lord Cromwell Edward Sharpham Cupid's Whirligig, The Fleer [l]George Bernard Shaw Arms, The Devil's Disciple, Fanny's First Play, Man and Superman John Stephens Cynthia's Revenge William Stevenson Gammer Gurton's Needle Algernon Charles Swinburne The Duke of Gandia, Erechtheus, Rosamund Robert Tailor The Hog Hath Lost His Pearl Brandon Thomas Charley's Aunt Thomas Tomkis Albumazar, Lingua or The Combat of the Tongue and the Five Senses of Superiority Cyril Tourneur The Atheist's Tragedy Royall Tyler The Contrast Nicolas Udall Ralph Roister Doister George Wapull The Tide Tarrieth No Man Oscar Wilde Vera, A Woman of No Importance George Wilkins The Miseries of Enforced Marriage Nathaniel Woodes The Conflict of Conscience Robert Yarington Two Lamentable Tragedies Richard Zouch The Sophister
http://arxiv.org/abs/2310.18454v1
{ "authors": [ "Rebecca M. M. Hicke", "David Mimno" ], "categories": [ "cs.CL", "cs.LG" ], "primary_category": "cs.CL", "published": "20231027200457", "title": "T5 meets Tybalt: Author Attribution in Early Modern English Drama Using Large Language Models" }
Acronyms: RV — random variable; ILQR — Iterative Linear Quadratic Regulation; ILQGames — Iterative Linear-Quadratic Games; SILQGames — Stackelberg Iterative Linear-Quadratic Games; SLF — Stackelberg Leadership Filter; LQ — linear-quadratic.

Leadership Inference for Multi-Agent Interactions

Hamzah I. Khan^1 and David Fridovich-Keil^1
^1Aerospace Engineering and Engineering Mechanics, University of Texas at Austin
{hamzah, dfk}@utexas.edu

Effectively predicting intent and behavior requires inferring leadership in multi-agent interactions. Dynamic games provide an expressive theoretical framework for modeling these interactions. Employing this framework, we propose a novel method to infer the leader in a two-agent game by observing the agents' behavior in complex, long-horizon interactions. We make two contributions. First, we introduce an iterative algorithm that solves dynamic two-agent Stackelberg games with nonlinear dynamics and nonquadratic costs, and demonstrate that it consistently converges. Second, we propose the SLF, an online method for identifying the leading agent in interactive scenarios based on observations of the game interactions. We validate the leadership filter's efficacy on simulated driving scenarios to demonstrate that the SLF can draw conclusions about leadership that match right-of-way expectations.

This work was supported by the National Science Foundation under Grant No. 2211548.

§ INTRODUCTION
During daily commutes, drivers assert themselves in running negotiations with other road users in order to reach their destinations quickly and safely. Right-of-way expectations inform these assertions between road users.

Consider the passing lane shown in <ref>. Agent 2 (blue) initially follows behind agent 1 (red), and we may intuitively perceive 1 as the leader. If instead 2 overtakes 1, the scenario seems to imply a reversal of leadership, with 2 in front and 1 behind, as in the inset of <ref>. However, this intuition is vague and premature. If 2 tailgates 1 or otherwise behaves aggressively, 1 might speed up or yield to 2 out of caution. However, aggressive behavior does not necessarily indicate leadership, as 1 could also react to 2 tailgating by slowing down and relying on the knowledge that 2 will not risk a collision. Here, any simple intuition of the leadership dynamics falls short. Depending on each driver's safety and comfort tolerances, either 1 or 2 may be the leader. Hence, deciphering leadership dynamics requires understanding common expectations, agent incentives, and other agents' actions. Successfully doing so can improve autonomous intent and behavior prediction for motion planning, as demonstrated by <cit.>.

We turn to optimal decision making and game theory for tools to analyze interactive scenarios. Stackelberg games <cit.>, also known as leader-follower games, stand out because they model interactions with clear leadership hierarchies. In a Stackelberg game, a leader selects its strategy to influence the follower's response. Each strategy in a Stackelberg solution satisfies leadership conditions that describe how the leader's behavior induces the follower to act. Additionally, solving Stackelberg games results in trajectories that we can use for model-predictive control. Using these attractive properties, we propose a leadership inference technique for multi-agent scenarios like that of <ref>.
To this end, we first contribute SILQGames, an algorithm for solving general Stackelberg games, and we show that it converges for games with nonlinear dynamics and general costs.Second, we propose the SLF (SLF) to infer leadership over time in two-agent interactions based on observations of the interacting agents. We validate that it infers the correct Stackelberg leader in two-agent games and report results on simulations of driving scenarios. § RELATED WORK Leadership Inference. Many prior works develop leadership inference techniques, particularly for robotic swarms and animal sociology. As an abstract concept, leadership is challenging to measure <cit.>. Leadership models prespecify particular agent(s) as leaders that influence group motion. Swarm applications <cit.> often assume the Reynolds flocking model <cit.>.Animal sociology applications define leadership models based on principal component analysis <cit.> or stochastic inference <cit.> with hand-selected domain-specific features.By contrast, we explicitly frame these interactions in terms of optimal decision making and game theory and therefore utilize the Stackelberg leadership model. Defining a Stackelberg game requires prespecifying a leader and solving one produces equilibrium trajectories for each agent. Hence, by associating a particular leader with solution trajectories in a principled manner, the Stackelberg leadership model allows for modeling leadershipover long-horizon interactions without hand-crafted heuristics. Stackelberg Games for Motion Planning. Recent advances <cit.> investigate Stackelberg models of leadership for interactive scenarios involving self-driving vehicles.In particular, <cit.> incorporate leadership as a latent variable by solving open-loop Stackelberg games and comparing expected leader and follower behaviors with observed agent behaviors.Our method generalizes this underlying approach to Stackelberg leadership by modeling a joint distribution over game state and leadership. We solve feedback Stackelberg games for richer access to leadership information.Solving Dynamic Games. Identifying computationally efficient game-solving techniques with theoretical guarantees of finding equilibria remains an open area of research. Most existing game-solving algorithms consider Nash games, which find equilibria for which each actor is unilaterally optimal given fixed opponent strategies.These algorithms <cit.> generally use Newton-based schemes based on iterative and dynamic programming algorithms that have been widespread for decades <cit.>. We note two axes on which such approaches differ: first, these approaches solve either open-loop Nash games <cit.> or feedback Nash games <cit.>. Second, these algorithms either reduce the game to a simpler problem <cit.> or directly solve the game <cit.>. In particular, <cit.> introduce ILQGames, an iterative method that approximates solutions to nonlinear dynamic, nonquadratic cost feedback Nash games by repeatedly solving LQ approximations until convergence. Convergence analysis of these methods is subtle, as described in depth by <cit.>.We utilize a similar approach as ILQGames to solve feedback Stackelberg games. § PROBLEM FORMULATION Two agents, 1 and 2 (e.g., autonomous cars), operate in a shared space with state t∈^ at each timestep t ∈𝕋≡{1, 2, …, } with sampling period Δ t. Agent i has controls (i)t∈ℝ^i, and the state evolves according tot+1 = f_t(t, (1)t, (2)t).We denote the sequence of states as 1: and the sequence of i's controls as (i)1:. 
We assume that f_t is continuous and continuously differentiable in t, (1)t, (2)t. i's objective,(i)(1:, (1)1:, (2)1:) ≡∑_t=1^ g^(i)_t(t, (1)t, (2)t),describes its preferences in a given scenario. We model the objective <ref> as the sum of stage costs g^(i)_t, assumed to be twice differentiable in t, (1)t, (2)t. Each agent i minimizes its objective with respect to its controls (i)1:.§.§ Background: Feedback Stackelberg Games Stackelberg games model leadership as a mismatch of information.Intuitively and without loss of generality, the leader 1 commits to a strategy and communicates it to the follower 2. Given this relationship, the leader carefully selects its strategy in order to influence the follower. Formally, a Stackelberg equilibrium {(1*)1:, (2*)1:((1*)1:) } is a tuple of optimal control trajectories for both agents. The function (2*)1:((1)1:) highlights that 2's optimal strategy depends on the leader's (possibly non-optimal) chosen strategy.Using an abuse of notation, we omit the state argument of the objective (i), and define γ((i)t) ≡ [(i)1:t-1, (i)t, (i*)t+1:].We define the set of all optimal follower responses at time t, U^(2*)_t((1)t) ⊂ℝ^2, as U^(2*)_t((1)t) ≡_(2)t(2)( γ( (1)t), γ((2)t) ). We assume |U^(2*)_t((1*)t)| = 1, i.e., that an optimal leader strategy results in a unique optimal follower response at each time t. Under this assumption, the set of control trajectories for all agents forms a feedback Stackelberg equilibriumif, at every time t ∈𝕋, the optimal trajectories satisfy (1)(γ((1*)t), γ((2*)t) )= min_(1)tmax_(2)t ∈U^(2*)_t((1)t)(1)( γ((1)t), γ((2)t)).At equilibrium, the leader uses Stackelberg condition <ref> to guide the follower towards its least bad option for the leader.Stackelberg games are generally non-cooperative, meaning that agents do not coordinate but plan based on observations of the game state. Agents in open-loop games observe only the initial game state, whereas in feedback games, agents adjust their control inputs after observing the state at each time step, producing complex, temporally-nested game constraints <ref>.LQ Stackelberg games have analytic solutions given strictly convex costs <cit.>.We denote i(t) as the -horizon Stackelberg game solved from state t with leader i. For a more detailed treatment of Stackelberg equilibria and solving LQ Stackelberg games, refer to Başar and Olsder <cit.>. §.§ Stackelberg Leadership Filtering We seek to describe a filter that identifies a leadership belief for i based on observations. To this end, let us first define t∈{1, 2} to be a binary RV indicating the leader at time t. Next, we state our assumptions about the observability of the game. We assume the state t is observable via noisy measurement t∼𝒩(h(t; t), t) withknown covariance matrix t≻ 0 and measurement model h. We also assume that control inputs (i)t for each agent i are directly observable.Next, recall that each agent has an objective that describes its preferences. For this work, we assume all agent objectives {(i)} are known a priori. In general settings, we note that techniques exist <cit.> to infer agent objectives from noisy observations, though further work may be required to confirm the computational tractability of simultaneously inferring leadership and objectives. 
Finally, we formally define the leadership belief for t as b(t) = t | 1:t.§ INFERRING LEADERSHIP We propose SILQGames (SILQGames), which iteratively solves nonlinear dynamic, general cost (non-LQ) Stackelberg games with continuous and differentiable dynamics and costs.We use SILQGames in the SLF (SLF, <ref>) as part of the Stackelberg leadership model. Our method infers the leading agent of a two-agent interaction from observations. §.§ Iteratively Solving Stackelberg Games At a high level, SILQGames (<ref>) iteratively solves LQ approximations of Stackelberg games (<ref>), updates the control trajectories using the solutions to these approximated games (<ref>), and terminates if the updated trajectory satisfies a convergence condition (<ref>). Upon successful convergence, the resulting trajectory constitutes an approximate Stackelberg equilibrium.This type of approach also yields approximate equilibrium solutions in the Nash case, although establishing precise error bounds remains an open problem <cit.>. We expect a similar result for SILQGames, though it is beyond the scope of this work.Inputs.SILQGames accepts an initial state 1 and a leader i. It accepts a set of all agents' nominal control trajectories {(i), k=01:}. We produce a nominal state trajectory 1:^0 by applying the nominal controls from 1 (<ref>).LQ Game Approximation. At each iteration k,we first linearize the dynamics (<ref>) and take second-order Taylor series approximations of the costs (<ref>) about the previous iteration's state and control trajectories, 1:^k-1, (1),k-11:, (2),k-11:: 0.95 2A_t= ∇_ f_t,Q^(i)_t= ∇^2_g^(i)_t,R^ij_t= ∇^2_(j)(j)g^(i)_t,B^(i)_t= ∇_(i) f_t,q^(i)_t= ∇_g^(i)_t,r^ij_t= ∇_(j)g^(i)_t. We define the state and control variables for our LQ game approximation as deviations from the previous state and control trajectories:1:k = 1:^k - 1:^k-1 and δu^(i), k_1: = u^(i), k_1: - u^(i), k-1_1:. We then approximate the game as an LQ problem with linear dynamics and quadratic costs t+1k ≈A_t tk + ∑_i∈{1, 2} B^(i)_t itk ,g^(i)_t(·, ·, ·) ≈1/2[ 2 g^(i)_t(t^k-1, (1), k-1t, (2), k-1t) + . ( Q^(i)_t tk+ 2q^(i)_t )^tk+ ∑_j=1^N ( R^ij_t jtk+ 2r^ij_t )^.jtk ]. We exclude mixed partials ∇_(i), ∇_(i)(j) due to their rarity in cost structures of relevant applications, but they can be included if needed. In practice, Q^(i)_t and R^ij_t may not be positive definite. Recall that LQ Stackelberg games have unique global solutions given strictly convex costs. Thus, we enforce positive definiteness, and thus convexity, in the quadratic cost estimates by adding a scaled identity matrix ν I to all Q^(i)_t and R^ij_t terms. This addition increases each eigenvalue by ν∈ℝ_+ <cit.>, so a sufficiently large choice of ν guarantees convexity. Finally, we solve the LQ game analytically (<ref>) <cit.>. Strategy Update. After approximating the game as LQ and solving it, we update the control strategy (<ref>). The analytic solution to the LQ game consists of gain and feedforward terms P^(i), k_1:, p^(i), k_1: which constitute an affine feedback control law that produces strategy δû^(i),k_t= -P^(i),k_t δt^k - p^(i),k_t.Following standard procedures in ILQR <cit.>, we define update rule (i), kt = (i), k-1t - P^(i),k_ttk - α_k p^(i),k_t, where α_k ∈ (0, 1] is an iteration-varying step size parameter. As α_k approaches 0, the new iterate (i), kt approaches the previous iterate (i), k-1t. 
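A minimal sketch of these two steps is given below (illustrative code, assuming the derivative terms in <ref> have already been evaluated and that the gain and feedforward terms P, p come from the analytic LQ Stackelberg solve); the first function corresponds to the scaled-identity shift used to enforce convexity, and the second rolls out the updated affine strategies for both agents:

import numpy as np

def convexify(H, nu):
    # Shift every eigenvalue of a (possibly indefinite) Hessian estimate up
    # by nu so the quadratic cost approximation is strictly convex.
    return H + nu * np.eye(H.shape[0])

def forward_pass(x1, x_prev, u_prev, P, p, alpha, f):
    # Roll out the dynamics with both agents' updated affine strategies,
    # measuring state deviations from the previous iterate's trajectory.
    xs, us = [x1], [[], []]
    for t in range(len(u_prev[0])):
        dx = xs[-1] - x_prev[t]
        for i in range(2):
            us[i].append(u_prev[i][t] - P[i][t] @ dx - alpha * p[i][t])
        xs.append(f(xs[-1], us[0][-1], us[1][-1]))
    return np.array(xs), us

In this sketch, alpha scales only the feedforward term p, so small values leave the previous controls nearly unchanged.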
Likewise, as α_k approaches 1, we adjust our previous iterate by the full step δû^(i),k_t.In single-agent settings, methods like ILQR commonly apply a line search for step size selection. However, this approach requires a detailed description of complex, temporally-nested feedback game constraints <ref>.Instead, SILQGames decays the step size (<ref>) with configurable decay factor β∈ (0, 1) and minimum step size α_min. Initial step size α_1 = 1 unless otherwise specified (<ref>).Convergence Criterion. Optimization algorithms commonly use first-order optimality conditions <cit.> to test for convergence, and incorporating a line search guarantees monotone improvement in such a convergence metric. As with a line search, however, using first-order optimality conditions becomes unwieldy due to the feedback game constraints <ref>.In practice, we define a convergence criterion as a functionof the current and next iterate's states: Conv(1:^k, 1:^k-1) = 1:^k - 1:^k-1_∞.We compute 1:^k based on the proposed controls resulting from update step <ref>. We say SILQGames converges if the metric value falls below a threshold τ. SILQGames stops after a maximum number of iterations M_iter, irrespective of convergence. We expect SILQGames to converge, though we do not expect monotone decrease in the convergence criterion as a large step size may occasionally overshoot the Stackelberg equilibrium. Oscillations in the convergence metric can occur when step sizes are consistently too large and may indicate that α_min or β should be reduced. Please refer to our results in <ref> for further details. Computational Complexity. The per-iteration computational complexity analysis of <cit.> holds almost identically for SILQGames. Since the number of agents is a constant (i.e., 2), the per-iteration complexity of SILQGames is O(^3).The entire algorithm runs in O(k^3), where k ≤ M_iter is the number of iterations to convergence. §.§ Leadership FilteringThe SLF (SLF) estimates the likelihood that each agent is the leader of a two-agent interaction given noisy measurements 1:. Let filter context t = [t, t]^ consist of continuous game state t and leader t. Following conventional Bayesian filtering practices and denoting all agent controls t={(1)t, (2)t} for brevity, the SLF refines prior context belief t-1 with update rule t ∝t | t ∫_t-1t | t-1, t-1 t-1 dt-1, In <ref>, the context transition probability term t | t-1, t-1 = t, t | t-1, t-1, t-1 describes the likelihood of context t given the previous context t-1 and each agent's controls. Furthermore, the measurement likelihood t | t quantifies an expected measurement based on how well the new state t matches the observation t. Thus, we compute the leadership belief at time t by marginalizing t = b(t, t) over t: t = ∫_t t dt. Next, we make several assumptions to simplify the context transition probability.First, we assume conditional independence of t and t given t-1 and t-1. While these values often evolve together, we can make this assumption if the state responds slowly to changes in leadership. In particular, if we select a sufficiently small sampling period Δ t, then any change in the state t when t≠t-1 requires multiple time steps to observe.After this simplification,0.927.5ptt | t-1, t-1=t | t-1, t-1t | t-1, t-1. The term t | t-1, t-1 indicates that t depends on the previous leader and the previous state and controls through the dynamics f_t-1. The second term t | t-1, t-1 models how t depends on the previous state and controls. 
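In code, this factorization amounts to multiplying a dynamics-based state term by a separate leadership term. A schematic version is shown below, where a Gaussian process-noise model is one possible (assumed) choice for the state term and the leadership term is deliberately left as a user-supplied function:

import numpy as np

def context_transition_prob(x_t, l_t, x_prev, l_prev, u_prev, f, Q, leader_prior):
    # State term: p(x_t | x_{t-1}, u_{t-1}, l_{t-1}), here Gaussian process
    # noise around the deterministic propagation; l_prev enters only through
    # how u_prev was generated when f itself does not depend on the leader.
    mean = f(x_prev, u_prev[0], u_prev[1])
    resid = x_t - mean
    k = resid.size
    state_term = (np.exp(-0.5 * resid @ np.linalg.solve(Q, resid))
                  / np.sqrt((2 * np.pi) ** k * np.linalg.det(Q)))
    # Leadership term: p(l_t | x_{t-1}, u_{t-1}), supplied by the user.
    return state_term * leader_prior(l_t, x_prev, u_prev)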
In the passing scenario, for example, we could encode a prior relationship between t and whichever vehicle is in front. However, establishing this type of prior is difficult <cit.>, so we leave it to user discretion if such knowledge is available. We do not encode any such relationship in this work.Instead, we make a second, stronger assumption that the leadership transition process for t occurs independently of state t-1 and agent controls t-1. With these two simplifications, our context transition probability becomest | t-1, t-1 = t | t-1, t-1t | t-1.We model the leadership transition probability t | t-1 as a two-state Markov Chain with transition likelihood t≠t-1 | t-1 = p_trans. This assumption applies to a limited set of scenarios, though as mentioned, users may provide more complicated leadership transition models. Selecting a Filter. Due to the computational intractability of exactly evaluating Bayesian update rule <ref>, we use a particle filtering approach. Particle k has context tk = [tk, tk]^. Particle filters use a measurement model to compute the expected observation for a state t <cit.>. Our measurement model h(tk; tk) solves a Stackelberg game to generate simulated solution trajectories conditioned on the particle's leader. In the measurement update, we compare a subset of the solution to the ground truth observationsand update the likelihood of leadership using <ref>. We resample with replacement to eliminate unlikely particles when the effective number of particles, a metric that measures how well the particles represent the distribution, becomes low. We infer the leading agent based on the similarity of expected measurements, generated from Stackelberg games, to observations of the ground truth. Since Stackelberg equilibria satisfy leadership condition <ref>, converged solutions let the filter observe leadership indirectly via the measurement model. The Stackelberg Measurement Model.We construct a measurement model that relates the leader H_t-1^k in particle k at time t-1 with the expected state measurement at time t; in particular, we model the expected measurement from each particle as the output of Stackelberg game t-1kT_s(t-1k), played from the previous particle state over horizon T_s. Experiments determine that we must configure T_s carefully, neither too short to provide relevant leadership information nor too long as to cause excessive latency. We call the solutions to these games Stackelberg measurement trajectories and select the state at time t, t-1kt, as the expected measurement.In practice, playing a Stackelberg game from previous state t-1 requires each particle to maintain t-1 as additional context.To identify nominal strategies (i), k=0t-1:t+T_s-1 for the call to SILQGames within the measurement model, we require the user to define(T_s, 1:t-1k, 1:t-1) in a manner appropriate to the application, i.e., using previous particle contexts, a heuristic, etc. We describe one such heuristic in the appendix.After producing a measurement trajectory, we attach measurement uncertainty t to each state in it. Depending on the application, this step may incorporate uncertainty from sensors, processing, variation over time, etc. § EXPERIMENTS & RESULTS We first introduce the two-agent LQ shepherd and sheep game <cit.> and a nonlinear, nonquadratic variant. We use these to validate SILQGames and the SLF. Finally, we run the SLF on realistic driving scenarios.The LQ Shepherd and Sheep Game. 
In the shepherd and sheep game, each agent state t^(i) = [p^(i)_x, t, v^(i)_x, t, p^(i)_y, t, v^(i)_y, t] ∈ℝ^4 includes 2D position and velocity in the horizontal and vertical directions and each agent controls its planar acceleration (i)t∈ℝ^2. Agents' states evolve according to (linear) double-integrator dynamics in each direction, p^(i)_x, t+1 = p^(i)_x, t + v^(i)_x, tΔ t; v^(i)_x, t+1 = v^(i)_x, t + (i)x,tΔ t; p^(i)_y, t+1 = p^(i)_y, t + v^(i)_y, tΔ t; v^(i)_y, t+1 = v^(i)_y, t + (i)y,tΔ t. The game state combines the agent states t = [t^(1), t^(2)]^. Agents' costs g_t^(1)(t, (1)t, (2)t) = (p^(2)_x, t)^2 + (p^(2)_y, t)^2 + (1)t_2^2,g_t^(2)(…) = (p^(1)_x, t-p^(2)_x, t)^2+ (p^(1)_y, t - p^(2)_y, t)^2+ (2)t_2^2,are quadratic in state and controls and incentivize “shepherd” 1 to minimize “sheep” 2's distance to the origin (i.e., the barn) and 2 to minimize its distance to 1.Since the game is LQ, an analytic Stackelberg solution exists. Nonlinear, Nonquadratic (Non-LQ) Game. We form a nonlinear, nonquadratic variant of <ref>, <ref> by using unicycle dynamics and modifying <ref>.Each agent state t^(i) = [ p^(i)_x, t, p^(i)_y, t, ψ^(i)_t, v^(i)_t ]^∈ℝ^4 contains 2D position, heading, and velocity and each agent i controls yaw rate (i)t∈ℝ and longitudinal acceleration (i)t∈ℝ. Game state x_t = [t^(1), t^(2)]^, where agents' states evolve according to p^(i)_x, t+1 = p^(i)_x, t + Δ t v^(i)_t cosψ^(i)_t; p^(i)_y, t+1 = p^(i)_y, t + Δ t v^(i)_t sinψ^(i)_t; ψ^(i)_t+1 = ψ^(i)_t + Δ t (i)t; v^(i)_t+1 = v^(i)_t + Δ t (i)t. The nonquadratic cost g_t^(1') (t, (1)t, (2)t) = g_t^(1)(·,·,·) - log(s - p^(2)_x, t)- log(p^(2)_x, t- s) - log(s - p^(2)_y, t) - log(p^(2)_y, t - s)adds log barrier terms to <ref> which force 1 to keep 2's position (p^(2)_x,t, p^(2)_y,t) bounded within an origin-centered square of side length 2s. This nonquadratic cost remains convex.If 2 begins near the log boundary, then 1's cost starts high, and 1 may display more aggressive control. §.§ SILQGames ValidationTo test convergence for non-LQ games, we runsimulations of SILQGames on the non-LQ shepherd and sheep game. In each simulation, we fix 1's initial position at (2, 1) and vary 2's initial position along the perimeter of a radius-√(5) circle. Both agents begin stationary and face toward the origin. The nominal strategies apply zero input. We specify additional parameters in the appendix. Analysis. The results in <ref> indicate that all simulations converge. The median value of the convergence metric, shown with 10% and 90% percentile bounds, exhibits a generally decreasing trend.These results are consistent with our previous discussion on convergence, as SILQGames converges in every simulation, though without monotone decrease in the convergence criterion. In <ref>, we report the solution for a particular (arbitrarily chosen) simulation. Both agents' motion follows the incentive structure of the game: the distance between the two agents decreases, as does the distance from 2 to the origin. As expected, 1 exerts more control effort than 2 due to 2's leadership role and 1's incentive to constrain 2's position. Finally, we note that 1's motion changes sharply towards the end of its trajectory. Here, the unicycle comes to a stop and moves in reverse. These results demonstrate that, for a game with nonlinear dynamics and convex, nonquadratic costs, SILQGames converges to a solution that appears consistent with the dynamics and costs.Timing. 
We collect elapsed times for each iteration ofSILQGames simulations on AMD Ryzen 9 5900x 12-core processors. An iteration of SILQGames runs in a mean of 0.49 with a standard deviation of 0.29.§.§ Leadership Filter ValidationWe validate the leadership filter on analytic solution trajectories of horizon T_sim for the LQ shepherd and sheep game played with leader L_ = 1.Since we generate the ground truth 1:^ with a known leader, a perfect filter should infer the true leader with consistently high confidence.Our results indicate that the SLF produces an observable signal for Stackelberg leadership, but (as one can expect) noise and measurement model configuration significantly affect performance. We simulate noisy game state measurements t∼𝒩(t^, ) and assume the SLF knows . We list parameter values in the appendix.Analysis. In our results, the SLF produces the expected leadership probabilityfor part of the simulation horizon. From 1.5-3.5 in <ref>, the SLF correctly infers 1 as the leader with high likelihood.We can interpret these leadership results using the measurement trajectories. Examining the expected measurements in <ref> at 2.04, we note that the observations in this time range more closely match the measurement models generated with 1 as leader, which the SLF interprets as indicating leadership by 1. However, we also see complex behavior in <ref>. First, the SLF initially misidentifies the leader as 2, as shown by <ref>, because the Stackelberg measurement trajectories do not capture leadership information over the whole simulation horizon. Specifically, the measurement trajectories { h(t-1k, t-1k) } are straight lines that roughly reduce the state costs of the shepherd and sheep, but do not capture the granularity of motion from the ground truth due to higher control costs over the short horizon T_s ≪ T_sim.Second, the SLF is completely uncertain after 4.5.Near the origin, the contribution of process noise to the motion outweighs the contribution of the dynamics, and the measurement noise is too uncertain to clarify the state. Thus, the measurement and process noise obfuscate the dynamics.From these results, we see that the SLF requires parameter T_s to be of sufficient length to capture the influence of leadership on the measurement trajectories. We note that the SLF is sensitive to noise as it infers leadership indirectly by comparing the observed motion with the expected motion of a Stackelberg leader.Thus, too little process noise may lead particles to converge to an incorrect trajectory, and too much may reduce the signal-to-noise ratio.Timing. The mean overall runtime forsimulations of an LQ game withsteps is 10.91 with standard deviation 1.64. The mean and standard deviation SLF cycle runtimes for 50 particles and a 75-step measurement horizon are 0.82 and 0.35. Self-driving vehicle applications require sub-100 perception cycle latency <cit.>, so our implementation is not real-time. However, straightforward though nontrivial optimizations can reduce latency below 100, as demonstrated by <cit.>, which use fast particle filters with measurement models that involve solving dynamic games. §.§ Realistic Driving Scenarios We formulate passing and merging scenarios using realistic ground truth trajectories without a clear leader. We demonstrate that the SLF responds to changes in leadership, handles objectives that imperfectly model agent behavior,and that the results match right-of-way expectations. 
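Both driving scenarios reuse the unicycle model of <ref>; for reference, a direct transcription of that update (illustrative code) is:

import numpy as np

def unicycle_step(s, u, dt=0.05):
    # Per-agent state [p_x, p_y, psi, v]; controls u = [omega, accel].
    px, py, psi, v = s
    omega, accel = u
    return np.array([px + dt * v * np.cos(psi),
                     py + dt * v * np.sin(psi),
                     psi + dt * omega,
                     v + dt * accel])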
The dynamics and cost terms demonstrate that the SLF does not require LQ assumptions and works for nonconvex costs. Our results further indicate that SILQGames, used within the SLF, converges under these conditions. Each agent's state evolves according to unicycle dynamics.The simulation runs forsteps at period Δ t = 0.05.We model stage cost g^(i)_t as a weighted sum of incentives g^(i)_j, t,g^(i)_t = ∑^M^(i)_j=1 w^(i)_j g^(i)_j, t.Weights {w^(i)_j}⊂^+ specify the relative priorities of subobjectives.We define M^(i) = 6 terms to incentivize driving behaviors corresponding to legal or safety considerations.g^(i)_1, t = d(t^(i), t^(i),goal)g^(i)_2, t = -log( p^(i)_t - p^(j)_t ^2_2 - d_c)    ∀i ≠jg^(i)_3, t = -log(v_m - |v^(i)_t|) - log(Δψ_m - |ψ^(i)_t - ψ_r|)g^(i)_4, t = (ω^(i)_t)^2 + (α^(i)_t)^2 g^(i)_5, t = -log( p^(i)_t, llb - p^(i)_t ^2_2) -log( p^(i)_t, rlb - p^(i)_t ^2_2)g^(i)_6, t = exp(-(1/2) (p^(i)_t, cl - p^(i)_t)^C^-1 (p^(i)_t, cl - p^(i)_t)) Eq. <ref> requires a small distance between vehicle state t ^(i) and goal state t^(i),goal. For this scenario, d(·, ·) is a weighted Euclidean distance. Eq. <ref> requires a minimum safety radius d_c between the vehicles. Eq. <ref> requires obeying speed limit v_m and avoiding excessive heading deviation Δψ_m from road direction ψ_r. Eq. <ref> incentivizes low control effort. Eq. <ref> enforces left and right lane boundaries p^(i)_t, llb, p^(i)_t, rlb, based on lane width ℓ_w. Eq. <ref> uses a (nonconvex) Gaussian function with covariance C to discourage crossing the center line p^(i)_t, cl. We specify these parameter values in the appendix. Lastly, we define the direction of motion as the y-direction and the transverse direction as the -x-direction to maintain a righthand coordinate frame.Passing Scenario. The passing scenario begins with 2behind 1 and runs for .In the ground truth trajectories (<ref>), 2 initially follows 1 for 2.5, then passes in the other lane, and ends ahead of 1 in the initial lane.1 drives along the lane at a constant velocity, applying no controls. We simulate the leadership filter on the passing maneuver.We expect 1 to start with a high leadership probability and for that probability to decrease once the passing maneuver begins, and vice versa for 2. In <ref>, the state estimate tracks the ground truth, indicating that the leadership filter captures the game dynamics.Since the SLF produces the expected trends in the state estimates and agents' probabilities, our results show that Stackelberg leadership can match right-of-way expectations for scenarios without a ground truth leader. Moreover, the SLF responds appropriately to changing leadership dynamics over time.Finally, the result shows that SILQGames can handle nonconvex cost terms.Merging Scenario. The merging scenario involves three sections of road (see <ref>): two -long lanes separated by a barrier at x=0, a merging segment that decreases from width 2 ℓ_wto ℓ_woverof length, and a one lane road centered along x=0m. Both agents start in their own lanes, though 1 starts behind 2. In the ground truth, 2 merges before 1, which slows down to yield before merging. 2 delays its merge once it enters the merging segment. We construct the game played within the measurement model to incentivize each agent to merge quickly after entering the merging segment, so the cost we define for 2 does not exactly reflect its actual behavior. In <ref>, we simulate this merge with the leadership filter. We expect 2 to lead the interaction as it begins ahead and merges first. 
Given their objectives, we expect the agents' measurement trajectories to merge quickly, and we see these trajectories quickly move toward the center of the merging segment. Nevertheless, the leadership filter's state estimate tracks the ground truth, including 2's delayed merge, and the SLF infers 2 as the leader.Thus, the results match our right-of-way expectations despite agent objectives that do not exactly describe the observed ground truth behavior. § DISCUSSION & LIMITATIONSWe contribute SILQGames, an iterative algorithm to solve Stackelberg games with nonlinear dynamics and nonquadratic costs. Through empirical validation on non-LQ game scenarios, we show it reliably converges. We also introduce the SLF and apply it to noisy scenarios with known leaders and realistic driving situations. Results highlight the SLF's ability to estimate leadership in long-horizon interactions with changing leadership and with objectives that do not exactly reflect observed agent behavior. Furthermore, we discuss the robustness of our method to the measurement horizon and noise. Future directions include extending SILQGames to > 2 agents and overcoming combinatorial scaling challenges from the pairwise definition of Stackelberg leadership. Another critical direction involves establishing theoretical bounds on the number of SILQGames iterations. For the SLF, future work includes enabling real-time application using more efficient estimators and algorithmically adjusting the measurement horizon T_s to observe leadership dynamics over different horizons. Further work may also clarify when Stackelberg leadership appropriately models leadership. § ACKNOWLEDGMENTWe thank Professor Todd Humphreys and members of the CLeAR and SWARM Labs at UT Austin for feedback.SILQGames Parameters. We vary the initial position of 2 about (-1, 2) along aarc of a circle.We set convergence threshold τ =, the maximum number of iterations to , and minimum step size α_min =. We play the game forwith period Δ t =( steps). The nominal controls apply zero input.SLF Parameters.In our examples, we select nominal controls for measurement models with a simple heuristic that returns T_s-length control trajectories for each agent, (…) ={ [(i)t-1⋯(i)t-1]}. We configure the number of particles = 50.The Stackelberg measurement horizon T_s = 75 steps (1.5). Let p_trans = 0.02, so transitioning is thus likely enough that particles can switch leadership state and model dynamic leadership transitions without injecting excessive uncertainty into the inference.For the process noise uncertainty , we set position and heading variances on the order of magnitude of 10^-3 and velocity variances to 10^-4. SLF measurement uncertainty = 5 · 10^-3 I. The convergence threshold τ=1.5 · 10^-2, the max iteration count M_iter = 50, and step size α_min = 10^-2.Driving Scenario Parameters. Let speed limit v_m = with initial headings aligned with the road direction ψ_r. Lanes are ℓ_w = wide. A safety violation occurs if the vehicles come within d_c = of one another. We constrain acceleration and rotational velocity magnitudes toand . The measurement horizon T_s=, with sampling periods of 0.05 (20). We initialize the leadership prior toand use 100 particles.The center line is at x =. Each agent begins with velocity . Other parameters are identical to the SLF parameters.
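To make the cost structure of <ref>–<ref> concrete, the sketch below assembles the weighted-sum stage cost for one agent and collects the filter parameter values stated above into a configuration. Parameter names are illustrative; quantities not reported in the text are left as required arguments rather than guessed.

import numpy as np

def driving_stage_cost(s, u, s_goal, p_other, prm, w):
    # State s = [p_x, p_y, psi, v]; control u = [omega, accel]; w holds the
    # six sub-objective weights w^(i)_j.
    px, py, psi, v = s
    p = np.array([px, py])
    g = np.empty(6)
    g[0] = np.sum(prm["goal_wts"] * (s - s_goal) ** 2)            # weighted goal distance (squared form shown)
    g[1] = -np.log(np.sum((p - p_other) ** 2) - prm["d_c"])       # collision barrier
    g[2] = (-np.log(prm["v_m"] - abs(v))
            - np.log(prm["dpsi_m"] - abs(psi - prm["psi_r"])))    # speed / heading limits
    g[3] = u[0] ** 2 + u[1] ** 2                                  # control effort
    g[4] = (-np.log(np.sum((prm["p_llb"] - p) ** 2))
            - np.log(np.sum((prm["p_rlb"] - p) ** 2)))            # lane boundaries
    d_cl = prm["p_cl"] - p
    g[5] = np.exp(-0.5 * d_cl @ np.linalg.solve(prm["C"], d_cl))  # center-line penalty
    return w @ g

# Filter values reported above (quantities not stated are omitted).
slf_config = dict(
    n_particles=50, T_s=75, p_trans=0.02,
    meas_cov=5e-3,                       # measurement uncertainty = 5e-3 * I
    process_var_pos=1e-3,                # position / heading variance (order of magnitude)
    process_var_vel=1e-4,                # velocity variance (order of magnitude)
    tau=1.5e-2, M_iter=50, alpha_min=1e-2,
)
driving_config = dict(dt=0.05, n_particles=100)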
http://arxiv.org/abs/2310.18171v1
{ "authors": [ "Hamzah Khan", "David Fridovich-Keil" ], "categories": [ "cs.MA", "cs.GT" ], "primary_category": "cs.MA", "published": "20231027143053", "title": "Leadership Inference for Multi-Agent Interactions" }
SynergyNet: Bridging the Gap between Discrete and Continuous Representations for Precise Medical Image Segmentation Vandan Gorade^1, Sparsh Mittal^1, Debesh Jha^2, Ulas Bagci^2^1 Indian Institute of Technology Roorkee, India^2 Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, USA January 14, 2024 ===============================================================================================================================================================================================================In recent years, continuous latent space (CLS) and discrete latent space (DLS) deep learning models have been proposed for medical image analysis for improved performance. However, these models encounter distinct challenges. CLS models capture intricate details but often lack interpretability in terms of structural representation and robustness due to their emphasis on low-level features. Conversely, DLS models offer interpretability, robustness, and the ability to capture coarse-grained information thanks to their structured latent space. However, DLS models have limited efficacy in capturing fine-grained details. To address the limitations of both DLS and CLS models, we propose SynergyNet, a novel bottleneck architecture designed to enhance existing encoder-decoder segmentation frameworks. SynergyNet seamlessly integrates discrete and continuous representations to harness complementary information and successfully preserves both fine and coarse-grained details in the learned representations. Our extensive experiment on multi-organ segmentation and cardiac datasets demonstrates that SynergyNet outperforms other state of the art methods including TransUNet: dice scores improving by 2.16%, and Hausdorff scores improving by 11.13%, respectively. When evaluating skin lesion and brain tumor segmentation datasets, we observe a remarkable improvements of 1.71% in Intersection-over-Union scores for skin lesion segmentation and of 8.58% for brain tumor segmentation. Our innovative approach paves the way for enhancing the overall performance and capabilities of deep learning models in the critical domain of medical image analysis.§ INTRODUCTIONMedical image segmentation, a key step in gaining vital anatomical insights, assists clinicians in injury identification, disease monitoring, and treatment planning. As reliance on medical image analysis grows, the demand for precise, robust segmentation techniques rises. In this regard, deep learning has greatly improved our ability to do this. Existing deep learning models can be divided into continuous latent space (CLS) and discrete latent space (DLS) models. The CLS models represent latent variables as continuous values, enabling fine-grained representation.CLS Models such as FCNs <cit.>, UNet <cit.>, and TransUNet <cit.>and others <cit.> have shown an ability to capture spatial relationships and fine-grained details for medical image segmentation. However, these models offer limited latent interpretable representations of structural information and robustness <cit.> in terms of generalization. DLS methods employ discrete codes instead of continuous values for latent variables. They use techniques such as vector quantization to discretize the latent space into a finite set of elements representing anatomical structures. This enables efficient and generalized data representation. Approaches such as VQVAE <cit.> and VQGAN <cit.> have shown promise in image generation, representation learning, and data compression. 
Recent studies <cit.> highlight the effectiveness of DLS models in achieving interpretable and robust medical segmentation, particularly for organs like lungs, retinas, optic discs, and prostates. However, DLS models struggle to capture fine-grained details and complex spatial relationships, especially in multi-organ and cardiac segmentation tasks. Accurate modeling of spatial interdependencies between organs is crucial for precisely segmenting intricate boundaries and overlapping structures. Recent studies  <cit.> have highlighted the advantages of learning complementary information across various domains, including medical imaging. Motivated by this trend, our study aims to address the pivotal question: “How can we effectively integrate complementary information from discrete and continuous latent space models for improved medical image segmentation?”.We present SynergyNet, a novel bottleneck architecture designed specifically for encoder-decoder segmentation models, aiming to enhance medical image segmentation results by integrating continuous and discrete latent spaces. SynergyNet includes the Quantizer, DisConX, and Refinement modules. The encoder extracts a detailed continuous representation, while the quantizer module maps it to a compact discrete representation using vector quantization. By reducing dimensionality, the quantizer module enables efficient, structured representation while preserving essential information. The DisConX module serves as a bridge, employing cross-attention to effectively combine the discrete and continuous representations. Leveraging their complementary information, the DisConX module enhances pattern capture and interpretation. The refinement module further enhances the fused features, using hard attention to emphasize essential elements and filter out noise. The refinement module improves discriminative power and segmentation quality by focusing on relevant features. Our contributions are as follows:* We propose SynergyNet, a novel method that integrates discrete and continuous representations to enhance medical image segmentation performance. This integration has not been explored in prior studies for medical image segmentation tasks. * Our study demonstrates the effectiveness of combining CLS and DLS models in improving model generalization across diverse datasets. By leveraging CLS models for fine-grained detail capture and DLS models' structured latent space for encoding coarse-grained details, we observe notable enhancements in learning and generalization. This integration effectively utilizes the strengths of each approach, resulting in improved performance across various datasets.* SynergyNet is extensively evaluated on four diverse datasets, including Synapse multi-organ segmentation, ACDC dataset for cardiac segmentation, ISIC 2018 dataset for skin lesion segmentation, and brain tumor segmentation dataset. Results show that SynergyNet outperforms both CLS <cit.> and DLS-based methods <cit.> across all evaluated datasets. Qualitative analysis confirms the efficacy of SynergyNet in capturing intricate anatomical structures and achieving more precise segmentation compared to existing methods § PROPOSED METHOD We first discuss the preliminaries (Section <ref>) and then present our newly proposed algorithm (SynergyNet) and its architecture (Section <ref>).§.§ Preliminaries§.§.§ Problem StatementMedical image segmentation aims to automatically label anatomical structures or pathological regions within medical images. 
Mathematically, this involves finding a mapping function f that assigns labels y to pixels x in the input image domain 𝒳. The goal is to maximize the conditional probability of the ground truth segmentation labels ŷ given the input image x, i.e., ŷ = max_y P(y|x). Learning the parameters of the mapping function f involves assigning the correct labels to each pixel using training data. The learning process employs a loss function that usually consists of Binary Cross Entropy (BCE) and Dice similarity coefficient. This loss function can be defined as follows:L_seg = BCE(y, ŷ) + (1 - Dice(y, ŷ)), where BCE(y, ŷ) calculates the binary cross entropy loss between the predicted labels y and the ground truth segmentation ŷ, and Dice(y, ŷ) computes the dice similarity coefficient between y and ŷ.§.§.§ Vector Quantization Following VQVAE<cit.>, Vector quantization (VQ) transforms continuous latent space vectors z_con∈ℝ^dim into discrete codes e_k from a predefined codebook E ∈ℝ^K × dim, where K is the codebook size. The objective of VQ is to find the code e_k from the codebook that minimizes the euclidean distance to the input vector z_con. This code e_k serves as the discrete representation z_dis of z_con. During training, the codebook E and the mapping functions between the continuous and discrete representations are learned by minimizing the quantization loss ℒ_quant = ‖ z_con - e_k ‖_2^2. The quantization process efficiently encodes and decodes data while preserving important information in discrete representations. We use the total loss function ℒ_total = ℒ_seg + ℒ_quant for end-to-end model training.§.§.§ Multi-head Cross-attention Mechanism The multi-head cross-attention mechanism extends the cross-attention by incorporating multiple attention heads. Each attention head attends to different subspaces of queries and keys, capturing diverse relationships and dependencies. Given a set of queries Q and keys K, multiple sets of attention weights are computed, one for each attention head. The relevance scores between a query q_i and a key k_j are obtained using a similarity function denoted as score(q_i, k_j) = sim(q_i, k_j). The softmax function is applied to transform the relevance scores into attention weights for each attention head:𝒜_soft^(h)(q_i, k_j) = exp(score^(h)(q_i, k_j))/∑_j'exp(score^(h)(q_i, k_j'))The multi-head cross-attention mechanism then computes a weighted sum of the values associated with the keys using the attention weights of each attention head h:z^attn_h = ∑_j 𝒜_soft^(h)(q_i, k_j) · v_j.Here, z^attn_h represents the aggregated result for the given query q_i, considering the importance assigned by the attention weights of the h-th attention head. The outputs from all the attention heads are concatenated and linearly transformed to produce the final output:z^attn = Concat(z^attn_1, z^attn_2, …, z^attn_h) · W^O.The multi-head cross-attention mechanism enables the model to capture various interactions and dependencies between queries and keys, enhancing its representation and information retrieval capabilities. §.§ Proposed Architecture Our proposed architecture, illustrated in Fig. <ref>(c), consists of three key components: the encoder, bottleneck, and decoder. The bottleneck incorporates the Quantizer, DisConX, and Refinement modules. Starting with an input image X, the encoder function f generates the continuous representation z_con. The Quantizer module (Section <ref>) maps z_con to a more compact discrete representation z_dis, capturing essential information efficiently. 
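A minimal PyTorch sketch of the quantization step and the combined training loss described above is given below. The straight-through gradient trick is standard VQ-VAE practice and is an assumption here rather than something specified in the text.

import torch
import torch.nn.functional as F

def quantize(z_con, codebook):
    # z_con: (N, dim) flattened encoder features; codebook: (K, dim).
    idx = torch.cdist(z_con, codebook).argmin(dim=1)   # nearest code e_k
    z_dis = codebook[idx]
    quant_loss = F.mse_loss(z_con, z_dis)              # || z_con - e_k ||^2
    # Straight-through estimator (assumed) so gradients reach the encoder
    # despite the non-differentiable argmin.
    z_dis = z_con + (z_dis - z_con).detach()
    return z_dis, quant_loss

def total_loss(logits, target, quant_loss, eps=1e-6):
    # L_total = L_seg + L_quant with L_seg = BCE + (1 - Dice).
    bce = F.binary_cross_entropy_with_logits(logits, target)
    p = torch.sigmoid(logits)
    dice = (2 * (p * target).sum() + eps) / (p.sum() + target.sum() + eps)
    return bce + (1 - dice) + quant_loss

# The multi-head cross-attention mechanism described above can be
# instantiated directly, e.g. torch.nn.MultiheadAttention(512, 8, batch_first=True).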
The DisConX module (Section <ref>) combines the discrete and continuous representations through cross-attention, leveraging both benefits to enhance data interpretation. The Refinement module (Section <ref>) further improves the representation by emphasizing relevant features. This step enhances the model's discriminative power for the given task. §.§.§ DisConX Module The DisConX module integrates the discrete representation z_dis and continuous representation z_con using cross-attention, as discussed in Section <ref>. It calculates relevance scores between discrete queries q_i and continuous keys k_j, and computes attention weights using a softmax function. The module then performs a weighted sum of the continuous values v_j associated with the keys based on the attention weights. The computation happens as:z^attn_dc= ∑_j𝒜^(h_s)_soft(z_dis, z_con) · z_con. Here, h_s represents the index of the attention heads, z_dis denotes the discrete query, and z_con represents the continuous key/value. The resulting z^attn_dc is the aggregated representation, considering the attention weights from all attention heads. This integration of discrete and continuous representations enables the exchange of complementary information, enhancing the model's ability to capture complex patterns and improving performance in tasks such as semantic segmentation.Next, the information of z_dis, z_con, and z_attn^dc is fusedas z_f = Fusion(z_dis, z_con, z_dc^attn). The fusion operation integrates complementary information from discrete and continuous representations, enhancing the overall representation for subsequent refinement modules. We empirically choose addition for fusion. §.§.§ Refinement Module The proposed refinement module incorporates a hardness-aware self-attention mechanism, which captures the relevance and similarity between elements in the fused representation. This mechanism enhances the overall representation quality by emphasizing important elements and filtering out the noise. The element with the highest relevance score is identified as the most important. The attention weight for each element is determined by comparing its similarity to other elements. The equation below represents this calculation:𝒜^(h_h)_hard(z_f_i) = 𝕀(sim^(h_h)(z_f_i, z_fj) = max_j sim^(h_h)(z_f_i, z_f_j)).Here, the indicator function 𝕀 checks if the similarity between element z_f_i and any other element z_f_j is the maximum among all similarities. h_h is the index of attention head.Next, the self-attention mechanism calculates a weighted sum of the values associated with the selected elements using the attention weights for each attention head:z_ref = ∑_j 𝒜^(h_h)_hard(z_f_i) · (z_f_j).The refined information z_ref represents the output of the self-attention mechanism for the h_h-th attention head. This process is repeated for all attention heads. The resulting refined information from all the attention heads is then concatenated and linearly transformed to produce the final refined representation. It highlights the most important elements within the fused representation, considering multiple attention heads. This refined representation enhances the discriminative power and overall quality of the fused features. Finally, the fused representation z_f is added to z_ref and then passed through the decoder.§ EXPERIMENTAL PLATFORMDatasets: We utilized four open-source medical segmentation datasets for our experiments. 
The Synapse Multi-Organ Segmentation dataset <cit.> consists of 30 clinical CT cases with annotated segmentation masks for eight abdominal organs. We followed the configuration described in <cit.>, using 18 cases for training and 12 cases for testing. The ACDC dataset <cit.> is a cardiac MRI dataset with 100 exams, including labels for the left ventricle (LV), right ventricle (RV), and myocardium (MYO). We divided the dataset into 70 training samples, 10 validation samples, and 20 testing samples as per <cit.>. For skin lesion segmentation, we adopted the ISIC 2018 dataset <cit.> and followed the division into train, validation, and test sets as per previous work <cit.>. The Brain Tumour Segmentation (BTS) dataset <cit.> comprises 233 volumetric T1-weighted contrast-enhanced images from 233 patients (with a total of 3064 2D slices), including three types of brain tumors (meningioma, glioma, and pituitary tumor) with corresponding binary masks. We maintained an approximate 80:20 ratio for the training and test sets. Metrics:We utilize the Dice Similarity Coefficient (DSC) and the 95% Hausdorff Distance (HD) metrics for the synapse and ACDC datasets to follow the segmentation challange standards and benchmarking. For the ISIC-18 and BTS datasets, we employ a more comprehensive range of metrics per segmentation challenge benchmarking, including the Intersection over Union (IOU), DSC, Specificity (SP), Sensitivity (SE), and Accuracy (ACC). For HD, lower is better. For other metrics, higher is better.Implementation Details: We use PyTorch framework and train the models on three RTX 2080 GPUs, each with 11GB of memory. The input image size was set to 224 × 224. During training, we used a batch size of 8 and a learning rate of 0.01. We utilized the SGD optimizer with a momentum of 0.9 and weight decay of 0.0001. We employed data augmentations, such as flipping and rotating. Architecture Configuration: SynergyNet employs a ResNet50 encoder pre-trained on the ImageNet dataset, although we have no restriction on the choice of architecture for encoder. The quantizer module utilizes a codebook size of K=1024. The quantizer, DisConX and Refinement module maintain a hidden dimension of dim=512. We evaluate multiple SynergyNet variants, for example, SynergyNet-8s2h implies that h_s=8 and h_h=2, i.e., it has 8 DisConX heads and 2 refinement heads. The pre-and post-quantization blocks consist of two convolution block. The decoder has the same depth as the encoder.Techniques for Comparison:We compare SynergyNet against four CLS methods, i.e., UNet <cit.>, Att-UNet <cit.>, DeeplabV3+ <cit.> R50ViT <cit.>, TransUNet <cit.> and two DLS methods, i.e.,VQUNet <cit.> and TransVQUNet. TransVQUNet architecture is a combination of VQUNet and TransUNet. It consists of an encoder followed by a quantizer module and a transformer bottleneck, similar to the bottleneck of TransUNet. We kept the hyperparameters and architectural design consistent across all the methods for consistency. § EXPERIMENTAL RESULTS §.§ Synapse multi-Organ segmentation Table <ref> compares SynergyNet with both CLS and DLS methods. SynergyNet outperforms both CLS and DLS methods by a significant margin. Quantitatively, SynergyNet-8s2h achieves a 2.17pp improvement in DSC and a 12.20pp deterioration in HD compared to TransUNet, while showing an 11.24pp improvement in DSC and an 11.16pp deterioration in HD compared to TransVQUNet-8h (pp= percentage point). SynergyNet-8s8h variant shows the best results in terms ofHD metric. 
SynergyNet demonstrates superior accuracy in delineating the organs and capturing the boundary between them. It outperforms other methods in learning both coarse-grained anatomical structures (e.g., stomach and liver) and fine-grained anatomical structures (e.g., gallbladder and spleen). TransUNet, a well-engineered CLS method, exhibits comparable performance in learning fine-grained structures. On the other hand, DLS methods can capture coarse anatomical structures but struggle to capture fine-grained boundaries. SynergyNet benefits from the complementary information extracted by continuous and discrete latent spaces. Fig. <ref> further highlights the effectiveness of SynergyNet in accurately segmenting fine/coarse and complex structures. SynergyNet yields more robust and precise segmentation results even in the presence of intricate variations. Interpretability Analysis: Here, we analyze the bottleneck architecture to evaluate learned representations. Fig. <ref> visualizes the GradCAMs, revealing that CLS methods excel in capturing fine organ boundaries, while DLS methods excel in locating organs but struggle with fine boundary details. In contrast, SynergyNet effectively captures both fine and coarse boundaries, emphasizing the importance of leveraging complementary information. These findings further support the significance of synergistic effects. §.§ Cardiac Segmentation From Table <ref>, we note that the proposed SynergyNet outperforms both continuous and discrete baselines. SynergyNet can effectively capture complex heterogeneous structures. Compared to TransUNet and TransVQUNet-8s2h, SynergyNet-8s2h demonstrates 0.07pp and 11.61pp higher DSC and 0.06pp and 3.23pp lower HD. The qualitative results are shown in Fig. <ref> further validate effectiveness of our approach in delivering more accurate segmentation results.§.§ Skin Lesion Segmentation Table <ref> demonstrates the quantitative results on the ISIC 2018 dataset. Compared to CLSmethods, DLS-based approaches can effectively capture shapes like lesions, which typically exhibit less variability in terms of shape and size compared to organs and cardiac structures. However, the proposed SynergyNet method consistently outperforms both CLS and DLS-based methods, showcasing its ability to generalize well across different scenarios. Fig. <ref> further highlights SynergyNet's ability to capture both coarse and fine-grained structured skin lesions. CLS-based methods tend to over-segment non-contour structures, while DLS-based methods such as VQUNet tend to under-segment lesions. In contrast, SynergyNet successfully and accurately segments lesions with smoother boundaries, demonstrating the importance of learning synergistic representations.§.§ Brain Tumour SegmentationSynergyNet achieves the best score on all metrics on the the BTS dataset (Table <ref>) and outperforms the second-best method by a large margin.From Fig. <ref>, we note that DLS methods tend to lose boundary information, but theysegment regions of interest more accurately than CLS methods for this particular case. On the other hand, SynergyNet consistently identifies regions of interest with smoother boundaries, surpassing both CLS and DLS methods. SynergyNet accurately predicts lesions, even in case of varying locations, sizes, and modality views. It effectively suppresses irrelevant information, such as the background. § ABLATION STUDIESUnless otherwise mentioned, we use K=1024, dim=512, h_h=8, h_s=2 and backbone as ResNet-50. 
§.§ Codebook analysisFrom Table <ref>, we observe a direct relationship between K and the performance of SynergyNet on the Synapse dataset, where increasing K leads to a notable improvement in HD scores.On the ACDC dataset, the trend is different, such that K=256 gives the best HD score, and K=128 and K=1024 give comparable results. A smaller codebook size in the quantization module leads to higher compression and more aggressive quantization, but it can result in the loss of local information. This loss of fine-grained details and subtle variations can negatively impact the segmentation model's ability to capture intricate boundaries, leading to lower HD scores.To achieve the best segmentation performance, the codebook size needs to be chosen so as to balance compression and preservation of local information. §.§ Hidden Dimension (dim) Analysis From Table <ref>, we observe that using a codebook size of K = 1024 with dim (hidden dimension size) greater than 512 or dim less than 512 leads to a deterioration in performance. Empirically, we found that setting K to be twice the value of dim (K = 2 * dim) yields the best performance. Thus, multiple parameters, including the dataset characteristics, influence the overall performance.§.§ Bottleneck Size AnalysisTable <ref> presents the impact of the size of the DisconX module and the Refinement Module of SynergyNet on the overall segmentation performance. The combinations h_s=8, h_h=0 and h_s=2, h_h=0 denote the configurations without the refinement module.We observe a significant deterioration in the overall performance when the refinement module is not utilized.For the Synapse dataset, the best value of DSC is obtained for h_s=8, h_h=2 and the best value of HD is obtained for h_s=8, h_h=8. For the ACDC dataset,the combination h_s=8, h_h=2 results in the best values of DSC and HD.Overall, the optimal module size is dataset and task-dependent. It is crucial to consider these factors when determining the optimal sizes for the DisConX and refinement modules.§.§ Contribution of DisConX ModuleThe DisConX module plays a crucial role in the SynergyNet's ability to learn fine-grained local features. To understandits contribution,we create a variant SynergyNet(Fusion), which replaces the DisConX module with a simple feature fusion approach. It combines discrete and continuous representations and passes them through a refinement module. As shown in Table <ref>, this variant attains lower performance, which clearly demonstrates that the DisConX module is essential for learning fine-grained local features. Overall, results indicate that selectively attending to complementary information preserves higher-quality discriminative and semantic information. §.§ Backbone AnalysisWe evaluate SynergyNet with ResNet and EfficientNet backbones. For the Synapse dataset, ResNet-50 achieved a DSC score of 79.65%, and EfficientNet-B7 achieved the lowest HD score of 21.53%. In the ACDC dataset, ResNet-101 performed the best on both metrics. EfficientNet-B0 exhibited remarkable boundary delineation capabilities despite its shallower architecture. Please Refer to the supplementary materials for parameters analysis. Limitations: i) The quantizer's reliance on selecting the most similar codebook item for input representation may lead to difficulties in capturing intricate patterns, potentially causing information loss. ii) Both CLS and DLS struggle to effectively model inter-class relationships, resulting in increased false negatives. 
SynergyNet reduces false negatives compared to CLS and DLS but can still be further improved. § CONCLUSION AND FUTURE WORK We propose SynergyNet, a novel bottleneck architecture for learning complementary information from CLS and DLS. Extensive experiments and ablation studies confirm that SynergyNet captures both fine- and coarse-grained details in the learned representations and outperforms previous works. SynergyNet is a promising model for medical image analysis that offers high performance. Integrating SynergyNet with efficient architectures such as Swin Transformer <cit.> and HiFormer <cit.> shows promise for further advancements. Exploring SynergyNet's performance with unsupervised models is an intriguing research direction, as it would enable leveraging unlabeled data to enhance capabilities in medical image analysis and holds the potential to improve efficiency and performance in this critical domain.
http://arxiv.org/abs/2310.17764v1
{ "authors": [ "Vandan Gorade", "Sparsh Mittal", "Debesh Jha", "Ulas Bagci" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231026201344", "title": "SynergyNet: Bridging the Gap between Discrete and Continuous Representations for Precise Medical Image Segmentation" }
Difference-in-differences (DID) is a popular approach to identify the causal effects of treatments and policies in the presence of unmeasured confounding. DID identifies the sample average treatment effect in the treated (SATT). However, a goal of such research is often to inform decision-making in target populations outside the treated sample. Transportability methods have been developed to extend inferences from study samples to external target populations; these methods have primarily been developed and applied in settings where identification is based on conditional independence between the treatment and potential outcomes, such as in a randomized trial. This paper develops identification and estimators for effects in a target population, based on DID conducted in a study sample that differs from the target population. We present a range of assumptions under which one may identify causal effects in the target population and employ causal diagrams to illustrate these assumptions. In most realistic settings, results depend critically on the assumption that any unmeasured confounders are not effect measure modifiers on the scale of the effect of interest. We develop several estimators of transported effects, including a doubly robust estimator based on the efficient influence function. Simulation results support theoretical properties of the proposed estimators. We discuss the potential application of our approach to a study of the effects of a US federal smoke-free housing policy, where the original study was conducted in New York City alone and the goal is to extend inferences to other US cities. § INTRODUCTION Difference-in-differences (DID) is a popular identification strategy when studying the causal effects of large-scale social and economic policies <cit.>. DID is appealing when: (i) randomization is not feasible, (ii) there is variation across jurisdictions and over time in terms of whether a policy was adopted, and (iii) not all variables that are confounders of the policy-outcome relationship are measured, leading to concerns about confounding bias <cit.>. By comparing pre- and post-policy outcomes in both the jurisdiction implementing the policy and a comparable jurisdiction without the policy, and making a so-called parallel trends assumption (i.e., that changes in average potential outcomes over time are independent of policy adoption) <cit.>, DID can identify the causal effect of the policy on the outcome, even in settings where unmeasured variables would confound either (i) a pre-post analysis or (ii) a post-policy comparison between the treated and untreated jurisdictions. An important (often under-recognized) aspect of DID is that it identifies the average treatment effect among the treated (ATT) in the post-policy period, and not the average treatment effect (ATE) or other common parameters of interest <cit.>. For example, the ATT in a study of a policy raising the minimum wage is the effect of the policy on outcomes for the population living in the jurisdiction(s) that actually raised the minimum wage, and not the population living in all the jurisdictions in the study, both those with and without the policy. ATT estimates resulting from a DID analysis can be informative as to whether to maintain or discontinue policies in those locations.
However, a major goal of DID research is often to inform policy decisions by governments that have not yet adopted the policy of interest; in the minimum wage example, it may be of interest to inform decisions by the federal government or states with less generous minimum wage laws. Naïvely considering the estimated policy effects to apply to untreated jurisdictions requires the additional, strong assumption that there are no effect measure modifiers (measured or unmeasured) whose distribution varies between the treated jurisdiction(s) under study and the untreated jurisdiction(s) to which one wishes to make inferences <cit.>. For example, such an extrapolation would be biased if effects of the minimum wage differ by age, and age distributions differed across states.Generalizability and transportability methods have been developed with the goal of formally extending inferences made in one population to another population in the presence of effect heterogeneity <cit.>. These methods have mainly been applied in contexts where internal validity is established based on an unconfoundedness assumption, typically achieved through a randomized controlled trial (RCT). It is well-known that real-world RCTs can deliver high internal validity, but that inferences from such studies apply only to the people participating in the RCT, which may differ from the true target population in terms of effects experienced. We define “target population” to be the population to whom inference is desired, as dictated by substantive concerns. For example, in RCTs of medical treatments, the target population may be the population that should receive treatments in practice, which may differ in important aspects from the individuals included in the trial <cit.>. Methods exist to quantitatively extend (i.e., transport or generalize) effects estimated in RCTs to target populations other than the included study sample, possibly alleviating the well-known tradeoff between internal and external validity in such studies <cit.>.It is plausible that transportablity methods could be used to quantitatively extend causal effects estimated from DID studies to target populations other than the treated sample, possibly alleviating the well-known tradeoff between internal and external validity in such DID studies as well. However, to our knowledge, neither identification assumptions nor estimators for transporting DID estimates have been addressed in the literature. DID presents special challenges for transportability because of the presumed existence of unmeasured confounders. Standard approaches to transportability assume that a conditional average treatment effect is constant between the sample and target population after conditioning on a measured set of covariates; if any unmeasured confounders in a DID application are also effect measure modifiers of the treatment-outcome relationship, then the existence of these unmeasured confounders creates complexities in evaluating this condition which have not, to our knowledge, been explored. Causal diagrams <cit.> may facilitate such an exploration, as they have been essential in understanding assumptions for identification of transported effects <cit.>, but have seen limited use in DID settings <cit.>. 
This disconnect may be because causal diagrams generally only capture nonparametric independence assumptions <cit.>, whereas parallel trends is a semiparametric assumption partially restricting the functional form of the outcome distribution <cit.>.This paper develops a formal approach to identification and estimation of effects in a target population, based on DID conducted in a study sample that differs from the target population. This paper is framed as transportability in the sense that we assume the study sample is not a subset of the target population <cit.>, though our results can easily be extended to the case where the study sample is nested within the target population. We employ causal diagrams to understand the sampling mechanism (i.e, the model that distinguishes the study sample from the target population), and show that our results rely crucially on the assumption that unmeasured confounders are either independent of the sampling process or are not effect modifiers on the scale of the effect being estimated (in this paper, we focus on additive effects such as the ATT and ATE, but our results can be generalized to non-additive measures, such as risk ratios). Section <ref> describes the observed data and preliminary assumptions, Section <ref> presents key identification results linking the observed data in the sample to causal quantities in the target population, and Section <ref> presents estimators (including a doubly robust estimator based on the efficient influence function) for these quantities, which are illustrated using simulation in Section <ref>. Section <ref> concludes.§ PRELIMINARIESSuppose we observe data on the variables W_i, A_i, Y_i0, and Y_i1 in a study sample containing n individuals or units (i=1,...,n), where W_i are (possibly multivariate) baseline covariates measured just before exposure, A_i is a binary exposure, and Y_it (t=0,1) are outcomes measured before (t=0) and after (t=1) exposure occurs. Hereafter, we drop the i subscript unless needed to resolve ambiguity. Suppose that the study sample is not representative of the true target population of interest, and that the latter contains N individuals (i=1,...,N). We let S=1 denote membership in the study sample and S=0 denote membership in the target population. We assume that outcomes are only measured in the study sample, but that treatment and covariates are measured in both the study sample and the target population. Thus, the observed data take the form O = {S, A, W, Y_0S, Y_1S}. Throughout, we use f(x|·) to denote a conditional density if x is continuous and a conditional probability mass function if x is discrete. Caligraphic uppercase letters denote the support of a random variable.We use Y_t(a) to denote a potential outcome, or the outcome that would have occurred if exposure A had been set by intervention to the value a. We assume the following throughout:(No interference) Y_it(a_i, a_i')=Y_it(a_i) for i≠ i', with i,i' such that {S_i, S_i'}∈{0,1}^2 (Treatment version irrelevance) If A_i=a, then Y_it=Y_it(a) with i,i' such that {S_i, S_i'}∈{0,1}^2 Assumptions <ref> and <ref> are standard in the causal inference and transportability literature and are not specific to the DID setting. Assumption <ref> requires that one unit's treatment does not impact another unit's potential outcome in either the sample or the target. 
Assumption <ref> requires that treatments are sufficiently well-defined that observed outcomes can stand in for potential outcomes under treatment with the observed exposures, and that versions of the treatment do not differ between the sample and target. Assumptions <ref> and <ref> are often referred to together as the stable unit treatment value assumption (SUTVA). §.§ Difference-in-differences in the study sampleHere, we give a brief review of causal identification based on DID, which we will assume is the basis of identification in the study sample. Specifically, we invoke the following assumptions, standard in the DID literature <cit.>:(No anticipation): Y_0(a)=Y_0 for a=0,1 (Positivity of treatment assignment) If f(w|S=1)>0 then f(A=0|W=w, S=1)>0 with probability 1 for all w∈𝒲 (Parallel Trends): For w∈𝒲: {Y_t(0)-Y_t-1(0)|A=1, S=1, W=w}={Y_t(0)-Y_t-1(0)|A=0, S=1, W=w} Assumption <ref> states that future treatment does not impact the prior outcomes (this assumption can also be relaxed to allow anticipation up to a known time period <cit.>). It is well known that under Assumptions <ref>-<ref>, it is possible to identify the W-conditional SATT, defined as η(w)≡[Y_1(1) - Y_1(0)|W=w, A=1, S=1]. Specifically, under Assumptions <ref>-<ref> we have:η(w)=[Y_1-Y_0|W=w, A=1,S=1]- [Y_1-Y_0|W=w, A=0, S=1] ≡ m_1(w) - m_0(w),where we define m_a(w) = [Y_1-Y_0|W=w, A=a, S=1]. By extension, the unconditional sample ATT (abbreviated SATT, usually the focal parameter in DID) is identified as [Y_1(1) - Y_1(0)|A=1,S=1]=[m_1(W) - m_0(W)|A=1,S=1]. However, and importantly for our discussion, Assumptions <ref>-<ref> are not sufficient to identify parameters unconditional on A=1, such as the sample average treatment effect (SATE), defined as [Y_1(1) - Y_1(0)|S=1]. This is because parallel trends provides information about potential outcomes only among the treated group; without further assumptions there is no basis for identification of potential outcomes for the group A=0. Moreover, and as is the focus of this paper, additional assumptions would be required to identify effects outside the study sample, since parallel trends and positivity of treatment assignment are conditional on S=1.§ IDENTIFICATION OF TRANSPORTED TREATMENT EFFECTSIn this section we consider the task of equating a causal estimand (i.e., one specified in terms of potential outcomes) in the target population to a function of the distribution of the observed data, O. Specifically, we focus on the population average treatment effect in the treated (PATT), defined as [Y_1(1)-Y_1(0)|A=1, S=0], and the population average treatment effect (PATE) defined as [Y_1(1)-Y_1(0)|S=0]. We begin by introducing a motivating example, after which we introduce and discuss a set of sufficient identifying assumptions, and present identifying formulas which equal each causal estimand if the assumptions are true.§.§ Motivating exampleAs of July 30, 2018, a US Department of Housing and Urban Development (HUD) rule required all public housing authorities to implement smoke-free housing (SFH) policies banning smoking in residences. As a motivating example, consider the question, what effect did the federal SFH policy have on air quality in US public housing developments? To answer this question, we consider transporting the results from a study conducted in public housing buildings in New York City (NYC) only. 
Specifically, a team of investigators conducted air quality monitoring in living rooms and common areas of NYC public housing buildings, both before the federal policy went into effect (from April to July 2018), and again approximately every six months for 3 years post-policy. A DID analysis was conducted to estimate the effect of the policy on indoor air nicotine (among other measures), using as a comparison group a sample of households receiving housing assistance through a program known as Section 8, a public subsidy to supplement rental costs in private sector buildings <cit.>. Air quality was sampled in stairwells, hallways, and living rooms; for simplicity here we focus on stairwells. Because of concerns about systematic variation in outdoor air quality between building types, the investigators adjusted for outdoor ambient PM_2.5 in their DID estimates. (We note that in the original study, building inclusion criteria were high-rise [>15 floors], large resident population [>150 units], at least 80% Black or Hispanic residents, and at least 20% younger than 18 years; for simplicity we ignore these criteria here.)In this example, Y_t is a continuous variable representing log-transformed air nicotine in stairwells (where we let t=0 denote April-July 2018 and t=1 denote April-September 2021), A=1 denotes residence in a public housing building, A=0 denotes residence in a Section 8 household, S=1 denotes residence in NYC, S=0 denotes residence outside of NYC, and W is a continuous variable capturing outdoor ambient PM_2.5. Thus, if Assumptions <ref>-<ref> hold (along with correct model specification and no measurement error), the DID results in this study may be interpreted as estimates of the effect of the SFH policy on indoor air quality for NYC public housing residents only. Though the study is informative as to the effect of the policy in NYC, it is also of interest to federal policymakers to estimate the PATT, which here represents the effect of the HUD rule on air nicotine in April-September 2021 in public housing in the US outside of NYC (S=0). Moreover, it may also be of interest to assess the PATE, which here represents the effect of a hypothetical policy covering both public housing and Section 8 housing. Importantly, the estimates in this study cannot be interpreted as estimates of the PATT or PATE without additional assumptions. §.§ Naïve approachWe begin with an approach to transportability that does not take into account the causal structure of DID (in particular, does not take into account unmeasured confounding), after which we will use causal diagrams to illustrate why this approach will usually fail. Since all identified potential outcomes in equation (<ref>) are conditional on A=1, an obvious starting point in attempting to transport effects identified through DID is to identify the PATT. Inspecting equation (<ref>), a natural approach may be to assume that the W-conditional SATTs (conditional on each value of w) are equal between the sample and the target. 
If this were the case, one could identify the PATT using the following expression:[Y_1^1-Y_1^0|A=1,S=0]= [ (Y_1^1-Y_1^0 |W, A=1, S=1)|A=1,S=0] =[m_1(W)-m_0(W)|A=1,S=0]Specifically, in order for equation (<ref>) to hold, the following assumptions would be sufficient:(Exchangeability of selection) Y_t(a)S | W, A=1 (Positivity of selection) If f(w|A=1, S=0)>0 then f(S=1|W=w, A=1)>0 with probability 1 for all w ∈𝒲Assumption <ref> states that, among the treated group, the distributions of potential outcomes in the sample and target are equal after conditioning on W. Assumption <ref> states that any covariate values that may occur in the target treated group must also be possible in the sample treated group.Assumptions <ref> and <ref> together imply that the W-conditional ATT is constant across settings, and hence that the PATT is identified by equation (<ref>).Though Assumption <ref> is similar to the exchangeability of selection assumption usually invoked for transportability of the average treatment effect (ATE), it differs importantly in that it must hold conditional on A=1.This is so because DID was the basis for identification in the sample, so any effects identified in the sample (whether SATT or W-conditional SATT) are conditional on A=1, and the basis for transportability is therefore the constancy of the W-conditional SATTs (not SATEs) across settings. This constancy is dependent on replacing potential outcomes in the treated target with those in the treated sample, and for this replacement to be licensed, Assumption <ref> must condition on A=1.Unfortunately, conditioning on A=1 means Assumption <ref> is unlikely to hold in most DID applications. To illustrate this point, Figure <ref> displays a single world intervention graph (SWIG) depicting a common DID setting. SWIGs are similar to causal directed acyclic graphs (DAGs) in that nodes represent random variables, directed arrows represent direct effects, and conditional independencies are given by d-separation rules <cit.>. SWIGs extend DAGs by depicting interventions on variables as split nodes (A|a in Figure <ref> indicates intervening to set A=a), and any variables affected by the intervention variable become potential outcomes under that intervention. Figure <ref> represents a standard DID scenario in the sense that U represents unmeasured common causes of A and Y_1 that would confound a cross-sectional comparison, and whose existence motivates the use of DID. In Figure <ref>, we add S with arrows into A and W, depicting the assumption that distributions of these variables differ across settings. Following the convention of selection diagrams, arrows emanating from S represent “exogenous conditions that determine the values of the variables to which they point” <cit.>. Assumption <ref> would not be expected to hold in Figure <ref> due to the existence of the path S→ A ← U → Y_1(a), on which A is a collider. Thus, this path is opened by conditioning on A (and not closed by conditioning on W), rendering S potentially associated with Y_1(a) conditional on {W, A=1}. For the same reason, S would also be associated with Y_0 conditional on {W, A=1} in at least one data distribution consistent with the SWIG. Importantly, such paths will be present whenever (i) there is unmeasured confounding, and (ii) the target and sample differ in the distribution of treatment (conditional on W). We expect (i) to always be the case (otherwise DID would be unnecessary). 
We also expect (ii) to be the case except in rare circumstances such as when A is experimentally assigned in both the target and the sample. Importantly, this failure of Assumption <ref> occurs regardless of whether the unmeasured confounders U differ marginally in distribution between the target and sample (i.e., whether or not there is an arrow from S into U in Figure <ref>). §.§ Identification via restrictions on effect heterogeneityThe analysis in the previous subsection illustrated that identification of transported effects based on exchangeability according to measured covariates (as in Assumption <ref>) is unlikely to be tenable in DID studies, since conditioning on A=1 will typically cause unmeasured covariates U to be associated with the sampling mechanism S regardless of whether this association exists marginally. Thus, Assumption <ref> would likely only be plausible if U were included in the conditioning event, but this would not aid identification since U is unmeasured. Fortunately, it is possible to identify transported effects when variables needed for exchangeability are unmeasured, so long as those variables are not also effect measure modifiers on the scale on which the causal effects are being measured, which we illustrate here.We begin by expressing the concept that unmeasured confounders U drive our decision to use DID by stating the following Assumptions, which relate only to identification in the sample:(Latent exchangeability of treatment) Y_t(a)A|W, U, S=1 for t∈{0,1} and a ∈{0, 1} (Latent positivity of treatment)If f(u, w|S=1)>0 then f(A=a|U=u, W=w, S=1)>0 with probability 1 for a ∈{0,1} and {u, w}∈{𝒰, 𝒲}In a sense, Assumption <ref> does not introduce any new restrictions because one can define U to be whatever variables (known or unknown) confound the cross-sectional association between A and Y_t and which motivate the use of DID in the first place. In contrast, Assumption <ref> may be restrictive; the requirement that unmeasured confounding variables U (known or unknown) have overlapping distribution between the treated and untreated may not hold in some settings and is not necessary for identification of the SATT via DID. (As an aside, it can be shown that parallel trends will hold if (i) Assumptions <ref> and <ref> hold and (ii) U exerts a constant effect on Y_0 and Y_1 on the additive scale within levels of W among the treated <cit.>. However, in this paper we assume parallel trends to hold and do not consider what conditions render it plausible or not.) Next consider the follow assumptions aimed at identification in the target:(Latent exchangeability of selection) Y_t(a)S|W, U, A for t∈{0,1} and a ∈{0, 1} (Latent positivity of selection) If f(u, w|A=a, S=0)>0 then f(S=1|U=u, W=w, A=a)>0 with probability 1 for a ∈{0,1} and {u, w}∈{𝒰, 𝒲}Assumption <ref> modifies Assumption <ref> by allowing for U in the conditioning event, so that the potential outcomes are equal in distribution between the sample and the target after conditioning on W, U, and A. Similarly, Assumption <ref> requires all possible values of both U and W in the target population to also be possible in the sample. In addition to conditioning on U, Assumptions <ref> and <ref> modify Assumption <ref> and <ref> by requiring their respective conditions for both the treated and untreated, not just the treated. 
We can similarly assess Assumption <ref> graphically: if (as is the case in Figure <ref>) the variables {W, U, A} d-separate S from Y_0 and S from Y_1(a), then Assumption <ref> holds.Because U is unmeasured, Assumptions <ref> and <ref> are insufficient for transportability; they render effects conditional on {W, U, A} constant across settings, but these effects are not themselves identifiable. However, transportability is still possible if U is not an additive effect measure modifier, which we state as follows:(U-homogeneity) [Y_1(1)-Y_1(0)|U, W, S, A] =[Y_1(1)-Y_1(0)|W, S, A]Note that Assumption <ref> does not require that U not be a confounder, only that the treatment effect does not vary across levels of U on the additive scale.Note also the scale-dependence of Assumption <ref>; for example, it cannot hold for both log-transformed Y_t and Y_t on its natural scale, unless there is no effect of treatment or U is unassociated with Y_t. The fact that Assumption <ref> refers to the additive scale follows the fact that our focus is on additive treatment effects; if effects on an alternate scale (such as risk ratios) were of interest, then Assumption <ref> would need to be reformulated to express treatment effect homogeneity on that scale. If effects on the additive scale are homogeneous with respect to U, then additive effects conditional on {W, U, A} (which are constant across settings by Assumptions <ref> and <ref>) do not depend on U, yielding identification of the PATT. This is stated in the following theorem:(Transportability for difference-in-differences) Under Assumptions <ref>-<ref> and <ref>-<ref>, the PATT is identified as [Y_1(1)-Y_1(0)|A=1,S=0] =[ m_1(W)-m_0(W) |A=1, S=0 ]. Moreover, if Assumptions <ref> and <ref> also hold, then the identification for the PATE is given as [Y_1(1)-Y_1(0)|S=0] =[ m_1(W)-m_0(W) |S=0 ], and for the population average treatment effect in the untreated (PATU) as [Y_1(1)-Y_1(0)|A=0, S=0] =[ m_1(W)-m_0(W) |A=0, S=0 ].The proof of Theorem <ref> is provided in Appendix <ref>. Importantly, Theorem <ref> gives identifying formulas for the PATT as well as the PATE and PATU. Notably, Assumptions <ref>-<ref> are only required for identification of the PATE and PATU, not the PATT. This is an important distinction, particularly because one of the key advantages of DID is that identification can hold without having to assume positivity for the unmeasured confounders. As an aside, the addition of Assumptions <ref> and <ref> to the standard identifying assumptions for DID (in our exposition, Assumptions <ref>-<ref>) also renders identifiable the SATE and the sample average treatment effect in the untreated (SATU) (shown in Appendix <ref>). These results are intuitive: under latent exchangeability of selection, the treatment effects in the population are weighted averages of the {W,U,A}-conditional treatment effects in the sample; these conditional effects do not depend on U under U-homogeneity. Moreover, because U represents all unmeasured confounders, differences between the W-conditional SATT, SATE, and SATU can only be caused by effect heterogeneity according to U, which has been ruled out by Assumption <ref>. Therefore the W-conditional SATT, SATE, and SATU all equal one another.Assumptions <ref>-<ref> are not the only set of assumptions that yield identification of effects in the target when U is related to the sampling mechanism, but alternative assumption sets will generally also place restrictions on unmeasured effect heterogeneity. 
For example, supposing that Assumptions <ref>-<ref> hold, it is possible to identify the PATT under a parallel trends assumption for both the treated and untreated counterfactual regimes (i.e., if we added to Assumption <ref> an equivalent expression replacing Y_t(0) and Y_t-1(0) with Y_t(1) and Y_t-1(1)). However, this stronger parallel trends assumption also implies the W-conditional SATT, SATE, and SATU are all equal <cit.>, implying effect homogeneity according to U.

§.§ Application

Table <ref> provides interpretations of each of the 12 Assumptions presented in terms of the applied question. In particular, U represents unmeasured differences between public housing and Section 8 that impact levels of air nicotine independently of the treatment, leading investigators to pursue a DID design. For example, U may represent ventilation (with U=1 denoting high and U=0 denoting low ventilation); we expect public housing buildings to more often have low ventilation and that ventilation impacts air nicotine, but ventilation was not measured in the study. In Figure <ref>, arrows from S into W and A depict measured environmental and societal conditions that lead to differing air quality and differing distributions of public housing vs. Section 8 residence across regions in the US. Since Y_t was log-transformed, Assumption <ref> requires that for buildings with the same levels of outdoor PM_2.5, and separately for public housing and Section 8, the additive effect of a smoke-free housing policy on log-transformed air nicotine (and hence a type of multiplicative effect) is constant for buildings with high and low ventilation. Thus, Assumption <ref> would be violated if high- and low-ventilation buildings had differing baseline levels of air nicotine and the effect of a smoke-free housing policy was to decrease air nicotine by a constant absolute amount (e.g., a constant reduction in parts per million).

§ ESTIMATORS

In this section, we presume identification holds according to one of the sets of assumptions presented in Theorem <ref>, and consider the problem of estimating the statistical parameter

ψ(a^*) = E[ η(W) | A=a^*, S=0 ],  a^*=0,1.

(See equation (<ref>) for the definition of η(·).) From Theorem <ref>, we have that under Assumptions <ref>-<ref> and <ref>-<ref>, ψ(1) equals the PATT; with the addition of Assumptions <ref>-<ref>, ψ(0) equals the PATU and E[ψ(A)|S=0] equals the PATE. To simplify notation in this section, let Δ Y=Y_1-Y_0 denote differenced outcomes, m_a(W)=E[Δ Y|W, S=1, A=a] denote the true outcome-difference model, and g_a,s(W)=f(A=a, S=s|W) denote the true propensity scores for treatment assignment and selection. We use m̂_a and ĝ_a,s to denote estimators of those quantities, which may or may not be correctly specified. A correctly specified model is one that converges in probability to the true population moments.
We also use P_n{ h(O) } = n^-1 ∑_i=1^n h(O_i) to denote the sample average of a function h(·) of the observed data.

§.§ G-computation estimator

A g-computation estimator (also called a substitution estimator or plug-in estimator) is constructed by plugging estimators of the empirical counterparts of the population quantities into the identifying formula in Theorem <ref>:

ψ̂_gcomp(a^*) = P_n{ [ I(A=a^*, S=0) / P_n{ I(A=a^*, S=0) } ] [ m̂_1(W) - m̂_0(W) ] }

The estimator ψ̂_gcomp will be consistent and asymptotically normal if m̂_a is correctly parametrically specified, but not necessarily otherwise.

§.§ Inverse-odds weighted estimator

Instead, one may have more information about the functional form of the propensity scores. The following inverse-odds weighted estimator will be consistent and asymptotically normal if ĝ_a,s are correctly parametrically specified, but not necessarily otherwise:

ψ̂_iow(a^*) = P_n{ [ I(A=1, S=1) ĝ_a^*,0(W) / ( P_n{ I(A=a^*, S=0) } ĝ_1,1(W) ) - I(A=0, S=1) ĝ_a^*,0(W) / ( P_n{ I(A=a^*, S=0) } ĝ_0,1(W) ) ] Δ Y }

§.§ Doubly robust estimator

Lastly, we provide a doubly robust estimator, meaning in this case that the estimator is consistent and asymptotically normal if either ĝ_a,s or m̂_a consists of correctly specified parametric models; it need not be the case that both are correct. A doubly robust estimator for ψ is given by:

ψ̂_dr(a^*) = P_n{ I(A=1, S=1) ĝ_a^*,0(W) / ( P_n{ I(A=a^*, S=0) } ĝ_1,1(W) ) {Δ Y - m̂_1(W)} - I(A=0, S=1) ĝ_a^*,0(W) / ( P_n{ I(A=a^*, S=0) } ĝ_0,1(W) ) {Δ Y - m̂_0(W)} + I(A=a^*, S=0) / P_n{ I(A=a^*, S=0) } {m̂_1(W) - m̂_0(W)} }

In Appendix <ref>, we show the derivation of ψ̂_dr(a^*) as a "one-step" estimator based on the efficient influence function for ψ(a^*), which implies that ψ̂_dr(a^*) is asymptotically efficient. The fact that ψ̂_dr(a^*) corresponds to the efficient influence function also leads to an estimator of the asymptotic variance under the assumption that ĝ_a,s and m̂_a are both correctly specified, which we also provide in Appendix <ref>. In Appendix <ref>, the double robust property of ψ̂_dr(a^*) is demonstrated, and proofs of the consistency of the g-computation and IOW estimators are provided as a by-product of the double robust property. Code to implement the proposed estimators is available at .

§ SIMULATION STUDY

We generated nsims=200 datasets of nobs=10,000 each, according to the following data generating mechanism:

S ∼ Bernoulli(0.5)
U ∼ Bernoulli( logit^-1[-1 + S] )
W ∼ Bernoulli(0.5 - 0.25S)
A ∼ Bernoulli(0.3 + 0.1S + 0.1W + 0.1U)
Y_0 ∼ N(1 + W + U, 0.1)
Y_1 ∼ N(0.5W + U + A + 0.5WA, 0.1)

To see that parallel trends holds in the simulation, note that for a=0,1:

E[Y_1(0)-Y_0(0)|A=a, S=1, W] = E{ [0.5W + U + (0) + 0.5W(0)] - [1 + W + U] | A=a, S=1, W } = E{ [0.5W - 1 - W] | A=a, S=1, W } = -1 - 0.5W

We applied each of the three proposed estimators for the PATT to each dataset with all models correctly specified, all models incorrectly specified, only outcome models misspecified, and both selection and treatment models misspecified. We treated U as an unmeasured variable in all analyses. For correctly specified models, all variables except U were included with the above functional form; in misspecified outcome models we only included main terms for W and A, and in misspecified propensity models we dropped terms for S. The true PATT=1.28 was calculated by generating potential outcomes for 1 million observations.
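For concreteness, the following minimal Python sketch generates one dataset from the data generating mechanism above and computes the doubly robust estimator ψ̂_dr(1) for the PATT. It is an illustrative reconstruction rather than the implementation referenced above: the use of scikit-learn logistic and linear regressions for the nuisance models, the chain-rule factorisation of g_a,s(W) as f(S=s|W)f(A=a|W,S=s), and all variable names are choices made here for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

def simulate(n=10_000):
    # Data generating mechanism from the Simulation Study section.
    S = rng.binomial(1, 0.5, n)
    U = rng.binomial(1, 1 / (1 + np.exp(-(-1 + S))))      # logit^-1(-1 + S)
    W = rng.binomial(1, 0.5 - 0.25 * S)
    A = rng.binomial(1, 0.3 + 0.1 * S + 0.1 * W + 0.1 * U)
    Y0 = rng.normal(1 + W + U, 0.1)
    Y1 = rng.normal(0.5 * W + U + A + 0.5 * W * A, 0.1)
    return S, W, A, Y1 - Y0                                # U is discarded (unmeasured)

S, W, A, dY = simulate()
X = W.reshape(-1, 1)                      # measured covariate; U stays unmeasured

# Outcome-difference models m_a(W) = E[dY | W, S=1, A=a], fit in the study sample.
m = {a: LinearRegression().fit(X[(S == 1) & (A == a)], dY[(S == 1) & (A == a)])
     for a in (0, 1)}
m0, m1 = m[0].predict(X), m[1].predict(X)

# Propensities g_{a,s}(W) = f(A=a, S=s | W), factorised as f(S=s|W) f(A=a|W, S=s).
fit_S = LogisticRegression().fit(X, S)
fit_A = LogisticRegression().fit(np.column_stack([W, S, W * S]), A)  # saturated in (W, S)

def g(a, s):
    pS1 = fit_S.predict_proba(X)[:, 1]
    pA1 = fit_A.predict_proba(np.column_stack([W, np.full_like(W, s), W * s]))[:, 1]
    return (pS1 if s == 1 else 1 - pS1) * (pA1 if a == 1 else 1 - pA1)

# Doubly robust estimator of psi(1), i.e. the PATT under the identification assumptions.
a_star = 1
denom = np.mean((A == a_star) & (S == 0))
psi_dr = np.mean(
    ((A == 1) & (S == 1)) * g(a_star, 0) / (denom * g(1, 1)) * (dY - m1)
    - ((A == 0) & (S == 1)) * g(a_star, 0) / (denom * g(0, 1)) * (dY - m0)
    + ((A == a_star) & (S == 0)) / denom * (m1 - m0)
)
print(f"Doubly robust PATT estimate: {psi_dr:.3f} (true value is about 1.28)")
```

The g-computation estimator corresponds to keeping only the final term of the sample average above, and the inverse-odds weighted estimator to keeping only the two weighting terms with Δ Y in place of the residuals Δ Y - m̂_a(W).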
Results shown in Figure <ref> illustrate that IOW is biased whenever the propensity score for treatment and selection is misspecified, g-computation is biased whenever the outcome model is misspecified, and that the doubly robust estimator is approximately unbiased if either model is correct. Code to implement the simulation is available at .

§ DISCUSSION

This paper introduced an approach to estimating treatment effects in a target population based on DID conducted in a study sample that differs from the target population. Under certain assumptions, some of which may be understood with the aid of causal diagrams, we can identify the PATT, PATE, and PATU. We also propose several estimators of the aforementioned effects in the target population that only require measurement of covariates and/or treatments in the target population, not necessarily outcomes. This approach may be useful when, as is the case in our motivating example involving air nicotine, measurement of outcomes in the target population (in this case, the entire U.S.) may not be feasible, and unobserved confounding is present (in this case, ventilation) but those unobserved confounders do not modify the additive treatment effect. Though our approach assumed the same set of covariates were sufficient for internal and external validity, the methods can easily be adapted to settings where the covariates needed for external validity are a subset of those needed for internal validity. Though our approach has been framed around the problem of transportability (i.e., the study sample is not a subset of the target population), our methods can easily be adapted to generalizability problems (when the study sample is nested in the target population). The approach may therefore also prove useful when a select group of jurisdictions (such as states or provinces) implement a policy, but decisions need to be made at a higher level of organization (such as national governments).

It is important to note that our motivating example was greatly simplified for illustrative purposes; a full analysis to address the motivating question would likely be more complex. For example, one would need to carefully consider how the exclusion criteria may impact the plausibility of assumptions, whether other covariates would need to be measured, and whether differences in building management between public housing in NYC and other areas might violate treatment version irrelevance.
This suggests that, when transportability is of interest in DID studies, investigators should measure and adjust for as many potential confounders as possible (even if not formally needed for parallel trends) in order to reduce the number of variables for which we must make homogeneity assumptions. Future work will seek to develop bounds under violations of effect homogeneity along with methods to assess the sensitivity of conclusions to this key assumption. § PROOF OF IDENTIFICATION RESULTS §.§ Proof of Theorem <ref>First we show identification for the PATT: [Y_1(1) -Y_1(0)|A=1, S=0] = { [Y_1(1)-Y_1(0)|W, U, A=1, S=0] |A=1,S=0 } (iterated expectation) ={ [Y_1(1)-Y_1(0)|W, U, A=1, S=1] |A=1,S=0 } (Assumptions <ref>, <ref>, <ref> & <ref>) ={ [Y_1(1)-Y_1(0)|W, A=1, S=1] |A=1,S=0 } (Assumption <ref>)=( m_1(W) - m_0(W) |A=1,S=0 ) (Assumptions <ref>-<ref>)Next consider identification of the PATU: [Y_1(1) -Y_1(0)|A=0, S=0] = { [Y_1(1)-Y_1(0)|W, U, A=0, S=0] |A=0,S=0 } (iterated expectation) ={ [Y_1(1)-Y_1(0)|W, U, A=0, S=1] |A=0,S=0 } (Assumptions <ref>, <ref>, <ref> & <ref>)={ [Y_1(1)-Y_1(0)|W, U, A=1, S=1] |A=0,S=0 } (Assumptions <ref>, <ref>, <ref> & <ref>) ={ [Y_1(1)-Y_1(0)|W, A=1, S=1] |A=0,S=0 } (Assumption <ref>)=( m_1(W) - m_0(W) |A=0,S=0 ) (Assumptions <ref>-<ref>)Having identified the PATU and the PATT, the PATE is trivially identified:[Y_1(1)-Y_1(0)|S=0]= [ (Y_1(1)-Y_1(0)|A, S=0)|S=0] = [(m_1(W) - m_0(W) |A, S=0)|S=0] = [ m_1(W) - m_0(W) |S=0] §.§ Identifying the SATE and SATU under additional assumptionsUnder Assumptions <ref>, <ref>, and <ref>, the SATE is identified as[Y_1(1) -Y_1(0)|S=1] = {[Y_1(1)-Y_1(0)|W, U, S=1]|S=1} (iterated expectation)= {[Y_1(1)-Y_1(0)|W, U, A=1, S=1]|S=1} (Assumptions <ref>&<ref>)= {[Y_1(1)-Y_1(0)|W, A=1, S=1]|S=1} (Assumption <ref>)= {m_1(W)-m_0(W)|S=1} (Assumptions <ref>-<ref>),and the SATU is identified as:[Y_1(1) -Y_1(0)|A=0, S=1] = {[Y_1(1)-Y_1(0)|W, U, A=0, S=1]|A=0, S=1} (iterated expectation)= {[Y_1(1)-Y_1(0)|W, U, A=1, S=1]|A=0, S=1} (Assumptions <ref>&<ref>)= {[Y_1(1)-Y_1(0)|W, A=1, S=1]|A=0, S=1} (Assumption <ref>)= {m_1(W)-m_0(W)|A=0, S=1} (Assumptions <ref>-<ref>) § EFFICIENT INFLUENCE FUNCTION FOR TRANSPORTED TREATMENT EFFECTSequationsectionTo ease notation, we let Δ Y=Y_1-Y_0. We let P_n{ h(O) }=n^-1∑_i=1^n h(O_i) denote the sample mean of a function h(·) of the observed data O, P dnote the true distribution of the observed data, and P_n denote an estimator of the observed data distribution (which may or may not be the empirical distribution). Here we focus on the statistical estimandψ(a^*, P)= [ m_1(W) - m_0(W)|A=a^*, S=0 ]}= ∫ [m_1(z)-m_0(w)] f(w|A=a^*, S=0)dwwhere we write ψ(a^*, P) as a function of P to emphasize that it depends on the true observed data distribution. §.§ Proof of EIF The efficient influence function for ψ(a^*) is given byD(a^*, O_i, P) =I(A_i=1, S_i=1) g_a^*,0(W_i)/f(A=a^*, S=0)g_1,1(W_i){Δ Y_i - m_1(W_i) } - I(A_i=0, S_i=1) g_a^*,0(W_i)/f(A=a^*, S=0)g_0,1(W_i){Δ Y_i - m_0(W_i) } + I(A_i=a^*,S_i=0)/f(A=a^*, S=0){m_1(W_i) - m_0(W_i)}- ψ(a^*) NOTE: this proof is somewhat non-rigorous because it appears at times to rely on W being discrete and other times continuous. Plan is to re-rewrite using the approach of Hines et al. <cit.>. 
IF{ψ(a^*, P) } =∫( IF{m_1(w)} - IF{ m_0(w)})f(w|A=a^*, S=0)dw + ∫{ m_1(W)-m_0(w) }IF{ f(w|A=a^*, S=0) }dw =∫I(A=1,S=1,W=w)/f(A=1,S=1, W=w) [Δ Y - m_1(w)]f(w|A=a^*, S=0)dw - ∫I(A=0,S=1,W=w)/f(A=0,S=1, W=w) [Δ Y - m_0(w)]f(w|A=a^*, S=0)dw + ∫{ m_1(w)-m_0(w) }I(A=a^*,S=0)/f(A=a^*, S=0){ I(W=w) -f(w|A=a^*, S=0) }dw =I(A=1, S=1)g_a^*, 0(W)/f(A=a^*, S=0)g_1,1(W)[Δ Y - m_1(W)] - I(A=0, S=1)g_a^*, 0(W)/f(A=a^*, S=0)g_0,1(W)[Δ Y - m_0(W)]+ I(A=a^*,S=0)/f(A=a^*,S=0)[m_1(W)-m_0(W)] - ∫{ m_1(w)-m_0(w) }f(w|A=a^*, S=0)dwwhere the first equality uses the fact that the efficient influence function is a derivative and applies the product rule for derivatives, the second substitutes known influence functions for conditional expectations and conditional densities/probability mass functions, and the third rearranges. §.§ One-step estimatorBecause the efficient influence function for ψ(a^*) is given by D(a^*, O_i, P), a one-step estimator <cit.> is given byψ_dr(a^*) =ψ(a^*, P_n) - P_n { D(a^*, O_i, P_n) }= P_n {I(A=1, S=1) g_a^*,0(W)/ P_n( I[A=a^*, S=0] )g_1,1(W){Δ Y - m_1(W) } - I(A=0, S=1) g_a^*,0(W)/P_n( I[A=a^*, S=0] ) g_0,1(W){Δ Y - m_0(W) } + I(A=a^*,S=0)/ P_n( I[A=a^*, S=0] ){m_1(W) - m_0(W)}}where P_n={g_a, s, m_a} with {a, s}∈{0,1}^2. If P_n consists of correctly-specified parametric models, it follows by the central limit theorem that√(n)[ψ_dr(a^*) - ψ(a^*)]N(0, [ D(a^*, O, P_n)^2 ] )wheredenotes convergence in distribution. Thus, an estimator of the asymptotic variance that is consistent when P_n consists of correctly-specified parametric models is given byvar[ψ_dr(a^*)]=P_n {D(a^*, O, P_n)^2 } § DOUBLE ROBUST PROPERTY OF Ψ_DRIn this section, for ease of notation we let m_a≡ m_a(W) and g_a,s≡ g_a,s(W). First, suppose m_am_a^* and g_a, s g_a, s^*, not necessarily assuming g_a, s = g_a, s^* or m_a = m_a^*. Using Slutzky's theorem and continuous mapping theorem we haveψ_dr(a^*)ψ^*(a^*) ≡{I(A=1, S=1) g_a^*,0^*/f(A=a^*, S=0) g_1,1^*{Δ Y - m_1^* } - I(A=0, S=1) g_a^*,0^*/f(A=a^*, S=0)g_0,1^*{Δ Y - m_0^*} + I(A=a^*,S=0)/f(A=a^*, S=0){ m_1^* - m_0^* }}First consider the case where outcome models are correctly specified, so that m_a(w)= m_a^*(w) for all w∈𝒲. By the linearity property of expectations, we haveψ^*(a^*)= {I(A=1, S=1) g_a^*,0^*/f(A=a^*, S=0) g_1,1^*(Δ Y - m_1^*) } - {I(A=0, S=1) g_a^*,0^*/f(A=a^*, S=0)g_0,1^*(Δ Y - m_0^*) } + {I(A=a^*,S=0)/f(A=a^*,S=0)(m_1^*-m_0^*)}Under m_a= m_a^*, we have(<ref>) ={I(A=1,S=1)g_a^*,0^*/f(A=a^*,S=0)g_1,1^* ([Δ Y |A=1,S=1,W] - m_1 ) }=0Likewise, (<ref>)=0. Therefore, ψ^*(a^*)={I(A=a^*,S=0)/f(A=a^*,S=0)(m_1-m_0)}=[m_1-m_0|A=a^*,S=0]=ψ(a^*). Moreover, because (<ref>) is the probability limit of the g-computation estimator, we have that ψ_gcomp(a^*)ψ(a^*) whenever m_am_a.Next, consider the case where treatment and selection models are correctly specified, so that g_a,s(w)=g_a,s^*(w) for all w ∈𝒲. 
Rearranging we haveψ^*(a^*)= { (I(A=1, S=1) g_a^*,0^*/f(A=a^*, S=0) g_1,1^* - I(A=0, S=1) g_a^*,0^*/f(A=a^*, S=0)g_0,1^* )Δ Y } + { ( I(A=0,S=1)g_a^*,0^*/f(A=a^*,S=0)g_0,1^* - I(A=1, S=0)/f(A=a^*, S=0) ) m_0^* } - { ( I(A=1,S=1)g_a^*,0^*/f(A=a^*,S=0)g_1,1^* - I(A=a^*, S=0)/f(A=a^*, S=0) ) m_1^* }Since g_a, s^* = g_a, s, we have(<ref>)= { (I(A=1, S=1) f(A=a^*,S=0|W)/f(A=a^*, S=0) f(A=1,S=1|W) - I(A=0, S=1) f(A=a^*,S=0|W)/f(A=a^*, S=0)f(A=0,S=1|W) )Δ Y }= {I(A=1, S=1) f(A=a^*,S=0|W)/f(A=a^*, S=0) f(A=1,S=1|W)m_1(W) - I(A=0, S=1) f(A=a^*,S=0|W)/f(A=a^*, S=0)f(A=0,S=1|W)m_0(W) }= {[I(A=1, S=1)|W] f(A=a^*,S=0|W)/f(A=a^*, S=0) f(A=1,S=1|W)m_1(W) - E[I(A=0, S=1)|W] f(A=a^*,S=0|W)/f(A=a^*, S=0)f(A=0,S=1|W)m_0(W) }= {f(A=a^*,S=0|W)/f(A=a^*, S=0)[m_1(W) - m_0(W)] }= {I(A=a^*, S=0)/f(A=a^*, S=0)[m_1(W) - m_0(W)] }=ψ(a^*)(<ref>)={ ( [I(A=0, S=1)|W] f(A=a^*,S=0|W)/f(A=a^*,S=0)f(A=0, S=1|W) - I(A=a^*,S=0)/f(A=a^*,S=0) ) m_0^* }={ ( f(A=a^*,S=0|W)/f(A=a^*,S=0) - I(A=a^*,S=0)/f(A=a^*,S=0) ) m_0^* }=0Likewise, (<ref>)={ ( f(A=a^*,S=0|W)/f(A=a^*,S=0) - I(A=a^*,S=0)/f(A=a^*,S=0) ) m_1^* }=0.Thus if g_a, s(w)g_a, s(w), ψ_dr(a^*) ψ(a^*). Moreover, because (<ref>) is the probability limit of the IOW estimator, we have that ψ_IOW(a^*)ψ(a^*) whenever g_a,s g_a,s.
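The double robust property shown above can also be checked numerically. The short Python sketch below reuses the simulation design from the main text (with binary W, so the nuisance functions can be estimated by simple cell means) and deliberately misspecifies one or both nuisance models by ignoring W; this is a different misspecification than the one used in the paper's simulation study, chosen here only to keep the illustration compact and self-contained. Whenever at least one nuisance model is correct, the estimate should remain close to the true PATT of roughly 1.28.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
S = rng.binomial(1, 0.5, n)
U = rng.binomial(1, 1 / (1 + np.exp(-(-1 + S))))
W = rng.binomial(1, 0.5 - 0.25 * S)
A = rng.binomial(1, 0.3 + 0.1 * S + 0.1 * W + 0.1 * U)
dY = rng.normal(0.5 * W + U + A + 0.5 * W * A, 0.1) - rng.normal(1 + W + U, 0.1)

def m_hat(a, correct=True):
    # Outcome-difference model E[dY | W, S=1, A=a]; the "wrong" version ignores W.
    if correct:
        return np.array([dY[(S == 1) & (A == a) & (W == w)].mean() for w in (0, 1)])[W]
    return np.full(n, dY[(S == 1) & (A == a)].mean())

def g_hat(a, s, correct=True):
    # Propensity f(A=a, S=s | W); the "wrong" version ignores W.
    if correct:
        return np.array([((A == a) & (S == s) & (W == w)).mean() / (W == w).mean()
                         for w in (0, 1)])[W]
    return np.full(n, ((A == a) & (S == s)).mean())

def psi_dr(outcome_ok=True, propensity_ok=True, a_star=1):
    m1, m0 = m_hat(1, outcome_ok), m_hat(0, outcome_ok)
    denom = ((A == a_star) & (S == 0)).mean()
    return np.mean(
        ((A == 1) & (S == 1)) * g_hat(a_star, 0, propensity_ok)
        / (denom * g_hat(1, 1, propensity_ok)) * (dY - m1)
        - ((A == 0) & (S == 1)) * g_hat(a_star, 0, propensity_ok)
        / (denom * g_hat(0, 1, propensity_ok)) * (dY - m0)
        + ((A == a_star) & (S == 0)) / denom * (m1 - m0))

print("both nuisances correct :", round(psi_dr(True, True), 3))    # close to 1.28
print("outcome model wrong    :", round(psi_dr(False, True), 3))   # still close to 1.28
print("propensity model wrong :", round(psi_dr(True, False), 3))   # still close to 1.28
print("both wrong             :", round(psi_dr(False, False), 3))  # no longer guaranteed
```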
Peptides play a pivotal role in a wide range of biological activities through participating in up to 40% of protein-protein interactions in cellular processes. They also demonstrate remarkable specificity and efficacy, making them promising candidates for drug development. However, predicting peptide-protein complexes by traditional computational approaches, such as Docking and Molecular Dynamics simulations, still remains a challenge due to the high computational cost, the flexible nature of peptides, and the limited structural information on peptide-protein complexes. In recent years, the surge of available biological data has given rise to the development of an increasing number of machine learning models for predicting peptide-protein interactions. These models offer efficient solutions to address the challenges associated with traditional computational approaches. Furthermore, they offer enhanced accuracy, robustness, and interpretability in their predictive outcomes. This review presents a comprehensive overview of machine learning and deep learning models that have emerged in recent years for the prediction of peptide-protein interactions.

§ INTRODUCTION

Peptides consist of short chains of amino acids connected by peptide bonds, typically comprising 2 to 50 amino acids. One of the most critical functions of peptides is their mediation of 15-40% of protein-protein interactions (PPIs) <cit.>. PPIs play essential roles in various biological processes within living organisms, including DNA replication, DNA transcription, catalysis of metabolic reactions, and regulation of cellular signaling <cit.>. Peptides have become promising drug candidates due to their ability to modulate PPIs. Over the past century, the Food and Drug Administration (FDA) has approved more than 80 peptide drugs <cit.>, with insulin being the pioneering therapeutic peptide used extensively in diabetes treatment. Compared with small molecules, peptide drugs demonstrate high specificity and efficacy <cit.>. Additionally, compared with other classes of drug candidates, peptides have more flexible backbones, enabling better membrane permeability <cit.>. Rational design of peptide drugs is challenging and costly, due to the lack of stability and the large pool of potential target candidates. Therefore, computational methodologies that have proven effective in small molecule drug design have been adapted for modelling peptide-protein interactions (PepPIs). These computational techniques include Docking, Molecular Dynamics (MD) simulations, and machine learning (ML) and deep learning (DL) models. Docking approaches enable exploration of peptide binding positions and poses in atomistic detail, facilitating the prediction of binding affinities <cit.>. However, peptides are inherently flexible and they can interact with proteins in various conformations. These conformations often change during the binding process <cit.>.
MD simulation is another approach to model the peptide-protein interaction. The peptide-protein binding and unbinding process can be studied thermodynamically and kinetically through MD simulations <cit.>. But sampling the complex energy landscapes associated with peptide-protein interactions typically requires intensive computational resources and time. The accuracy of Docking and MD simulations both rely on the knowledge of protein structures, therefore the limited availability of peptide-protein complex structures has restricted the utility of these two approaches. In recent years, ML and DL models have been widely used in the field of computer-aided drug design. These models offer an alternative way to address the inherent challenges associated with Docking and MD simulations in modeling PepPIs. Due to the large amount of available biological data, many ML/DL models are routinely employed to obtain sequence-function relationship, achieving comparable predictive performance to structure-based models. This is because sequence data contains evolutionary, structural and functional information across protein space. Furthermore, compared with Docking and MD simulation, ML/DL models exhibit greater efficiency and generalizability. Trained ML/DL models are capable of predicting PepPIs in a single pass, but it's hard to do large-scale docking and MD simulations due to their resource-intensive and time-consuming nature. Moreover, with the development of interpretable models, DL models are no longer regarded as black boxes; they can provide valuable insights into residue-level contributions to peptide-protein binding predictions. Previous reviews mainly summarized ML/DL models for predicting PPIs <cit.>. They have traditionally categorized computational methods for predicting PPIs into two main classes: sequence-based and structure-based approaches. Sequence-based methods extract information only from sequence data, whereas structure-based methods rely on the information derived from peptide-protein complex structures. Recently, ML/DL models have increasingly integrated both sequence and structure information to enhance their predictive performance. In this review, we systematically summarize the progress made in predicting PepPIs. From ML perspective, we include Support Vector Machine (SVM) and Random Forest (RF). ML models typically require manual feature extraction from sequence and structure datasets. But DL models, including Convolutional Neural Network (CNN), Graph Convolutional Network (GCN) and Transformer, automatically extract multi-layer feature representations from data. To the best of our knowledge, this is the first review to summarize the ML/DL work for specifically predicting PepPIs. Figure <ref> shows the timeline illustrating the evolution of ML/DL methods in the context of PepPIs predictions. Table <ref> summarizes the details of ML/DL models discussed in this review.§ MACHINE LEARNING MODELS FOR PEPTIDE-PROTEIN INTERACTIONS PREDICTIONSupport Vector Machine (SVM). SVM is a powerful ML algorithm commonly employed for classification tasks. The objective of SVM is to determine the optimal hyperplane that effectively separates data points belonging to different classes in the feature space. 
The selection criteria for this optimal hyperplane aims to maximize the margins between the closest points of distinct classes, thereby minimizing misclassification rates.SPRINT-Seq (Sequence-based prediction of Protein–peptide Residue-level INTeraction sites) is the first ML based prediction of peptide-protein binding sites only using sequence features <cit.>. Various types of information were extracted from protein sequence to create a feature dataset, including one-hot encoded protein sequences, evolutionary information <cit.>, predicted accessible surface area <cit.>, secondary structure <cit.>, and physiochemical properties <cit.>.These features were fed into a classification model, SVM, to predict the label for each residue (Figure <ref>). SPRINT-Seq yielded Matthews’ Correlation Coefficient (MCC) of 0.326, sensitivity of 0.64 and specificity of 0.68 on an independent test set. The importance of each feature was also evaluated, the most crucial feature distinguishing binding from non-binding residues is the sequence evolution profile. This sequence-based technique's performance is comparable or better than structure-based models (Peptimap <cit.>, Pepite <cit.>, PinUp <cit.>, VisGrid <cit.>) for peptide-binding sites prediction. To improve the accuracy of sequence-based prediction, Zhao et al. introduced intrinsic disorder as a feature within sequence representation <cit.>. Peptides that participate in peptide-protein interactions exhibit consistent attributes of short linear motifs, primarily found in the intrinsic disordered regions (IDRs). These attributes include short length, flexible structure and weak binding affinity <cit.>.In addition to the novel sequence representation, they designed a consensus-based method called PepBind <cit.>. This method combines SVM classification model with the template-based methods S-SITE and TM-SITE <cit.>. The aggregation of these three individual predictors yielded better performance than all three individual methods and outperformed the first sequence-based method SPRINT-Seq. Random Forest (RF). RF is another supervised ML algorithm for classification and regression, which combines multiple decision trees to create a “forest". During training of a RF for classification, each tree contributes a vote. The forest subsequently selects the classification with the majority of votes as the predicted outcome. All decision trees comprising the RF are independent models. While individual decision trees may contain errors, the collective majority vote of the ensemble ensures more robust and accurate predictions, thereby enhancing the reliability of RF predicted results.A RF model, SPRINT-Str <cit.> (Structure-based Prediction of Residue-level INTeraction), was developed to predict the putative peptide-protein binding residues and binding sites by combining both sequence-based and structure-based information. The sequence information in the input includes Position Specific Scoring Matrix (PSSM) for all amino acids in the protein and entropy calculated based on PSSM. 
Structural information includes Accessible Surface Area (ASA) calculated by DSSP (Define Secondary Structure of Proteins)<cit.>, Secondary Structure (SS) calculated by DSSP,<cit.> half-sphere exposure (HSE) representing the solvent exposure using residue contact numbers in upward and downward hemispheres along with pseudo Cβ–Cα bond,<cit.> and flexibility calculated by iModeS<cit.> to describe the functional motions of proteins.<cit.> A RF classifier was further trained and tested to predict the binding residues. The Density-based Spatial Clustering of Applications with Noise (DBSCAN) algorithm <cit.> was then applied to cluster spatially neighboring binding site residues. The largest cluster was selected as the predicted binding site with a corresponding reliability score. SPRINT-Str achieved robust performance in predicting binding residues with MCC of 0.293 as well as Area Under the Receiver Operating Characteristic Curve (ROC AUC) of 0.782. For instance, when testing the model's performance on peptide binding with the human tyrosine phosphatase protein PTPN4 PDZ domain (PDBID: 3NFK) <cit.>, 15 out of 17 binding residues were correctly predicted, and the predicted binding sites were similar to the actual binding sites. SPRINT-Str is one of the representative ML models that pass structural features into the models and achieves remarkable success in predicting PepPIs. The structures of proteins or peptide-protein complexes can also be directly used as input to ML models. The underlying premise of this approach is that, if a PepPI shares similarities with a certain interaction surface, that well-characterized surface can serve as a template for modeling other PepPIs. The InterPep model <cit.> constructs four steps to better represent this idea: Mass Structural Alignment (MSA), Feature Extraction, RF Classification, and Clustering. A Template Modeling (TM) score larger than 0.5 was used to screen out candidate templates. Overall, InterPep accurately predicted 255 out of 502 (50.7%) binding sites for the top 1 prediction and correctly identified 348 out of 502 (69.3%) binding sites within the top 5 predictions, which demonstrates it's a useful tool for the identification of peptide-binding sites.Ensemble Learning. In the pursuit of a more robust predictive model for protein-peptide binding sites, Shafiee et al. adopted an ensemble-based ML classifier named SPPPred <cit.>. Ensemble learning stands out as an effective strategy for handling imbalanced datasets, as it allows multiple models to collectively contribute to predictions, resulting in enhanced robustness, reduced variance, and improved generalization <cit.>. In the SPPPred algorithm, the ensemble learning technique of bagging <cit.> was employed to predict peptide binding residues. The initial step in bagging involves generating various subsets of data through random sampling with replacement, a process known as bootstrapping. For each bootstrap dataset, distinct classification models are trained, including Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Random Forest (RF). Subsequently, for each residue, the class with the majority of votes across these models is determined as the final predicted label. This ensemble method consistently demonstrates strong and comparable performance on independent test sets, with F1 score of 0.31, accuracy of 0.95, MCC of 0.23. Other State-Of-The-Art (SOTA) Models. 
There are some SOTA bespoke ML models that achieve great success for the predictions of PepPIs, for example, Hierarchical statistical mechanical modeling (HSM).<cit.> A dataset of 8 peptide-binding domain (PBD) families was applied to train and test the HSM model, including PDZ, SH2, SH3, WW, WH1, PTB, TK, and PTP, which cover 39% of human PBDs. The HSM model defines a pseudo-Hamiltonian, which is a machine-learned approximation of Hamiltonian that maps the system state to its energy.<cit.> The predicted PepPI probability is derived from the sum of pseudo-Hamiltonian corresponding to each PBD-peptide sequence pair. In total, 9 models were developed (Figure <ref>a), including 8 separate HSM/ID models (ID means independent domain, one for each protein family) and a single unified HSM/D model covering all families (D means domains). The HSM model remarkably outperformed other ML models such as NetPhorest<cit.> and PepInt.<cit.> By computing the energies from pseudo-Hamiltonian, the HSM model can evaluate and rank the possibilities of different PepPI patterns, facilitating the verification of existing PepPI ensembles and the discovery of new possible PepPI ensembles. Furthermore, the HSM model provides detailed explanations of the peptide-protein binding mechanism, demonstrating a strong interpretability. Using peptide binding with HCK-SH3 domain (PDBID: 2OI3) <cit.> as an example, the HSM model gave a detailed examination and explanation of the peptide-SH3 domain binding mechanism. The “W114 tryptophan switch” binding motif <cit.> was correctly recognized by the HSM model (Figure <ref>b). Additionally, a conserved triplet of aromatic residues W114-Y132-Y87 was previously identified as contributing to the peptide binding with the HCK-SH3 domain (Figure <ref>b).<cit.> However, the HSM model also found that Y89 and Y127 had similar predicted energetic profiles as W114, suggesting a new possible W-Y-Y aromatic triplet (Figure <ref>c). By mapping the predicted interaction energies to the complex structure, the HSM model successfully recognized the repulsive binding regions (shown in magenta) and attractive binding regions (shown in blue) (Figure <ref>d). The predicted attractive binding interface correctly aligns with the previously studied RT-loop and proline recognition pocket,<cit.> demonstrating the strong predictive and interpretative ability of the HSM model. § DEEP LEARNING MODELS FOR PEPTIDE-PROTEIN INTERACTIONS PREDICTION Convolutional Neural Network (CNN). CNN is a class of neural networks that have demonstrated the great success in processing image data <cit.>. The design of CNN was inspired by biological visual system in humans. When humans see an image, each neuron in the brain processes information within its own receptive filed and connects with other neurons in a way to cover the entire image. Similarly, each neuron in a CNN also only processes data in its receptive field. This approach allows CNNs to dissect simpler patterns initially and subsequently assemble them into more complex patterns. A typical CNN architecture consists of three layers: the convolutional layer, the pooling layer, and the fully connected layer. In the convolutional layer, a dot product is computed between two matrices—the first being a kernel with a set of learnable parameters, and the second representing a portion of the receptive field. The kernel slides across the entire image, generating a two-dimensional representation. 
The pooling layer replaces the output of the convolutional layer at each location by deriving a summary statistic of the nearby outputs. This serves to reduce the size of the feature maps, subsequently decreasing training time. Finally, the fully connected layer connects the information extracted from the previous layers to the output layer and eventually classify the input into a label. The biological data could be transformed into an image-like pattern, therefore CNN could be applied to binding site identification. Wardah et al. applied CNNs for identifying peptide-binding sites by introducing a CNN-based method named, Visual <cit.>. In Visual algorithm, features were extracted from protein sequence, like HSE <cit.>, secondary structure <cit.>, ASA <cit.>, local backbone angles <cit.>, PSSM <cit.> andphysicochemical properties <cit.>.These features were stacked horizontally resulting in a feature vector with a length of 38. Visual employs a sliding window approach to capture the local context of each residue. For a given residue, the feature vectors of the three upstream and three downstream residues were combined into a matrix, resulting in a 2-dimensional array with size of 7×38. An illustrative example of the input data in an image-like format is depicted in Figure <ref>, showcasing the center residue Serine (S) within a window size of 7. A 7×38 image is generated as input of CNN classifier. The Visual model comprises two sets of convolutional layers, followed by a pooling layer and a fully connected layer (Figure <ref>).Visual was applied to identify the peptide binding sites of protein and achieved sensitivity of 0.67 and ROC AUC of 0.73. BiteNet_P_p <cit.> is another CNN-based model that converts 3D protein structures to 4D tensor-based representations and feeds them into a 3D CNN to learn the probability of PepPIs and predict the peptide binding sites/domain. The 4D tensor has the first three dimensions corresponding to the x, y, and z dimensions, and the fourth dimension corresponding to 11 channels including atomic densities of 11 different atom types such as aromatic carbon, sulfur, amide nitrogen, carbonyl oxygen, and so forth. These four-dimensional tensor-based representations were then fed into 10 three-dimensional convolutional layers to obtain the probability score of “hot spots”, which are determined as the geometric centers of each segmented peptide-protein interface. This model outperforms SOTA methods with ROC AUC of 0.91 and MCC of 0.49. The model showed promising power for the prediction of peptide-protein binding sites, but the model's performance is limited by the input protein orientation and sensitivity to the protein conformations. Therefore, BiteNet_P_p could be improved by using representations that could handle the protein rotation invariance. Graph Convolutional Network (GCN). Graph based models have been widely used to illustrate the PPIs and PepPIs based on the peptide/protein structures <cit.>. Graph embedding <cit.> includes nodes (vertices) representing different entities and edges (links) representing the relationships between them. For proteins, graphs typically assign amino acids and related information as nodes, with the distances and connections between amino acids represented as edges. This approach allows for the direct observation of information from protein 3D structures without involving hand-crafted features.<cit.> GCNs <cit.> are a type of neural network that can be used to learn graph embeddings. 
Similar to CNNs, GCNs take graph embeddings as input and progressively transform them through a series of localized convolutional and pooling layers, where each layer updates all vertex features. The updated embeddings are passed through a classification layer to obtain the final classification results.<cit.> GCNs have been successfully applied to protein binding site prediction, with models such as PipGCN <cit.> and EGCN <cit.> achieving great success. More recently, a number of GCN-based models have also been applied to PepPI prediction. InterPepRank <cit.> is a representative GCN that has been developed to predict PepPIs. In this model, billions of decoys (computationally generated candidate complex structures) were produced with the PIPER <cit.> docking tool and used as training and testing sets. The peptide-protein complexes were then represented as graphs, with nodes describing individual residues (one-hot encoded residue type, PSSM <cit.>, and self-entropy <cit.>) and one-hot encoded edges denoting the residue interactions. Both node and edge features were then passed through edge convolution layers, with the output from each layer concatenated and fed into a global pooling layer and two dense layers to predict the LRMSD (ligand root-mean-square deviation) of the decoys. InterPepRank achieved a median ROC AUC of 0.86, outperforming other benchmark methods such as PIPER,<cit.> pyDock3,<cit.> and Zrank.<cit.> For example, in the case of a fragment from the center of troponin I (peptide) binding with the C-terminal domain of Akazara scallop troponin C (receptor),<cit.> the peptide was shown to be disordered when unbound and to adopt an ordered α-helical structure upon binding,<cit.> following the induced-fit binding mechanism. Predicting the peptide binding conformation and binding sites for systems with induced-fit mechanisms is extremely challenging. The top 100 decoys predicted by both InterPepRank and Zrank showed that both methods can find the true binding site of the peptide. However, InterPepRank achieved an accuracy of 96% in predicting the peptide as an α-helical structure, while Zrank only achieved an accuracy of less than 50%, with half of the peptide decoys' secondary structures predicted as either random coils or β-sheets. Therefore, InterPepRank is a powerful tool for predicting both binding sites and conformations, even in cases where the peptide is disordered when unbound. This is a significant advantage over other benchmarked energy-based docking methods, which may struggle with disordered structures that are more energetically favorable in unbound states or easier to fit into false-positive binding sites. Struct2Graph <cit.> is a novel multi-layer mutual graph attention convolutional network for structure-based prediction of PPIs (Figure <ref>). Coarse-grained graph embeddings were generated by two GCNs with weight sharing for both components of the protein complexes. These embeddings were then passed through a mutual attention network to extract the relevant features for both proteins and concatenated into a single embedding vector. Residues with large learned attention weights are considered more important and more likely to contribute to the interaction. The vector was further passed into a feed-forward network (FFN) and a final Softmax layer to obtain the probability of a PPI.
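The basic operation shared by the GCN-based predictors discussed above can be made concrete with a single graph-convolution update. The sketch below assumes the common symmetrically normalized formulation and a toy residue contact graph; it is illustrative rather than a reproduction of InterPepRank or Struct2Graph.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution update: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A : (N, N) adjacency matrix of the residue graph (e.g., contacts within a cutoff)
    H : (N, F_in) node features (residue identity, PSSM, etc.)
    W : (F_in, F_out) learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # inverse square-root degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

# toy residue graph: 5 nodes, 8-dimensional features, 4-dimensional output
rng = np.random.default_rng(1)
A = np.triu((rng.random((5, 5)) > 0.5).astype(float), 1)
A = A + A.T                                        # symmetric contact map
H = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 4))
print(gcn_layer(A, H, W).shape)                    # (5, 4) updated node embeddings
```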
Struct2Graph outperformed feature-based ML models and other SOTA sequence-based DL models, achieving an accuracy of 98.89% on a balanced positive/negative dataset and an accuracy of 99.42% on an unbalanced positive/negative dataset (positive:negative = 1:10). Residue-level interpretation was conducted to identify the residues' contributions to PepPIs. For example, the Staphylococcus aureus Phenol Soluble Modulin (PSM) peptide PSMα_1 <cit.> competes with high mobility group box-1 protein (HMGB1) to bind with toll-like receptor-4 (TLR4),<cit.> thus inhibiting HMGB1-mediated phosphorylation of NF-κB.<cit.> For the PSMα_1-TLR4 complex, Struct2Graph demonstrated an impressive accuracy of 92%, and the predicted binding residues aligned with the previously identified TLR4 active binding sites. Notably, peptide residues 2Gly and 10Val were accurately predicted as the peptide binding residues. Furthermore, Struct2Graph’s predictions corroborated the previously studied competitive binding mechanism, indicating that both the PSMα_1 peptide and HMGB1 bind to the same area of TLR4. Interpretable DL graph models have also been employed for PepPI prediction. Recently, an end-to-end geometric DL architecture known as ScanNet (Spatio-chemical arrangement of neighbors neural NETwork) <cit.> was developed, which integrates multi-scale spatio-chemical arrangement information of atoms and amino acids, along with multiple sequence alignments (MSAs), for detecting protein–protein binding sites (PPBS). The model takes the protein sequence, the tertiary structure, and optionally a position-weight matrix from an MSA of evolutionarily related proteins as input. It first extracts all the atomic neighborhood embeddings, which are then passed through several filters to learn atomic-scale representations. To further reduce the dimensions, atom-wise representations are pooled at the amino acid scale, mixed with extracted amino acid information, and fed into trainable filters to yield amino acid-scale representations (Figure <ref>a). With these representations containing multi-scale spatio-chemical information, ScanNet was trained for the prediction of PPBS on 20k proteins with annotated binding sites. When compared with a traditional ML method (XGBoost with handcrafted features) and a pipeline based on structural homology, ScanNet achieved the highest accuracy of 87.7%. While the structural-homology baseline performed almost as well as ScanNet, its accuracy dropped quickly on folds unseen during testing because of its strong dependence on previously characterized homologs. Therefore, it is crucial to understand what ScanNet has actually learned. Specifically, does the network only memorize the training data, or does it really understand the underlying protein-protein binding principles? Detailed visualization and interpretation were explored to illustrate the learned atom-wise and amino acid-wise representations. The network has learned different atomic patterns, such as an N-H-O hydrogen bond (Figure <ref>b), an SH or NH_2 side-chain hydrogen donor surrounded by oxygen atoms (Figure <ref>c), a carbon in the vicinity of a methyl group and an aromatic ring (Figure <ref>d), and so on. The detected pattern with solvent-exposed residues frequently appearing in protein-protein interfaces (Figure <ref>e), such as Arginine (R), was positively correlated with the output probability of PPBS.
However, the pattern with buried hydrophobic amino acids (Figure <ref>f), such as Phenylalanine (F), was negatively correlated with the output probability of PPBS. Interestingly, the pattern with an exposed hydrophobic amino acid surrounded by charged amino acids, which corresponds to the hot-spot O-ring <cit.> architecture of protein interfaces, was positively correlated with the output probability (Figure <ref>g). 2D t-distributed stochastic neighbor embedding (t-SNE) projections further verified that the model has learned various amino acid-level structural features; in particular, 2D t-SNE projections colored by secondary structure (Figure <ref>h) clearly illustrate that the model has learned the secondary-structure information of the training complexes. With this multi-level knowledge of protein structures, ScanNet captures the underlying chemical principles of protein-protein binding. This SOTA interpretable DL model aids in a deeper understanding of PepPIs and PPIs. Attention-based models. Recurrent neural networks (RNNs) and long short-term memory (LSTM) networks are among the most common models for language modeling and machine translation <cit.>. However, both RNNs and LSTMs suffer from the issue of handling long-range dependencies; in other words, they become ineffective when there is a significant gap between relevant information and the point where it is needed. The attention mechanism was introduced to address this limitation, as it enables the modeling of dependencies without being constrained by their distance in the input or output sequences <cit.>. The attention mechanism is one of the most important developments in natural language processing. Vaswani et al. introduced a new form of attention, called self-attention, which relates different positions of a single sequence to obtain a representation of the sequence <cit.>. A new architectural class, the Transformer, was conceived, primarily based on the self-attention mechanism <cit.>. The Transformer consists of multiple encoders and decoders with self-attention layers. The self-attention layer allows the Transformer model to process all input words at once and model the relationships between all words in a sentence. The Transformer architecture led to the development of a new language model, called Bidirectional Encoder Representations from Transformers (BERT) <cit.>. BERT is designed to pre-train deep bidirectional representations from unlabeled text. It utilizes a “masked language model” (MLM) objective, where some tokens from the input are randomly masked, and the model is trained to predict the masked word based on its context from both directions. Numerous deep learning architectures have emerged, either directly employing self-attention mechanisms or drawing inspiration from the Transformer architecture. These advancements have also been carried over to the prediction of PepPIs. Existing ML and DL models for predicting peptide-protein binding sites mainly focus on identifying binding residues on the protein surface. Sequence-based methods typically take protein sequences as inputs, assuming that a protein maintains fixed binding residues across different peptide binders. However, this assumption does not hold true for most cellular processes, as various peptides may interact with distinct protein residues to carry out diverse functions. Structure-based methods require a target protein structure and a peptide sequence, thus limiting their applicability to proteins with available structural data.
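The scaled dot-product self-attention underlying the Transformer-style models discussed above, and several of the predictors described next, can be summarized in a few lines. The following numpy sketch is a minimal single-head version with randomly initialized projections; all array names and sizes are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of residue embeddings.

    X : (L, d) residue embeddings; Wq, Wk, Wv : learned projection matrices.
    Returns the attended representations and the (L, L) attention map, whose
    rows quantify how strongly each residue attends to every other residue.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)          # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 16))                              # 10 residues, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)                               # (10, 8) (10, 10)
```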
To address the above limitations, a novel DL framework for peptide-protein binding prediction, called CAMP <cit.>, was proposed. CAMP takes into account sequence information from both peptides and target proteins, and also detects the crucial binding residues of peptides for peptide drug discovery. CAMP extracts data from different sources, including the RCSB PDB <cit.> and the known peptide drug-target pairs from DrugBank <cit.>. For each PDB complex, the protein-ligand interaction profiler (PLIP) is employed to identify non-covalent interactions between the peptide and the protein, and these interactions are considered as positive samples for training. Additionally, PepBDB <cit.> aids in determining the binding residues of peptides involved in the specific protein-peptide complexes. Various features are extracted based on the primary sequences to construct comprehensive sequence profiles for peptides and proteins. These features include secondary structure, physicochemical properties, intrinsic disorder tendencies, and evolutionary information <cit.>. CAMP utilizes two multi-channel feature extractors to process peptide and protein features separately (Figure <ref>). Each extractor contains a numerical channel for numerical features (the PSSM and the intrinsic disorder tendency of each residue), along with multiple categorical channels for diverse categorical features (raw amino acid, secondary structure, polarity and hydropathy properties). Two CNN modules extract hidden contextual features from peptides and proteins. Self-attention layers are also employed to capture long-range dependencies between residues and assess the contribution of each residue to the final interaction. CAMP applies fully connected layers to all integrated features to predict the interaction between proteins and peptides. In addition to binary interaction prediction, CAMP can identify which residues of the peptide interact with the target protein by adding a sigmoid activation function to the output of the peptide CNN module. Compared with three baseline models (DeepDTA <cit.>, PIPR <cit.>, NRLMF <cit.>), CAMP demonstrates consistently better performance, with increases of up to 10% and 15% in terms of the Area Under the Curve (AUC) and the Area Under the Precision-Recall Curve (AUPR), respectively. To evaluate its ability to identify binding residues of peptides, the predicted label of each residue of the peptide was compared with the real label for four existing peptide binders. The results show that CAMP correctly predicts binding residues and thus provides reliable evidence for peptide drug design. Instead of only applying a self-attention layer, Abdin et al. developed a Transformer-based architecture known as PepNN, enabling both sequence-based (PepNN-Seq) and structure-based (PepNN-Struct) predictions of peptide binding sites <cit.>. PepNN takes representations of a protein and a peptide sequence as inputs and generates a confidence score for each residue, indicating the likelihood of it being part of a binding site. PepNN-Struct learns a contextual representation of a protein structure through the use of graph attention layers (Figure <ref>a). In contrast, PepNN-Seq only takes the protein and peptide sequences as inputs (Figure <ref>b). In the PepNN algorithm, the encoding of the peptide sequence is independent of the protein encoding module, under the assumption that the peptide sequence carries all the necessary information regarding peptide-protein binding.
However, in many scenarios the peptide sequence is not sufficient to determine the bound conformation, as the same peptide can adopt different conformations when bound to different proteins <cit.>. Motivated by this, PepNN incorporates a multi-head reciprocal attention layer that simultaneously updates the embeddings of both the peptide and the protein (Figure <ref>a). This module attempts to learn the interactions between the protein and peptide residues involved in binding. Another challenge in predicting protein-peptide binding sites is the limited availability of protein-peptide complex training data. Protein-protein complex information was added to the training set to overcome the limited-data issue. Notably, entire protein-protein complexes were not included; instead, fragment-protein complexes were used, because the interaction between two proteins can be mediated by a linear segment in one protein that contributes the majority of the interface energy. Pre-training of the model was conducted using a substantial dataset of large protein fragment-protein complexes (717,932) <cit.>. Fine-tuning of the model then took place with a smaller set of peptide-protein complexes (2,828), resulting in a considerable enhancement in predictive performance, particularly for the PepNN-Struct model (Figure <ref>c). PepNN reliably predicts peptide binding sites on an independent test set and on three benchmark datasets from other studies <cit.>. PepNN-Struct surpassed most peptide binding site prediction approaches, achieving a higher AUC score. While PepNN generally exhibits a lower MCC than the SOTA method AlphaFold-Multimer in most cases, its independence from multiple sequence alignments may render PepNN more suitable for modeling synthetic PepPIs. While numerous computational methods have been developed for predicting peptide-protein binding sites, many of them need complex data preprocessing to extract features, often resulting in reduced computational efficiency and predictive performance. Wang et al. developed an end-to-end predictive model, named PepBCL <cit.>, that is independent of feature engineering. This innovative approach leverages pre-trained protein language models to distill knowledge from protein sequences that is relevant to protein structures and functions. Another challenge encountered in identifying protein-peptide binding sites is the issue of imbalanced data. Current works typically construct a balanced dataset by using under-sampling techniques. However, these techniques remove samples from the majority class to match the size of the minority class. In the PepBCL algorithm, a contrastive learning-based module is introduced to tackle this problem. Unlike conventional under-sampling methods, the contrastive learning module adaptively learns more discriminative representations of the peptide binding residues. The PepBCL architecture is composed of four essential modules: a sequence embedding module, a BERT-based encoder module <cit.>, an output module, and a contrastive learning module <cit.>. In the sequence embedding module, each amino acid of the query sequence is encoded into a pre-trained embedding vector, so that the protein sequence is encoded into an embedding matrix. In the BERT-based encoder module, the output from the sequence embedding module undergoes further encoding through BERT to generate a high-dimensional representation vector <cit.>. The representation vector is then passed through a fully connected layer.
In the contrastive learning module, the contrastive loss between any two training samples is optimized to generate more discriminative representations of the binding residues. In the output module, the probability of each residue being in a binding site is calculated (Figure <ref>a). When compared with existing sequence-based methods (SPRINT-Seq <cit.>, PepBind <cit.>, Visual <cit.>, and PepNN-Seq <cit.>), PepBCL achieves a significant improvement in precision by 7.1%, AUC by 2.2%, and MCC by 1.3% over the best sequence-based predictor, PepBind <cit.>. Furthermore, PepBCL also outperforms all structure-based methods (i.e., Pepsite <cit.>, Peptimap <cit.>, SPRINT-Str <cit.>, and PepNN-Struct <cit.>) in terms of MCC. The superior performance of PepBCL indicates that DL approaches can automatically learn features from the protein sequence to distinguish peptide-binding residues from non-binding residues, eliminating the reliance on additional computational tools for feature extraction. When assessing various methods using evaluation metrics, it is observed that recall and MCC tend to be notably low due to the extreme class imbalance in the dataset. This suggests that many true protein-peptide binding residues may be overlooked. However, PepBCL demonstrates improved recall and MCC values, highlighting the effectiveness of the contrastive module in identifying more true peptide-binding residues. This enhancement can be attributed to contrastive learning's ability to extract more discriminative representations, particularly in imbalanced datasets. Figure <ref>b visually demonstrates the learned feature space with and without the contrastive learning module, showcasing a clearer separation of binding and non-binding residues in the feature space. AlphaFold/RoseTTAFold/OmegaFold/ESMFold. Multiple Sequence Alignment (MSA)-based transformer models such as AlphaFold2 (AF2, including the monomer model <cit.> and the multimer model <cit.>) and RoseTTAFold,<cit.> and protein Language Model (pLM)-based models such as OmegaFold <cit.> and ESMFold,<cit.> have demonstrated remarkable success in the in silico folding prediction of monomeric proteins and peptides.<cit.> However, PepPIs are relatively flexible protein complexes, making it challenging to achieve highly accurate predictions. Therefore, benchmarking these SOTA DL techniques on PepPI predictions could provide structural insights into peptide–protein complexes, for example binding affinities, conformational dynamics, and interaction interfaces, thus contributing to the advancement of molecular biology and drug discovery. While AF2 monomer was originally designed for predicting monomeric protein/peptide structures, it has recently been shown by Tsaban et al. to be successful in predicting PepPIs.<cit.> A PepPI can be represented as the folding of a single monomeric chain by connecting the peptide to the C-terminus of the receptor with a poly-glycine linker (Figure <ref>a), which provides a general recipe for peptide–protein docking with the AF2 monomer model (see the code sketch below). This method can not only identify the peptide binding regions but also accommodate binding-induced conformational changes of the receptor. AF2 surpassed RoseTTAFold, since the latter tended to fold the poly-glycine linker into a globular structure or various interacting loops. For a small dataset of 26 PepPI complexes, AF2 achieved a relatively high accuracy (75%) for complexes whose binding motifs have been experimentally characterized.
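A minimal sketch of the poly-glycine linker construction mentioned above is given here. The linker length and the toy sequences are illustrative assumptions, and running the actual structure predictor on the resulting single-chain query is outside the scope of this snippet.

```python
def linked_query(receptor_seq, peptide_seq, linker_len=30):
    """Single-chain query for the poly-glycine linker trick:
    receptor + poly-G linker + peptide, folded as one monomer.

    The 30-residue linker length is an illustrative choice, not a prescription
    from the cited study.
    """
    return receptor_seq + "G" * linker_len + peptide_seq

receptor = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy receptor fragment
peptide = "PPPALPPKKR"                            # toy proline-rich peptide
query = linked_query(receptor, peptide)
print(len(query), query[-12:])                    # this string would be folded by the monomer model
```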
AF2 also outperformed another peptide docking method, PIPER-FlexPepDock (PFPD),<cit.> in terms of both accuracy and speed. Furthermore, accurate predictions were achieved with AF2 pLDDT values above 0.7, further verifying that AF2 monomer can reliably predict PepPIs. However, the prediction accuracy became lower (37%) when tested on a larger dataset (96 complexes), indicating that further improvements are needed for more accurate PepPI predictions by AF2 monomer. The recent release of AF2 multimer has yielded a major improvement in PepPI prediction (Figure <ref>b). Using a set of 99 protein–peptide complexes, Shanker et al. <cit.> compared the performance of AF2 monomer, AF2 multimer, and OmegaFold on PepPI prediction with their peptide docking software AutoDock CrankPep (ADCP).<cit.> The AF2 multimer model, which was trained to predict the interfaces of multimeric protein complexes, achieved 53% accuracy, outperforming OmegaFold (20% accuracy) and ADCP (23% accuracy) (Figure <ref>c). However, the AF2 multimer model is limited to linear peptides, reducing its applicability to cyclized peptides or peptides with non-standard amino acids. Effective selection from the top-ranked poses yielded by both AF2 multimer and the ADCP docking tool was found to further enhance the accuracy to 60%. Therefore, DL protein structure prediction models, especially AF2 multimer, have achieved high accuracy in PepPI predictions, though limitations exist. Combining these SOTA DL models with traditional peptide docking tools could be a future direction for further improving the accuracy of PepPI predictions. Leveraging the highly accurate predictions of protein structures by AF2, Motmaen et al. <cit.> developed a more generalized model for the prediction of PepPIs. The model was built by placing a classifier on top of the AF2 network and fine-tuning the combined network (Figure <ref>d). AF2 was able to achieve optimal performance and generate the most accurate predicted complex structure models for a large dataset of peptide-Major Histocompatibility Complex (MHC) complexes. This was accomplished by aligning the peptide sequence with the peptide-protein crystal structures used as templates. However, AF2's occasional docking of non-binding peptides into the peptide-binding domain of MHC highlighted the need for a clear classification of binder and non-binder peptides in the training of the model. To address this issue, a logistic regression layer that normalizes the AF2 Predicted Aligned Error (PAE) score into a binder/non-binder score was placed on top of AF2. Three types of losses were then combined and applied to further fine-tune the combined model: a structure loss on both peptide and protein for binding peptide-protein complexes, a structure loss on the protein only for non-binding peptide-protein complexes, and a classification loss on the binding/non-binding score. The evaluation of the combined model showed a ROC AUC of 0.97 for Class 1 and 0.93 for Class 2 peptide-MHC interactions. Surprisingly, the fine-tuned model outperformed the previously mentioned HSM model and could also generalize to PDZ domains (a C-terminal peptide recognition domain) and SH3 domains (a proline-rich peptide binding domain), despite being trained and fine-tuned only on the peptide-MHC dataset. Therefore, taking advantage of the accurate prediction of protein structures by AF2 and fine-tuning the model with existing peptide-protein binding data offers a significant boost to PepPI predictions.
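The binder/non-binder normalization of the PAE score described above can be illustrated with a small, self-contained logistic fit. The PAE values, labels, and optimization settings below are hypothetical; in the cited work the classifier is part of the fine-tuned network rather than a standalone post-processing step.

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, steps=5000):
    """Fit p(binder) = sigmoid(w*x + b) with plain gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

# hypothetical interface PAE values and binder (1) / non-binder (0) labels
pae = np.array([3.0, 4.5, 6.0, 12.0, 18.0, 25.0])
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
x = (pae - pae.mean()) / pae.std()               # standardize for stable optimization
w, b = fit_logistic(x, y)
query = (8.0 - pae.mean()) / pae.std()
print(1.0 / (1.0 + np.exp(-(w * query + b))))    # binder probability for an interface PAE of 8
```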
§ CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS   Peptides, which are short proteins consisting of around 2 to 50 amino acids, are known for their flexibility. This characteristic makes it challenging to achieve highly accurate predictions of PepPIs. A variety of SOTA ML and DL models summarized in this review have been designed and applied to predict PepPIs, which are key to de novo peptide drug design. Apart from their well-documented high efficiency and accuracy, ML/DL methods offer several other advantages in the prediction of PepPIs. Compared to docking or MD simulation methods, ML and DL methods offer diverse options for model inputs. DL methods, such as transformers and language models, have been shown to achieve great success in predicting PepPIs based solely on sequence information. Beyond the raw sequence or structure information, ML methods can also incorporate multi-level information such as evolutionary information, secondary structures, solvent accessible surface area, and so forth, which can significantly enhance the accuracy of the prediction. Furthermore, ML/DL methods can provide greater interpretability. The attention mechanism helps reveal the internal dependencies between residues and the contribution of each residue to PepPIs. Graph models capturing multi-scale structural information of peptides and proteins are able to provide insights into the underlying chemical principles and patterns of peptide-protein binding. Moreover, ML/DL techniques exhibit a degree of generalizability. Transfer learning enables models trained on certain peptide-protein binding datasets to generalize to other peptide-protein complexes. Despite their numerous advantages, ML and DL methods also have certain limitations in the prediction of PepPIs, which highlight potential areas for future research. One significant challenge is the issue of imbalanced datasets in the training and testing of PepPI prediction models. Given that peptide binding is typically a rare occurrence, the imbalanced number of positive and negative samples often results in limited performance of ML/DL models due to a poor understanding of the minority (binding) class. Consequently, ML/DL methods for PepPI predictions are normally trained on datasets with a positive-to-negative ratio of 1:1. Both oversampling methods, which duplicate or create new samples, and undersampling methods, which delete or merge samples in the majority class, can enhance model performance on imbalanced classification. Additionally, ML/DL methods often fail in the prediction of PepPIs between intrinsically disordered peptides (IDPs) and proteins. IDPs are abundant in nature; they have flexible and disordered structures when free but adopt stable and well-defined structures upon binding. In these cases, ML/DL methods, particularly structure-based models, tend to fail in predicting binding sites and peptide binding conformations, offering little insight into the binding mechanism. With the enhancement of computing power, high-throughput MD simulations can achieve more accurate predictions of binding sites and peptide/protein conformations, as well as a deeper understanding of the mechanisms of folding and binding, induced fit (binding then folding), or conformational selection (folding then binding). The integration of MD or quantum chemical insights with ML/DL methods could constitute a promising future research direction for PepPI predictions.
Furthermore, some advanced techniques such as transfer learning or one-shot learning models can also be applied to address the low-data issue in PepPI prediction <cit.>. In addition to enhancing the predictive accuracy of established ML and DL models, future research directions should prioritize the enhancement of models' ability to generate novel peptide sequences for specific target proteins of interest, thereby contributing to de novo peptide drug design. An essential route is to fine-tune pre-trained pLMs. Introducing noise and perturbations within the peptide latent space of a pLM, or masking peptide sequences so that the model learns the probability distribution of peptide binders, could be explored to generate entirely new peptide sequences. Additionally, diffusion models offer another avenue for achieving these generative tasks. Such models possess a deeper understanding of the intricate molecular interactions at the atomic level, thus enabling the generation of new peptide sequences based on peptide-protein complex structures. The resulting novel peptide sequences can subsequently be validated through MD simulations and in vitro and in vivo experimental tests. Therefore, developing new generative models or leveraging pre-trained ML/DL models to facilitate peptide generation represents a noteworthy and promising direction for advancing peptide drug design. In conclusion, ML/DL-guided methods have shown significant potential for the accurate prediction of peptide-protein complex structures and binding sites. These SOTA models will undoubtedly further accelerate the process of peptide drug discovery and design. D.S. acknowledges support from the National Institutes of Health, under Award No. R35GM142745 and No. R21AI-167693.
http://arxiv.org/abs/2310.18249v1
{ "authors": [ "Song Yin", "Xuenan Mi", "Diwakar Shukla" ], "categories": [ "q-bio.BM", "q-bio.QM" ], "primary_category": "q-bio.BM", "published": "20231027163606", "title": "Leveraging Machine Learning Models for Peptide-Protein Interaction Prediction" }
[email protected] Gran Sasso Science Institute (GSSI), I-67100 L'Aquila, Italy INFN, Laboratori Nazionali del Gran Sasso, I-67100 Assergi, ItalyGran Sasso Science Institute (GSSI), I-67100 L'Aquila, Italy INFN, Laboratori Nazionali del Gran Sasso, I-67100 Assergi, ItalyUniversità di Napoli “Federico II", I-80126 Napoli, Italy INFN, Sezione di Napoli, I-80126 Napoli, ItalyINFN, Sezione di Genova, via Dodecaneso, I-16146 Genova, ItalyNikhef, 1098 XG Amsterdam, The NetherlandsAstronomical Observatory, University of Warsaw, Al. Ujazdowskie 4, 00-478 Warsaw, Poland Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, PolandUniversità di Napoli “Federico II", I-80126 Napoli, Italy INFN, Sezione di Napoli, I-80126 Napoli, ItalyNicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, PolandUniversità di Napoli “Federico II", I-80126 Napoli, Italy INFN, Sezione di Napoli, I-80126 Napoli, ItalyUniversità di Napoli “Federico II", I-80126 Napoli, Italy INFN, Sezione di Napoli, I-80126 Napoli, ItalyUniversità di Napoli “Federico II", I-80126 Napoli, Italy INFN, Sezione di Napoli, I-80126 Napoli, ItalyEuropean Gravitational Observatory (EGO), I-56021 Cascina, Pisa, ItalyMaastricht University, 6200 MD Maastricht, The Netherlands Nikhef, 1098 XG Amsterdam, The NetherlandsAstronomical Observatory, University of Warsaw, Al. Ujazdowskie 4, 00-478 Warsaw, PolandUniversité Savoie Mont Blanc, CNRS, Laboratoire d’Annecy de Physique des Particules - IN2P3, F-74000 Annecy, FranceUniversité de Strasbourg, CNRS, IPHC UMR 7178, F-67000 Strasbourg, FranceINFN, Sezione di Pisa, I-56127 Pisa, ItalyEuropean Gravitational Observatory (EGO), I-56021 Cascina, Pisa, ItalyNicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, PolandINFN, Sezione di Genova, via Dodecaneso, I-16146 Genova, ItalyUniversité Savoie Mont Blanc, CNRS, Laboratoire d’Annecy de Physique des Particules - IN2P3, F-74000 Annecy, FranceMaastricht University, 6200 MD Maastricht, The Netherlands Nikhef, 1098 XG Amsterdam, The NetherlandsNicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warsaw, PolandAstronomical Observatory, University of Warsaw, Al. Ujazdowskie 4, 00-478 Warsaw, PolandEuropean Gravitational Observatory (EGO), I-56021 Cascina, Pisa, ItalyEuropean Gravitational Observatory (EGO), I-56021 Cascina, Pisa, ItalyTerrestrial gravity perturbations caused by seismic fields produce the so-called Newtonian noise in gravitational-wave detectors, which is predicted to limit their sensitivity in the upcoming observing runs. In the past, this noise was seen as an infrastructural limitation, i.e., something that cannot be overcome without major investments to improve a detector's infrastructure. However, it is possible to have at least an indirect estimate of this noise by using the data from a large number of seismometers deployed around a detector's suspended test masses. The noise estimate can be subtracted from the gravitational-wave data; a process called Newtonian-noise cancellation (NNC). In this article, we present the design and implementation of the first NNC system at the Virgo detector as part of its AdV+ upgrade. It uses data from 110 vertical geophones deployed inside the Virgo buildings in optimized array configurations. We use a separate tiltmeter channel to test the pipeline in a proof-of-principle. The system has been running with good performance over months. 
Design and implementation of a seismic Newtonian-noise cancellation system for the Virgo gravitational-wave detector   Paolo Ruggi   January 14, 2024 ==================================================================================================================== § INTRODUCTION   The detection of gravitational waves (GWs) from a binary black hole coalescence in 2015 <cit.> by the Advanced LIGO detectors <cit.> marked the start of a new era in GW astrophysics. Ever since, the Advanced Virgo (AdV) and Advanced LIGO detectors <cit.> have detected more than a hundred GW signals <cit.> spanning three observing runs. Each of the observing runs was followed by a period of instrument upgrade and commissioning <cit.> aimed at improving the sensitivity and the duty cycle of the detectors. The AdV detector achieved a binary neutron star range of about 60 Mpc towards the end of the third observing run, which lasted until March 2020 <cit.>. Following this, a series of instrument upgrades were planned to achieve the Advanced Virgo Plus (AdV+) sensitivity <cit.>. The design and implementation of an online Newtonian-noise cancellation (NNC) system was one of the planned activities aimed at improving the low-frequency sensitivity of the detector during the first phase of the AdV+ upgrades. From an astrophysical standpoint, improving the low-frequency sensitivity would increase the chances of detecting GW signals from stellar-mass black hole mergers and also increase the detection rate of intermediate-mass binaries. Additionally, better-constrained estimates of parameters like the chirp mass and the effective spin of the binaries are also expected <cit.>. Terrestrial gravity noise, also known as Newtonian noise (NN), originates from the gravitational coupling of ambient density fluctuations to the suspended test masses of the interferometer <cit.>. These density fluctuations can be either atmospheric, due to pressure and temperature fluctuations <cit.>, or subsurface, due to the propagation of seismic waves <cit.>. The former is referred to as atmospheric NN, while the latter is referred to as seismic NN. In this article, we address the cancellation strategies concerning seismic NN. Newtonian noise is expected to be one of the major fundamental limits to the sensitivity of the AdV+ detector in the frequency band 10–20 Hz. Figure <ref> shows the contributions of the several fundamental sources of noise to the AdV+ design sensitivity <cit.>. The contribution of NN to the low-frequency sensitivity of the detector can be estimated either by using analytical models <cit.> or by finite-element simulations of the seismic wavefield in the vicinity of the test mass <cit.>. Both approaches require surface-seismic array studies aimed at deciphering the dominant wave type at the site (surface or body waves) and also at quantifying the contribution of each of the wave types from the different anthropogenic sources of noise. Prior to the design of the NNC for AdV+, several surface-seismic array studies have been conducted inside each of the end buildings <cit.> and outside the interferometer arms <cit.>. Based on the understanding of the propagation characteristics of the seismic waves near the test masses of the interferometer, it is possible to design an optimal surface array of seismometers for NNC. A first such NNC system was proposed in <cit.>, which makes use of the correlation between the ground motion measured by seismometers near the test masses and the main interferometer signal.
The underlying principle for NNC systems makes use of the linear relation between the measured ground motion and the expected Newtonian noise in order to design a Wiener filter corresponding to each of the seismometers <cit.>. Application of such a subtraction scheme was fully simulated in time domain <cit.>. In cases when the NN originates due to a pure Rayleigh wavefield, studies by <cit.> have shown that NNC by even one tiltmeter would achieve NN residuals that would be limited only by the tiltmeter self-noise. The study also shows that for a more pessimistic scenario, when the seismic noise is a mixture of body and surface waves, a modest cancellation by about a factor two would be possible. However, before a noise-cancellation scheme can be implemented and tested, the positions of seismometers near the test-masses need to be determined for optimal NNC. Determination of the optimal locations of the seismometers for NNC is an optimization problem that minimizes a residual, which can be estimated by making use of the cross-correlations between seismometers and that between the seismometers and the expected Newtonian noise. For AdV+, based on prior estimates of correlations between seismometer channels, a Particle Swarm optimizer was used to determine the optimal geometry of the NNC arrays corresponding to each of the end buildings <cit.>.In this paper, we present the results of the cancellation of the tilt signal measured at the North End Building (NEB) of the AdV+ detector <cit.> by using the seismic noise data measured by the NEB NNC array. In section <ref>, we present the seismic wavefield characteristics at the NEB and prove that it is dominated by Rayleigh waves, a case in which the tilt signal can be used as a proxy for the expected NN. In Section <ref>, we present the NN estimates for the AdV+ detector based on array studies and finite element simulations. In Section <ref> we present the optimization results that helped in designing the surface array of seismometers for the NNC system. In Section <ref> we derive the expressions corresponding to the time-domain implementation of the Wiener filter for a Multiple-Input-Single-Output (MISO) system and detail the several signal processing steps implemented in the NNC pipeline. The noise-cancellation performance of the system when using the tilt signal as the target is presented in Section <ref>. Finally, we present the conclusions of our work in Section <ref>. § SITE CHARACTERISTICS The AdV+ NNC array comprises a total of 110 seismic sensors, with 55 sensors deployed at the Central Building (CEB), and 30 sensors each at the NEB and the West End Building (WEB). These sensors were deployed in 2020 in their optimal positions (see Section <ref>) with some refinements of the CEB array a year later. Each sensor is equipped with a vertical geophone with a resonance frequency of 4.5 Hz and a data acquisition system. Sensors creating an array are connected to an SPU (signal processing unit), providing sensor communication, time synchronization, and power. The sensor data acquisition system samples the seismic signal from the geophone at 500 samples/s and sends data after time synchronization to the storage server through the SPU. SPU provides sensors with a modified ethernet standard, which uses a communication speed of 100 Mb/s and only two pairs of ethernet cables. Another pair is used to provide time synchronization pulses generated every 1 s. The last pair provides the power supply. 
These installations followed an initial site-characterization phase with deployments of temporary arrays in 2018, as reported in <cit.>. Figures <ref> (a), (b), and (c) show the positions of the sensors at the CEB, NEB, and WEB, respectively. In this Section, we present the amplitude and phase characteristics of the seismic noise corresponding to the layout adopted after the optimization studies were done. The amplitude characteristics of the seismic noise data presented here were acquired between April 01, 2023 and May 07, 2023 at a sampling frequency of 500 Hz. Seismic data corresponding to each of the geophones are first divided into 1200 s long segments, and the instrument response is deconvolved, which applies a correction to the amplitude and the phase of the seismic data and converts it from voltage to ground velocity. The (single-sided) power spectral densities (PSDs) are then computed as S_m(f) = 2|X_m(f)|^2/T, where X_m(f) is the Fourier transform of the deconvolved seismic data at frequency f of geophone m, and T is the segment duration. The discrete Fourier transform is calculated using a Hamming spectral window <cit.>. The estimated spectral densities for every 1200 s segment are then used to generate histograms with a bin size of 0.5 dB, where the amplitude spectral density is expressed in dB relative to 1 m/s/√(Hz), i.e., 20log_10[ASD/(1 m/s/√(Hz))]. Next, the 10^th, 50^th, and 90^th percentile PSDs are extracted from the histogram. The process is then repeated for all the geophones in the NNC array. The black, blue, and red shaded regions in Figure <ref>(a) show the maximum and minimum of the 10^th, 50^th, and 90^th percentiles of the PSDs for the CEB geophones. The solid black, blue, and red curves show the average of the 10^th, 50^th, and 90^th percentile PSDs. Figures <ref>(b) and (c) show the same quantities for the WEB and NEB, respectively. The seismic noise at each of the buildings varies between 10^-18 and 3×10^-14 m^2/s^2/Hz. A spatial variability of about 20 dB is observed for frequencies between 10 and 15 Hz, and it increases to about 40 dB for frequencies above 20 Hz. Besides broadband noise, several peaks are observed in the PSDs. These (nearly) monochromatic peaks are observed at the rotation frequencies (or their harmonics) of the fans, motors, and pumps that constitute the heating, ventilation, and air conditioning system (HVAC) <cit.>. The HVAC system is necessary for operating the experimental equipment and the clean rooms near the test masses. The spatial variation of the PSDs inside the end buildings can be used to point to sources of noise. However, it gives very limited information on the propagation characteristics of the noise. For that we use the phase differences between the noise measured at different geophones. The first metric we present is the propagation velocity of the seismic waves in different frequency bands. We use a plane-wave beamformer <cit.> and estimate the frequency-domain beampower (FDB) BP as a function of slowness p (inverse of speed) and direction of propagation ϕ, measured anticlockwise from an eastward direction. The FDB for N concurrent segments of seismic data measured at M geophones is estimated as BP(p,ϕ) = 𝐰(p,ϕ,f)𝐒(f)𝐰^†(p,ϕ,f), where 𝐒(f) is the matrix of cross-spectral densities with M× M components S_m_1m_2(f) = (1/N)∑_n=1^N 2X^n_m_1(f)X^n,*_m_2(f)/T, where `*' denotes the complex conjugate.
The vector 𝐰(p,ϕ,f) has m=1,…,M components exp(-2jπ fτ_m(p,ϕ,f)) representing the phase delays for a plane wave to reach geophone m, and j=√(-1). The time delay for a plane wave can be expressed as τ_m(p,ϕ,f) = x_m p cos(ϕ) + y_m p sin(ϕ), where (x_m,y_m) are the coordinates of the m^th geophone. Following equation (<ref>), we estimate the FDB for every 1200 s stretch of data during the period April 01 – May 07, 2023, for v∈[100, 1500] m/s at intervals of 10 m/s and ϕ∈ [0^∘,356^∘] (measured anticlockwise from an eastward direction) at intervals of 4^∘. We divide the 1200 s of data into N=12 segments of length 100 s, which means that the FDB is estimated with frequency bins of width 10 mHz. An average over 20 adjacent bins is subsequently calculated to reduce the data volume. Figure <ref> shows the FDB averaged in the frequency band 10 – 10.2 Hz over all such 1200 s windows during the entire measurement period. This band is characterized by a noise peak at 10.1 Hz, which coincides with the rotation frequency of a fan that is part of the air handling unit (AHU) located in the technical room north of the NEB. The direction estimate from the FDB (Figure <ref>) matches the location of the AHU, and the propagation speed of about 250 m/s is evidence for a surface wave. In order to generate statistics about the wave-propagation attributes, the velocity and direction of propagation corresponding to the maximum FDB are stored for every 0.2 Hz bin, and histograms are generated using all 1200 s windows. We then generate the probability density functions (PDFs) of the velocity and direction of propagation. The results are shown in Figures <ref>(a) and (b). We observe that the seismic waves dominantly propagate with speeds between 100 and 250 m/s, which is characteristic of slowly propagating Rayleigh waves. The velocity PDFs for some frequency bands (Figure <ref>(a)) show multiple peaks between 100 and 250 m/s, which correspond to the different modes of Rayleigh-wave propagation. Most of the noise originates either north or south of the array (Figure <ref>(b)). For example, it is well known that the origin of the noise peaks at 10.1 Hz, 15.2 Hz, and 20.1 Hz is the AHU located north of the NEB, and our analysis also points to the same direction. The second metric we present is the normalized cross-correlation C_ij(f) between geophones i and j, which is computed as C_ij(f) = ℜ(S_ij(f))/√(S_i(f)S_j(f)), for each of the N time segments, where ℜ represents the real part of a complex number. Figure <ref> shows the estimated cross-correlations between all 435 station pairs at the NEB. A strong positive correlation is observed for frequencies below 2 Hz, since these frequencies are characterized by surface waves with large wavelengths <cit.>. For the NNC frequency band between 10 and 30 Hz, several peaks in the cross-correlation spectra are observed. These show both positive and negative cross-correlations and are characteristic of plane waves with wavelengths that are shorter than the array aperture (≈ 25 m for the NEB). These cross-correlations can be explained with an anisotropic plane wave (APW) model.
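A minimal numerical sketch of the plane-wave beamformer defined above shows how the FDB is evaluated for one trial speed and azimuth. The toy array geometry and the synthetic plane wave are illustrative assumptions, and the normalization factors of the cross-spectral densities are omitted since they do not affect the comparison between steering directions.

```python
import numpy as np

def beampower(X, coords, freq, speed, azimuth):
    """Frequency-domain beampower for one trial slowness vector.

    X      : (N_seg, M) complex Fourier coefficients of M geophones at `freq`
    coords : (M, 2) sensor positions in metres; `speed` in m/s, `azimuth` in
             radians anticlockwise from East.
    """
    S = np.einsum('ni,nj->ij', X, X.conj()) / X.shape[0]     # cross-spectral density matrix
    delays = (coords[:, 0] * np.cos(azimuth) + coords[:, 1] * np.sin(azimuth)) / speed
    w = np.exp(-2j * np.pi * freq * delays)                  # steering (phase-delay) vector
    return np.real(w @ S @ w.conj())

# toy example: a 250 m/s plane wave propagating along the +y (northward) direction
rng = np.random.default_rng(3)
coords = rng.uniform(-10, 10, size=(6, 2))                   # six sensors in a 20 m patch
freq, v_true, phi_true = 10.1, 250.0, np.pi / 2
tau = (coords[:, 0] * np.cos(phi_true) + coords[:, 1] * np.sin(phi_true)) / v_true
segments = np.exp(2j * np.pi * rng.random((12, 1)))          # random overall phase per segment
X = segments * np.exp(2j * np.pi * freq * tau)[None, :]
print(beampower(X, coords, freq, 250.0, np.pi / 2),          # matched steering: maximum power
      beampower(X, coords, freq, 600.0, 0.0))                # mismatched steering: lower power
```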
The theoretical cross-correlation value C^APW_ij(f) between geophones located at position vectors r⃗_i and r⃗_j, corresponding to a propagation speed v and propagation angles between ϕ_1 and ϕ_2, is expressed as C^APW_ij(f) = 1/A_P(f)∫_ϕ_1^ϕ_2 dϕ A_APW(ϕ,f)cos(2π f/v(r⃗_i-r⃗_j)·(x̂cosϕ + ŷsinϕ)), where A_APW(ϕ,f) is the amplitude as a function of source azimuth, and x̂, ŷ represent the unit vectors along the east and north directions, respectively. An amplitude normalization factor A_P(f) = ∫_ϕ_1^ϕ_2 dϕ A_APW(ϕ,f) is applied in order to keep the cross-correlation values in the range [-1,1]. Figure <ref>(a) shows the observed cross-correlations at 10.1 Hz as a function of the relative position vector between the geophones. An APW model that reproduces the observed cross-correlations using Eq. (<ref>) with v = 250 m/s and ϕ=[80^∘,110^∘] is shown in Figure <ref>(b). Several frequency bands do not show the strong positive and negative cross-correlations, and can be explained as a mixture of an APW and a Gaussian correlation model. The Gaussian correlation model between stations with position vectors r⃗_i and r⃗_j can be expressed as C^Gauss_ij(f) = A_G(f)exp(-|r⃗_i-r⃗_j|^2/σ^2(v,f)), where σ(v,f)=v/(π f), v is the speed of the propagating wave at frequency f, and A_G is the amplitude of the source. Gaussian correlation models are used to describe the effect of sources of noise within the array and the effect of wave reflection and scattering, which suppress correlations over larger distances compared to APW models. An application of Gaussian correlation models to the Advanced LIGO seismic data has been shown in <cit.>. Figure <ref>(a) shows the spatial distribution of the observed cross-correlations averaged in the frequency band 11 – 13 Hz. We try to model the observed cross-correlations using a mixture of APW and Gaussian correlation models. Figure <ref>(b) shows the estimated cross-correlations corresponding to v = 110 m/s and ϕ=[100^∘,130^∘]. We observe that not all of the cross-correlations are reconstructed accurately using these models, which is a testament to the complexity of the seismic field inside the Virgo buildings. In summary, the two metrics presented for interpreting the phase characteristics of the seismic noise at the NEB point to a dominant contribution from Rayleigh waves for the noise peaks. However, for the broadband noise, analytical models cannot reconstruct the observed correlations, and only a part of the seismic field can be explained with a plane-wave approach. Although we did not show any results from the CEB and WEB, the seismic noise characteristics at these two buildings are similar to those at the NEB. § SIMULATIONS OF VIRGO NN SPECTRA   The Virgo detector incorporates open spaces or recesses under the test masses as part of its clean room system. First calculations of Virgo's NN spectra relied on analytical equations that assume a flat surface, leaving room for improvement in accurately modelling the effects of the recesses. The proper dimensions of the recesses under the input and end test mirrors in Virgo's central and end buildings were taken into account for NN estimation in <cit.>. Here we summarize the main results of these studies. To assess the impact of the recesses, simulations were performed of an isotropic distribution of Rayleigh-wave propagation directions in the vicinity of the test masses. The speed of Rayleigh waves is an important parameter, which was taken from an analysis similar to what is shown in Figure <ref>(a).
The slower the waves (i.e., the shorter the wavelength), the more effective the recess is in reducing NN. Using a finite-element model, the resulting gravity perturbation caused by these Rayleigh waves was integrated. The mathematical formulation and the parameters of the Rayleigh-wave field used in the simulation can be found in <cit.>. Simulations were done for frequencies between 5 Hz and 25 Hz. The integration over finite-element displacements, which gives rise to gravity perturbations δ a(r⃗_0, t) at the position r⃗_0 of the test mass, can be expressed as follows: δ a(r⃗_0, t)= Gρ_0 ∑_i V_i 1/|r⃗_i - r⃗_0|^3·(ξ⃗(r⃗_i, t) - 3(e⃗_i·ξ⃗(r⃗_i, t))·e⃗_i). In this equation, r⃗_i represents the position of the i^th finite element of volume V_i, ξ⃗(r⃗_i, t) is its seismic displacement, and e⃗_i is the unit vector pointing along r⃗_i-r⃗_0. The finite-element model allows us to consider both the gravity perturbations resulting from vertical surface displacement and those from the compression or decompression of the underlying ground medium by summing over these effects. In the estimation of the total NN with contributions from four test masses, as presented in Figure <ref>, the assumption was made that the NN in the 5–25 Hz band is uncorrelated between the test masses. This assumption is certainly valid for NN correlations between the two 3 km distant test masses of an interferometer arm, but it might not be valid between the two test masses inside the CEB. Compared to earlier analyses, it was found that the recess causes NN to be reduced by about a factor of 4 within the frequency range 10–20 Hz when accounting for the observed Rayleigh-wave dispersion. Accordingly, seismic NN is expected to largely fall below the sensitivity targets set for the upcoming observation runs, O4 and O5, with the exception of a few peaks. However, this analysis does not consider the contribution of NN transients associated with stronger transient waves of the seismic field. In fact, given the highly non-stationary character of the seismic field, one should expect frequent perturbation of Virgo data by NN transients in O5 <cit.>. § SYSTEM DESIGN   The main goals of an NNC system design are to reduce NN in the GW detector data to an acceptable level and to do so reliably over months and years of operation. NNC systems will typically require a large number of sensors, which means that sensor failures and problems with data quality, e.g., introduced by electromagnetic disturbances or by the data-acquisition system, must be extremely rare. A single sensor whose data quality degrades for some reason while the NNC system is operational can spoil the NNC performance. Concerning the NNC performance, as long as the goal is to reduce the NN spectral density (instead of subtracting NN transients, which is a different problem not specifically addressed with the current Virgo NNC design), it is determined by the correlations between all the seismometers and between the seismometers and the Virgo GW channel, and by the signal-to-noise ratios of the seismic measurements. The correlations depend on what is actually measured, e.g., horizontal or vertical seismic displacement. If NN is not (yet) observed, the correlations between seismometers and GW detector data must be modeled. We describe below how these correlations are used to calculate optimal array configurations. Finally, it must be decided what type of noise-cancellation filter is used.
In section <ref>, we will present results for a time-invariant, finite-impulse response (FIR) Wiener filter, but adaptive Wiener filters, Kalman filters and other types of time-variant filters can be considered. In fact, we will argue in section <ref> and <ref> that adaptive Wiener filters should not only be expected to achieve better average NNC performance, but also to solve practical issues. §.§ InstrumentationAssuming that seismic NN is dominated by contributions from Rayleigh waves, the natural choice for efficient NNC would be to monitor seismic displacements along the horizontal directions of the interferometer arms since this would lead to higher correlations between seismometers and seismic NN compared to vertical seismometers <cit.>. However, the seismic sensors deployed for the NNC system at Virgo are vertical geophones. The reason is that while our array measurements presented in section <ref> provide strong evidence that Rayleigh waves make the dominant contribution to vertical surface displacement and seismic NN, horizontal surface displacement is expected to have significant contributions from Love waves. Love waves can only produce NN through inhomogeneous geology and non-planar surfaces, which means that in the presence of a dominant Rayleigh-wave field, the main effect of Love waves is to reduce correlations between horizontal seismometers and seismic NN. It is an immense benefit for NNC to deal with only one type of seismic wave <cit.>, and so the choice of vertical geophones measuring Rayleigh-wave displacement is justified. As a note, a tiltmeter was deployed at the NEB to investigate a potential utilization for NNC <cit.>. It must be emphasized that the discussion so far is accurate only if the surface is flat, which is not the case at Virgo around its test masses <cit.>. It has never been analyzed whether a combination of horizontal and vertical sensors or tiltmeters might lead to improved performance of the Virgo NNC system. Another technical design choice was to digitize the geophone data at the sensors, and to send the digitized data to a central data-acquisition unit. The rationale for this decision was that it avoids excess noise from ambient electromagnetic fluctuations coupling into cables and connectors transmitting the seismic data. This design comes with its own risks. For example, the timing and digitization of data at the sensors is a source of noise, and in fact, during the commissioning of the arrays, data-quality issues were noticed and eventually solved by modifying the sensor housing (increasing distance and adding EM shielding between geophone and digitizer). Also, receiving and packaging timed digitized data from many sensors is a complex operation, which can fail. Issues with this operation were identified in the early phase of the commissioning of the NNC system and had to be solved. The only remaining known data-quality issue is coming from loss of a few samples per day created by the digitizers of the individual sensors. However, these sample losses have a negligible impact on noise-cancellation performance. §.§ Optimizing the array configurationVirgo presents a complicated structure: the ground is not homogeneous and there is a basement under each test mass whose floor is 3.5 m below the surface. At the end buildings, the walls of the basement are disconnected by a thin gap (5 cm) from the main building floor, which reduces the transmission of external seismic disturbances <cit.>. 
The entire structure supported by the basement is called the tower platform and it is anchored with 52 m deep pillars to a more stable gravel layer beneath the clay (there are many gravel layers, which alternate with clay in the substrate of soil beneath Virgo <cit.>). These pillars are meant to prevent the sinking of the basement. This complex structure and the presence of local seismic sources entail a seismic field that is not describable with analytical models. Therefore, unlike at LIGO, finding the optimal array to cancel NN in Virgo is not a trivial task and it is necessary to rely on measured seismic correlations <cit.>. Correlation measurements can only be done between a finite set of points on the surface, and the full correlation function between any two points of the seismic field needs to be properly reconstructed. To search for the optimized array configuration, two things are necessary: the reconstructed correlation function between seismometers and the correlation vector between seismometers and GW detector noise. The latter can be either modeled or measured. In the following, we summarize the main results of <cit.>. Any optimization needs a cost function to be minimized. For NNC, a commonly used cost function is the spectral density of the residual noise E(f) left after NN subtraction in the GW data. Expressed as a relative reduction of the NN spectral density N(f), the cost function can be written as <cit.>: ℛ(f) ≡ E(f)/N(f) = 1 - 𝐏^†(f)𝐒^-1(f)𝐏(f)/N(f), where 𝐒(f) is the cross-power spectral density matrix of the seismometers and 𝐏(f) is the cross-power spectral density vector of seismometers and seismic NN. The optimization can be performed by minimizing - 𝐏^†(f)𝐒^-1(f)𝐏(f), which is the only term depending on the positions of the seismometers. The optimization for the NNC has to find the global minimum of ℛ(f) with respect to the seismometer positions. This can be done with a stochastic optimization algorithm, such as Particle Swarm or a Genetic Algorithm. At each step of the stochastic optimization, the value of the cost function (the residual) of a randomly sampled array configuration is evaluated. The next configuration is then chosen following some criteria, which depend on the chosen algorithm <cit.>. This means that the optimizer must be able to calculate the residual, and therefore 𝐒(f) and 𝐏(f), for any possible array configuration. Site-characterization measurements provide correlations only between a finite set of seismometer positions. Some form of interpolation of the correlation measurements is needed to carry out an array optimization. Standard interpolation techniques (linear, cubic, spline) are not accurate enough and in any case cannot be used to extrapolate to sensor positions outside the convex hull of the site-characterization array. More sophisticated Bayesian methods are computationally very expensive. A solution to this problem was presented in <cit.>. Instead of performing an interpolation of measured correlations S(x_i,y_i,x_j,y_j, f) between sensors i, j, it is possible to interpolate the Fourier transform of the signals recorded by all seismic sensors, which only depends on two coordinates. It is then possible to evaluate S(x_i,y_i,x_j,y_j, f) for any pair of sensor positions by exploiting the convolution theorem (see <cit.> for further details).
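To make the cost function concrete, the following Python sketch evaluates the relative residual ℛ(f) at a single frequency from a given cross-power spectral density matrix 𝐒(f) of a candidate array and the corresponding vector 𝐏(f); how these quantities are obtained for arbitrary sensor positions is precisely the interpolation problem discussed here. The snippet is illustrative only:

    import numpy as np

    def relative_residual(S, P, N):
        """Relative residual R(f) = 1 - P^dagger S^{-1} P / N at one frequency.

        S : (M, M) complex cross-power spectral density matrix of the M seismometers
        P : (M,)  complex cross-power spectral density vector seismometers <-> NN
        N : float NN power spectral density
        """
        x = np.linalg.solve(S, P)            # solve S x = P instead of inverting S
        return 1.0 - np.real(np.vdot(P, x)) / N

A stochastic optimizer, such as a particle swarm, would then call such a routine for every candidate array configuration, with 𝐒(f) and 𝐏(f) evaluated from the surrogate model of seismic correlations.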
To further accelerate the optimization process, one first evaluates S(x_i,y_i,x_j,y_j, f) on a denser grid with the method discussed above, and then uses a standard interpolation technique for a rapid evaluation of S(x_i,y_i,x_j,y_j, f) for arbitrary sensor locations. One thereby obtains a surrogate model of seismic correlations, which are also required for a model of 𝐏(ω) <cit.>. This means that the full cost function ℛ(f) is now given as a surrogate model and optimization can be performed. The results of such an optimization are shown in Figure <ref>. The optimization is performed at a specific frequency for an arbitrary (but fixed during optimization) number of sensors. It is also possible to perform an array optimization for broadband NNC. This can be done by building a cost function, for example, as a sum of residuals at different frequencies or by minimizing the maximum of residuals at different frequencies.§ WIENER FILTERING The filtering of seismic data for NNC in the time domain is formulated as a MISO system where the multiple inputs comprise the data from the geophones (reference channels) and the target is the GW channel of the Virgo detector, or in our case, for test purposes, the tiltmeter signal measured at the NEB (see section <ref>). Given the linear relation between the measured seismic data and the expected NN and since the cost function is quadratic in the residuals, the objective is to estimate the optimal linear filter, i.e., Wiener filter, mapping samples of the geophones to a combined NN estimate. Following the Wiener theory <cit.>, the k^ th sample of the error signal in the time domain is expressed as,e(k) = y(k) - 𝐡(k)𝐱(k),where y(k) is the target signal, 𝐡(k) = [𝐡_1(k),𝐡_2(k),⋯,𝐡_𝐌(k)]_(1× ML) is a row vector of the M impulse responses each of length L, and 𝐱(k) = [𝐱_1(k),𝐱_2(k),⋯,𝐱_𝐌(k)]^⊤_(ML× 1) is a column vector of the past L samples of the data measured at each of the M reference channels, and ⊤ is the transpose sign. Hence, every element of the vector 𝐱(k) is a column vector of the form 𝐱_𝐦(k) = [x_m(k),x_m(k-1),...,x_m(k-L+1)]^⊤_(L×1). Similarly, every element of 𝐡(k) can be expanded as 𝐡_𝐦(k) = [h_m(0),h_m(1),⋯,h_m(L-1)]_(1× L). Thus, given the past L samples of the reference data, the optimal impulse response per reference channel can be used to estimate the present sample of the target signal. The optimal set of impulse responses is obtained by solving ∂ E{e^2(k)}/∂𝐡(k) = 0_(ML× 1), and it yields𝐡(k) = 𝐏𝐒^-1.The matrix 𝐒 and row vector 𝐏 can be expressed as𝐒 =[ Φ_11 Φ_12⋯ Φ_1M; Φ_21 Φ_22⋯ Φ_2M;⋮⋮⋱⋮; Φ_M1 Φ_M2⋯ Φ_MM ], and 𝐏 =[ Ψ_y1; Ψ_y2;⋮; Ψ_yM;]^⊤.Each submatrix Φ_ij of the block matrix 𝐒 can be further written as,Φ_ij = [ c_ij(0) c_ij(1) ⋯ c_ij(L-1);c_ij(-1) c_ij(0) ⋯ c_ij(L-2); ⋮ ⋮ ⋱ ⋮; c_ij(1-L) c_ij(2-L) ⋯ c_ij(0) ],where c_ij(τ) is the cross-correlation between the reference data measured at the i^ th and j^ th channels corresponding to the time lag τ. Each element Ψ_ym of the row vector 𝐏 can be expanded as, Ψ_ym = [c_ym(0),c_ym(1), ⋯, c_ym(L-1)]. It should be noted that the cross-correlations c_ij and c_ym used in the Wiener filter calculation are typically averaged over a day of data. This makes the cross-correlations less sensitive to the temporal variability of the seismic data, and the performance of the Wiener filter becomes more robust.The NNC system aims at removing the contribution of seismic NN in the frequency band 10–30 Hz. 
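The construction of the time-domain Wiener filter from estimated correlation functions can be sketched in a few lines; the following snippet (illustrative, without regularization of 𝐒 or the daily averaging of the correlations) assembles the Toeplitz blocks Φ_ij, stacks the vectors Ψ_ym, and solves 𝐡 = 𝐏𝐒^-1:

    import numpy as np
    from scipy.linalg import toeplitz

    def wiener_filter(c_ref, c_target, L):
        """Multichannel FIR Wiener filter of order L.

        c_ref    : dict {(i, j): array} cross-correlations c_ij(tau) of the reference
                   channels for lags tau = -(L-1), ..., 0, ..., L-1 (length 2L-1)
        c_target : dict {m: array} cross-correlations c_ym(tau) for tau = 0, ..., L-1,
                   with channel indices m = 0, ..., M-1
        returns  : (M, L) array of impulse responses h_m
        """
        M = len(c_target)
        S = np.zeros((M * L, M * L))
        P = np.zeros(M * L)
        for i in range(M):
            for j in range(M):
                c = c_ref[(i, j)]                 # lags -(L-1) ... (L-1)
                first_col = c[L - 1::-1]          # c_ij(0), c_ij(-1), ..., c_ij(1-L)
                first_row = c[L - 1:]             # c_ij(0), c_ij(1),  ..., c_ij(L-1)
                S[i * L:(i + 1) * L, j * L:(j + 1) * L] = toeplitz(first_col, first_row)
            P[i * L:(i + 1) * L] = c_target[i]
        h = np.linalg.solve(S.T, P)               # h = P S^{-1} written as a linear solve
        return h.reshape(M, L)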
Hence, for the real-time implementation, several stages of signal preconditioning are implemented to each of the reference channels before the Wiener output ŷ(k) = 𝐡(k)𝐱(k) can be subtracted from the target channel. At the first stage, the reference data acquired at a sampling frequency of f_ R=500 Hz are decimated to f_ D=100 Hz by using a Hamming window FIR (Finite Impulse Response) low-pass filter of order M_ A=100 and stopband frequency f_ S≥ 35 Hz. It is important to note that a M^ th order FIR filter has (M+1) coefficients. At the second stage, the 100 Hz reference data are high-pass filtered using a Hamming window FIR filter of order M_ B=50 and passband frequency f_ p≥ 10 Hz. At the third stage, the Wiener filter of order L = 100 is applied to the low-pass and high-pass filtered data. However, before the Wiener output can be subtracted from the target data, it needs to be upsampled to the sampling frequency of the target data f_ T = 10 kHz (for the tiltmeter signal) or 20 kHz (for the Virgo strain signal). At the final stage, in order to remove the aliasing effect in the upsampled data, the data is low-pass filtered with a Hamming window FIR filter of order M_ C=5000 and f_ S≥ 35 Hz. The upsampled Wiener output is finally subtracted from the target data and we produce the NN cancelled target data.For the real-time implementation of the above-mentioned steps, two things must be addressed. Firstly, a circular buffer of the reference data needs to be maintained. This is due to the fact that the application of a FIR filter of order M to a time series data produces the filtered output starting at the (M+1)^ th sample of the data. Secondly, an FIR filter of order M introduces a time delay of (M/2) samples (M ∈ even positive integer), hence the Wiener output needs to be aligned in time with the target data before subtraction. The NNC application acquires data from the Virgo data stream every second and creates a buffer of N_ B samples per channel. The length of this buffer can be expressed asN_ B = (M_ A/f_ R + M_ B/f_ D + L/f_ D + M_ C/f_ T + 1)f_ R.Corresponding to f_ R = 500 Hz, f_ T = 10 kHz, and the filter orders mentioned previously, a buffer of 1600 samples or 3.2 s (corresponding to f_ R = 500Hz) is necessary. Hence, the first second of output is only produced at the end of the first 4 s. The process then repeats and the buffer is replenished with new data every second. Next, we align the starting point of the one-second long Wiener output which was obtained by processing the 3.2 s of the reference data. The starting sample N_ S of the 3.2 s long target data that aligns with the first sample of the upsampled Wiener output is expressed asN_ S = (M_ Af_ D/2f_ R + M_ B/2 + L)f_ T/f_ D + M_ C/2 + 1.Using the sampling frequencies at the different stages of processing and the respective filter orders, the starting sample equals 16001 or 1.6001 s. Hence, the Wiener output is subtracted from the 10 kHz target data between samples 16001 and 26000. This also implies that the delay introduced in the one second long NN canceled output is 6000 samples or 0.6 s. Figures <ref>(a)-(e) show the shifts in the data at each stage of the NNC application.§ PROOF OF PRINCIPLE The performance of the NNC system was tested by using the tiltmeter signal measured at the NEB as the target data and the geophone signals as the reference data. The location of the tiltmeter inside the NEB is shown with a red star in Figure <ref>(c). 
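Before turning to the tiltmeter-based proof of principle, the buffering arithmetic above can be cross-checked with a few lines of Python; the printed values reproduce the 1600 samples (3.2 s) and the alignment sample 16001 quoted in the text for the tiltmeter case (f_T = 10 kHz):

    # Illustrative cross-check of the buffer length N_B and alignment sample N_S
    f_R, f_D, f_T = 500.0, 100.0, 10_000.0   # raw, decimated and target sampling rates [Hz]
    M_A, M_B, M_C = 100, 50, 5000            # FIR orders: anti-alias, high-pass, anti-imaging
    L = 100                                  # Wiener filter order

    N_B = (M_A / f_R + M_B / f_D + L / f_D + M_C / f_T + 1) * f_R
    N_S = (M_A * f_D / (2 * f_R) + M_B / 2 + L) * f_T / f_D + M_C / 2 + 1

    print(N_B)   # 1600.0 samples, i.e. 3.2 s of reference data at f_R = 500 Hz
    print(N_S)   # 16001.0, first target sample aligned with the Wiener output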
The tiltmeter was initially developed within the Archimedes experiment <cit.> as a beam-balance prototype and essentially functions as a rotational sensor. The resonance frequency of the tiltmeter is about 25 mHz, corresponding to a positioning of the center of mass within 10 μm of the bending point. It is equipped with two different optical readout systems comprising a Michelson interferometer for higher sensitivities and an auxiliary optical lever capable of handling a larger dynamic range. A detailed description of the tiltmeter and an assessment of its sensitivity in a quiet seismic environment can be found in <cit.> and <cit.>, respectively. Figure <ref>(a) shows the 10^th, 50^th, and 90^th percentiles of the PSDs of the tilt signal. These were estimated by dividing the data into 1000 s long segments and correspond to the period May 01 – 08, 2023. The measured tilt in the 10 – 20 Hz band is about 10^-11 – 10^-10 rad/√(Hz) and is comparable to that measured at the LIGO Hanford site <cit.>. The sharp peaks that appear in the PSDs coincide with Rayleigh waves originating from the HVAC system of the NEB, as was previously established in section <ref>. Consequently, strong positive or negative cross-correlations between the tiltmeter and geophone signals are observed at these peaks (figure <ref>(b)). The broader peaks centered at 15, 17, 20, 27, and 34 Hz show moderate correlations between 0.2 and 0.4. The frequency-domain cross-correlations shown in figure <ref>(b) are estimated using equation (<ref>), and by dividing an entire day of data into 30 s long segments. Consequently, the minimum value of significant cross-correlation is 1/√(2880) ≈ 1.81× 10^-2, which is represented with the red dashed line in the figure. The first step in the estimation of the Wiener filter for every reference channel is signal preconditioning. Following the steps and the filter orders mentioned in section <ref>, the tiltmeter signal is first downsampled from 10 kHz to 100 Hz, and the geophone signals are downsampled from 500 Hz to 100 Hz. Next, the data are high-pass filtered with a passband frequency ≥ 10 Hz. The mean of the 10 – 35 Hz signals is then subtracted and the zero-mean signals are used to estimate the cross-correlations between the geophones as well as those between the tiltmeter and the geophones. We use the data measured on May 01, 2023 to estimate the cross-correlations. Finally, following equation (<ref>), the Wiener filter is estimated using all geophones as reference channels. Figure <ref>(a) shows the amplitudes of the Wiener filter for one of the geophone channels. Filter amplitudes for other geophones look similar. We show the unwrapped phase of the filters of all geophones in the frequency domain in figure <ref>(b). Filters of different geophones have different phase characteristics depending on the geophone locations and capture the propagation characteristics of the seismic noise at different points inside the NEB. The Wiener filter estimated from one day of cross-correlations (May 01, 2023) is then used to reconstruct the tiltmeter signal for the next seven days (May 02 – 08, 2023). The blue and red curves in figure <ref>(a) show the PSDs of the tiltmeter signal and of the signal estimated by applying the Wiener filter to the geophone data for a 1000 s stretch. We denote the error between the measured tilt signal τ(k) and the reconstructed tilt signal τ̂(k) as e(k) = τ(k) - τ̂(k).
In the frequency domain, the noise cancellation factor in decibels is then defined as ℛ_dB(f) = 10 log_10(E(f)/T(f)), where E(f) and T(f) represent the PSDs of the error signal e and of the tiltmeter signal τ at frequency f. Since the composition of the seismic field depends on frequency, we further average ℛ_dB(f) over different frequency bands of interest. Figure <ref>(b) shows the temporal evolution of the noise cancellation factor for five different frequency bands. These bands are chosen such that they do not overlap with the sharp spectral peaks. The best noise cancellation factor of about 10 – 15 dB is observed for the bands 13.3 – 15, 15.5 – 19, and 25 – 30 Hz during the daytime. The cancellation factor is less than 5 dB during the nighttime. It is worth noting that the Wiener filter that is estimated using a day of data is dominated by the strong noise cross-correlations that occur during the day, hence a better cancellation is observed during the daytime. However, the lack of noise cancellation during the night does not add noise to the subtracted signal. The worst performance is observed for the band 21 – 25 Hz, where the noise cancellation factor is slightly above 0 dB, implying that it adds only a little noise to the output. Similar to the broadband case, the evolution of the noise cancellation factors for the spectral peaks is shown in figure <ref>(c). As expected, we observe a better noise cancellation factor of more than 15 dB, which is in accord with the strong Rayleigh-wave content of these signals. Unlike in the broadband case, little diurnal variation is observed. These signals are characterized by a strong SNR and a stationary phase, and are little affected by interference from local transient noise sources. The noise cancellation results shown in Figures <ref>(b) and (c) point to moderate temporal variation in the seismic field characteristics at the site. In particular, the noise cancellation in the band 21 – 25 Hz is poor (even enhancing noise). Hence we assessed the performance of the Wiener filter for two cases. In the first case, the Wiener filter was calculated every day and applied to the same day of data. In the second case, we calculated the Wiener filter every 1000 s and applied it to the same data stretch. The blue curve in Figure <ref> shows that no excess noise is added to the output data for the band 21 – 25 Hz when the Wiener filter was updated every day. The performance is further improved by about 5 dB in the case when the filter was updated every 1000 s. This points to variability in the origin and the propagation characteristics of the noise in this band, and indicates that the static Wiener filter, although calculated using a full day of cross-correlations, is not optimal. The variability in the direction of propagation of the noise in the 21 – 25 Hz band is also observed in Figure <ref>(b), where the histograms of the estimated direction of propagation do not point to a persistent source of noise. Although the performance of the static Wiener filter for other frequency bands is satisfactory, it must be noted that updating the Wiener filter a few times every day would further improve the cancellation performance. This pattern is also reflected in the temporal evolution of the filter amplitudes for every channel. If no variation in the amplitude and phase characteristics of the filter were observed, that would imply a stationary seismic field and the noise cancellation would not vary with time.
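The band-averaged cancellation factor used in these figures can be estimated, for instance, from Welch estimates of the two PSDs. The following Python sketch is illustrative; in particular, whether the PSD ratio or the decibel values are averaged within a band is a convention that is not fixed by the definition above:

    import numpy as np
    from scipy.signal import welch

    def cancellation_factor_db(target, error, fs, bands, nperseg=4096):
        """Band-averaged R_dB = 10 log10(E(f)/T(f)) for a list of (f_lo, f_hi) bands."""
        f, T = welch(target, fs=fs, nperseg=nperseg)
        _, E = welch(error, fs=fs, nperseg=nperseg)
        out = {}
        for f_lo, f_hi in bands:
            sel = (f >= f_lo) & (f <= f_hi)
            # one possible convention: average the PSD ratio, then convert to dB
            out[(f_lo, f_hi)] = 10.0 * np.log10(np.mean(E[sel] / T[sel]))
        return out

    # example bands quoted in the text
    bands = [(13.3, 15.0), (15.5, 19.0), (21.0, 25.0), (25.0, 30.0)]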
We estimate the Wiener filter for the same week during which noise cancellation results were shown earlier. Figure <ref> shows the temporal evolution of the magnitude of the Fourier transform of the Wiener filter corresponding to one of the geophones at the NEB. We observe a diurnal variation with higher amplitudes during the day. Hence, the application of a static Wiener filter which has been estimated using a certain period of the data is not the best solution when the noise varies significantly between days. In such cases the optimal filter needs to be adaptive and should be calculated for every new data sample or data stretch, depending on the needs of the cancellation system. A performance analysis of adaptive Wiener filters is beyond the scope of this work, but such schemes are currently under study and their suitability for a real-time application are being tested. § PREPARATIONS FOR FUTURE PERFORMANCE IMPROVEMENTS A question that needs to be answered as part of the risk management and design phase of the Virgo NNC system is what to do if its performance is not good enough, or not as good as expected. The sensor array, data-acquisition system, and data-processing pipeline are well characterized at this point and function as foreseen. The noise-cancellation performance with the tiltmeter as target channel is as expected, i.e, similar performance was observed at the LIGO Hanford site with a tiltmeter and temporary array deployment <cit.>. This means that if the performance of the NNC system is not as good as expected, then the most likely explanation is that the sensors do not provide all the required information about environmental fields to do efficient noise cancellation. Site characterization, NN modeling, and array optimization were the three most important steps to predict NNC performance. Structural vibrations near the test masses were studied with vibration measurements and NN modeling with the conclusion that they make a small contribution to NN. Arrays of microphones were deployed in all buildings (more than 70 microphones in total). These microphones were planned from the beginning as part of the NNC system to cancel NN from the acoustic field <cit.>. They turned out to be less important to NNC after the detector infrastructure team managed to reduce the level of acoustic noise by making changes to the ventilation system <cit.>.Only after these changes, Rayleigh-wave NN became the clearly dominant predicted contribution to NN. Nevertheless, the microphones are being used for site characterization, and they might become valuable in the future to improve the NNC performance. Construction of finite-element models has begun for dynamical simulations of the seismic field, implementing all we know about the structure of buildings, surface, and geology. If NNC does not perform as expected, these simulations would provide important information about missed properties of the seismic field and how to adapt the NNC array to improve performance.Another design modification that promises performance improvements is to switch to time-variant filters, e.g., adaptive Wiener filters. These can take the form of recursive least squares filters, or Kalman filters, etc. We have some indication that correlations of the seismic field change during the day and during the week, and implementing adaptive Wiener filters might improve performance <cit.>. 
Studies are already underway to explore time-variant filters for noise cancellation and to assess their robustness and effectiveness with respect to static Wiener filters. More important design upgrades have been discussed. As can be seen in figure <ref>, the distance of some of the test masses to the building walls is only several meters. The array optimization does not suggest sensor placements outside the buildings, but the calculated optimized arrays are not expected to provide a broadband NN reduction by more than a factor 3 in amplitude <cit.>. For greater noise reduction, it might be necessary to add outdoor sensors to the array. It will also be investigated whether seismic tiltmeters can improve NNC performance as expected for sites with flat surfaces <cit.>.Finally, the most advanced design upgrade might come from a robotic sensor array currently under development <cit.>. A pilot project called Flexible Grid Mapping Tool (FGMT) is being carried out at the Virgo interferometer site with the collaboration of the European Gravitational Observatory and the Gran Sasso Science Institute. The FGMT is part of the European research project AHEAD-2020. The idea is to move the array optimization from a simulated environment to the real system. The robots will move the sensor to their optimal locations, and after a data-taking phase at these positions, an improved array configuration will be calculated and the robots will move to their next locations. This process is meant to repeat until the performance of the NNC system converges to its optimum. The main challenges of this system are to manage the robot charging cycles, to navigate with high accuracy inside the buildings, to provide good ground connection of the accelerometers during measurement phases, and to realize a low-latency communication with the Virgo data-acquisition system and timing signal. § CONCLUSION In this paper, we have presented the design and implementation of the Newtonian-noise cancellation (NNC) system for the Virgo detector as part of its AdV+ technological upgrades <cit.>. It is the first such system in the current global network of GW detectors. The main steps that led to the design are (1) selecting sensors, (2) designing the array data-acquisition system, (3) site characterization, (4) Newtonian-noise (NN) modeling, (5) array optimization, (6) design of the noise-cancellation filter, and (7) defining data-processing steps for the online implementation. The design phase started in 2018 and was completed in 2023 soon followed by the completion of the commissioning of the system. The AdV+ NNC system consists of 110 vertical geophones whose data are digitized directly at the sensor and transmitted to a central data-acquisition unit at each of the three Virgo stations (two end buildings and the central building). These units communicate with the Virgo data-acquisition system and share its timing signal, which is propagated to all the sensors. More than 70 microphones (the number is increasing steadily due to the interest of the noise-hunting team) have been deployed as well to cancel NN from the acoustic fields. The data of these microphones are not yet included in the online NNC pipeline since acoustic NN is predicted to be a smaller contribution to the total NN.The first implementation of the NNC pipeline uses a time-invariant, time-domain (FIR) Wiener filter. We studied its performance in a proof-of-principle with a tiltmeter as target channel. 
We assessed performance limitations and studied their variations with time. The Wiener filter models the PSD of the tiltmeter signal accurately above 15 Hz, but this does not necessarily mean good coherent noise-cancellation performance. For example, in the 21–25 Hz band, the cancellation performance diminishes significantly within a few hours after the data stretch used to calculate the filter. In this band, the performance does not get better again later on, which is different from the clear day-night cycle in performance seen in other bands. A careful study of this interesting observation is needed. In any case, it is to be expected that a time-variant Wiener filter will significantly improve the average noise-cancellation performance coming from these temporal changes. According to our predictions, the AdV+ NNC design meets the requirements for a factor 3 NN reduction in average <cit.>. The predicted performance depends on a model of seismic NN, which has limitations since the site characterization only produced measurements of vertical surface displacement. These limitations could be overcome by doing new measurements with three-axis seismometers; some of those deployed inside boreholes. Simulations based on refined finite-element models will be important as well for future improvements of NNC performance. The impact of NN transients has not been analyzed yet. While future NN observations might lead to better NNC designs, improving NNC designs beyond state-of-the-art will become an ever more challenging problem. An increasing amount of details concerning geology, topography and more extensive surveys of the seismic field, and possibly other NN contributions from structural vibrations and the atmosphere will have to be considered. The experience of the next years will be crucial to assess the true complexity of NNC also with respect to proposed next-generation detectors like the Einstein Telescope <cit.> or Cosmic Explorer <cit.>.§ ACKNOWLEDGEMENTSThe authors gratefully acknowledge the Italian Istituto Nazionale di Fisica Nucleare (INFN), the French Centre National de la Recherche Scientifique (CNRS) and the Netherlands Organization for Scientific Research (NWO), for the construction and operation of the Virgo detector and the creation and support of the EGO consortium. The authors also gratefully acknowledge research support from these agencies as well as by the Spanish Agencia Estatal de Investigación, the Consellera d’Innovació, Universitats, Ciència i Societat Digital de la Generalitat Valenciana and the CERCA Programme Generalitat de Catalunya, Spain, the National Science Centre of Poland and the European Union—European Regional Development Fund; Foundation for Polish Science (FNP), the Hungarian Scientific Research Fund (OTKA), the French Lyon Institute of Origins (LIO), the Belgian Fonds de la Recherche Scientifique (FRS-FNRS), Actions de Recherche Concertées (ARC) and Fonds Wetenschappelijk Onderzoek—Vlaanderen (FWO), Belgium, the European Commission. The authors gratefully acknowledge the support of the NSF, STFC, INFN, CNRS and Nikhef for provision of computational resources.Soumen Koley acknowledges the support through a collaboration agreement between Gran Sasso Science Institute and Nikhef and from the European Gravitational Observatory through a collaboration convention on Advanced Virgo +. The authors also gratefully acknowledge the support of the Italian Ministry of Education, University and Research within the PRIN 2017 Research Program Framework, n. 2017SYRTCN. 
Tomasz Bulik, Marek Cieslar, Mateusz Pietzak, and Mariusz Suchenek are supported by the grant “AstroCeNT: Particle Astrophysics Science and Technology Centre” (MAB/2018/7) carried out within the International Research Agendas programme of the Foundation for Polish Science (FNP) financed by the European Union under the European Regional Development Fund. Tomasz Bulik, Marek Cieslar, Mateusz Pietzak, and Mariusz Suchenek are supported by funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 952480 (DarkWave). Tomasz Bulik, Bartosz Idzkowski were supported by the TEAM/2016-3/19 grant from the Foundation for Polish Science.
The structural characterization of hetero-aggregates in 3D is of great interest, e.g., for deriving process-structure or structure-property relationships. However, since 3D imaging techniques are often difficult to perform as well as time and cost intensive, a characterization of hetero-aggregates based on 2D image data is desirable, but often non-trivial. To overcome the issues of characterizing 3D structures from 2D measurements, a method is presented that relies on machine learning combined with methods of spatial stochastic modeling, where the latter are utilized for the generation of synthetic training data. This kind of training data has the advantage that time-consuming experiments for the synthesis of differently structured materials followed by their 3D imaging can be avoided. More precisely, a parametric stochastic 3D model is presented, from which a wide spectrum of virtual hetero-aggregates can be generated. Additionally, the virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images. The preset parameters of the 3D model together with the simulated STEM images serve as a database for the training of convolutional neural networks, which can be used to determine the parameters of the underlying 3D model and, consequently, to predict 3D structures of hetero-aggregates from 2D STEM images. Furthermore, an error analysis is performed to evaluate the prediction power of the trained neural networks with respect to structural descriptors, e.g. the hetero-coordination number.
Keywords: synthetic HAADF-STEM, nanoparticle aggregate, hetero-aggregate, convolutional neural network, stereological characterization, stochastic 3D model, statistical image analysis
§ INTRODUCTION
The properties of many functional materials depend to a large extent on their structure and chemical composition. Hence, measuring both is mandatory in order to understand and improve their effective properties. An important class of materials are hetero-aggregates, which are compositions of at least two dissimilar classes of primary particles, called, for the sake of simplicity, particles from now on. Properties of hetero-aggregates can be quite different in comparison to aggregates that consist of monodisperse particles. A prominent example in applications concerned with photocatalysis are hetero-aggregates made of titanium dioxide (TiO2) and tungsten trioxide (WO3) <cit.>. The combination of both materials leads to aggregates with hetero-junctions, i.e., points at which two particles made from different materials touch. At such junctions, photogenerated electron-hole pairs are spatially separated, hindering their direct recombination, which results in a higher photocatalytic activity compared to pure TiO2 <cit.>. In order to accurately investigate the properties of hetero-aggregates with imaging techniques, it is essential to resolve the individual particles within the structure. A suitable tool for the characterization of hetero-aggregates, consisting of particles with radii of a few nanometers, is (scanning) transmission electron microscopy, (S)TEM. With a spatial resolution in the sub-nanometer regime, even the atomic structure can be investigated.
However, in conventional STEM only two-dimensional (2D) projection images of the aggregates can be acquired, while information about the third dimension is lost. This problem can be overcome using STEM tomography, where the sample is tilted with respect to the electron beam such that a series of projection images under various projection angles is acquired, see <cit.>. From this series of STEM projection images, the three-dimensional structure can be reconstructed, e.g., with iterative reconstruction techniques <cit.>. The major disadvantage of STEM tomography is the fact that the acquisition of a single tilt series can take several hours, and thus, this method hardly allows for the investigation of a large number of aggregates. Furthermore, many samples do not allow for such a long measurement, as hetero-aggregates and nanoparticles can change their structure and arrangement during extensive exposure to the electron beam, hindering the reconstruction. As opposed to STEM tomography, 2D STEM images can be acquired within a few seconds, allowing for the acquisition of several images of various aggregates in a reasonable amount of time. For this reason, it is desirable to use 2D STEM images in order to characterize the 3D morphology of aggregates. This can be achieved by training neural networks to predict structural properties of 3D hetero-aggregates from 2D STEM images. However, the training of neural networks requires a broad database of pairs of differently structured hetero-aggregates and corresponding 2D STEM images. The experimental acquisition of such a database, i.e., the synthesis of differently structured aggregates and their imaging, would be expensive in both time and resources. Alternatively, simulated image data can be used for training purposes, see <cit.> for a similar approach. In the present paper, a stochastic 3D model for the generation of virtual aggregates and a physics-based STEM model for the simulation of corresponding 2D STEM images are combined in order to provide training data. In other words, methods of stochastic geometry <cit.> are utilized to derive a parametric model for the generation of a wide spectrum of virtual, but realistic aggregates. Additionally, the virtual structures are passed to a physics-based simulation tool in order to generate virtual scanning transmission electron microscopy (STEM) images. The preset parameters of the 3D model together with the simulated STEM images serve as a database for the training of convolutional neural networks, which can be used to predict the parameters of the underlying 3D model and, consequently, to predict 3D structures of hetero-aggregates from 2D STEM images. In the literature, there are already CNN-based approaches that do not use stochastic geometry models to generate stochastically equivalent 3D structures from 2D images <cit.>. However, the presented approach aims to generate such digital shadows by combining a well-established parametric stochastic 3D model and a CNN-based approach. In order to use such a parametric stochastic 3D model to generate digital shadows of hetero-aggregates, appropriate values of the model parameters must be chosen. The focus of the present paper is on this calibration procedure, also called model fitting. More specifically, it is investigated how convolutional neural networks (CNNs) <cit.> can be used to determine the parameters of the stochastic 3D model and, consequently, to generate digital shadows of 3D aggregates from 2D STEM images.
CNNs are a type of artificial neural networks commonly used in image analysis and recognition tasks, see e.g. <cit.>. They consist of multiple layers of neurons that learn to recognize patterns and features in the input data through a calibration process, called training.In conventional spatial stochastic modeling of complex 3D morphologies, the process of model fittingtypically involves several steps, see for example <cit.>. First, image data has to be acquired, preprocessed, and segmented. Subsequently, an appropriate model type is chosen, and its modelparameters are adjusted accordingly using descriptive statistics of the segmented image data. However, the approach considered in the present paper differs from the classical one. On the one hand, the image data does not have to be segmented, which is advantageous since image segmentation can be a time-consuming complex task. Moreover, the model parameters are predicted by the neural networks directly, meaning that thedescriptive statistics are not chosen by hand. This allows for the use of stochastic 3D models with parameters which are not easily estimatable from the image data.In order to evaluate the performance of such a CNN-based approach, structural descriptors of aggregates drawn from the stochastic 3D model with preset parameter values are compared with structural descriptors of aggregatesdrawn from the 3D model with parameter values predicted by the CNN-based approach.However, the structural similarity of the measured image data ofaggregates and image data drawn from the fitted 3D modelstrongly depends on two factors, (i) the suitability of thechosen model typefor the given data, and (ii) the ability of the selected CNN approach to determine the parameters of the stochastic 3D model from 2D STEM image data. More specifically, when analyzing measured image data of experimentally synthesized hetero-aggregates,there might not be any configuration of model parametersthat results in a high-quality fit. In this case, the dissimilarities between the original image data and its digital shadows, generated by the fitted model, cannot necessarily be attributed to the fitting procedure, but rather to the inadequatechoice of the model type. Thus, in the present paper, to be able to attribute these dissimilarities to an inadequate CNN approach, including data preprocessing, model architecture and learning procedure,the same stochastic3D model is used as both the generator for the training data and the model to be fitted.For an adequately designed CNN approach and adequately chosen type of the stochastic 3D model,the digital shadows drawn from the fitted 3D model should be statistically equivalent toexperimentally synthesized aggregates in terms of their 3D structure and chemical composition. Then, these digital shadows can be used as geometry input of (spatially resolved) numerical modeling and simulation,to determine their functional properties,see e.g. <cit.>. In this way, 3D imaging techniques like STEM tomography of the aggregates can be avoidedin order to derive quantitative process-structure or structure-property relationships for hetero-aggregates.Note that by means of such relationships, optimized specifications of process parameterscan be deduced, which lead to hetero-aggregates with desired structures and properties.Digital shadows used forstructure-property optimization are also referred to as digital twins. Their implementationwill be the subject of a forthcoming study. 
The present work is organized as follows: In Section <ref> the CNN-based approach is described to predict the 3D structure of hetero-aggregates from 2D STEM images. In particular, the generation of synthetic training data is explained which are used for the prediction of model parameters. Then, in Section <ref>, the results are presented which have been obtained for various aspects of model parameter prediction. Section <ref> compares the methods developed in the present paperwith analysis tools considered in the literature. Section <ref> concludes.§ METHODS This section provides details how the presented CNN-based approach is built for predicting the 3D structureof hetero-aggregates from 2D STEM images. It comprises two main steps. First, virtual but realistic STEM images are generated from simulated 3D image data.More specifically, syntheticaggregates are drawn from a stochastic 3D modelwith preset model parameters, where the latter describe the aggregation procedure simulated by the model and therefore influence structural properties of the generated aggregates. These aggregates are then used to generate corresponding STEM images by means ofa physics-based simulation tool, see Figure <ref>a. Systematically varying the parameters of the stochastic 3D model provides a wide range of differently structuredaggregates and theirSTEM images. In a second step, visualized in Figure <ref>b,the parameters of the stochastic 3D model together with thesimulated STEM imagesserve as a database forthe training of CNNs, in order to learn how to reconstruct theparameters of the stochastic 3D modelfromSTEM images.For the reconstruction, initially, a CNN extracts features from STEM images which characterize the depicted structure of aggregates in an informative but not necessarily interpretable manner. Then, these features are utilized to predict our interpretable predefined model parameters. For more details, see Section <ref>. This approach is designed to allow for quick and accurate prediction of model parameters for realhetero-aggregates from measured STEM imagesand, consequently, to predict the 3D morphology of hetero-aggregates from 2D STEM images.The quality of the predictor is evaluated with respect to the similarity between predefined and predictedmodel parameters. Recall that interpretable model parameters describe the aggregation procedure simulated by the stochastic model. Thus, a good match between predefined and predicted model parameters can already be an indication for a good structural match between aggregates generated by the model with predefined/predicted parameters. Nevertheless, some structural descriptors (i.e., quantities which characterize the structure of aggregates like hetero-coordination number) may be sensitive with respect to changes in the model parameters.Therefore, the quality of the predictor is further evaluatedby comparing structural descriptors ofaggregates drawn fromstochastic 3D models withpre-defined and predicted parameters, respectively, see Figure <ref>c,d. The structural descriptors considered in this paper, which are chosen due to their relevance inprocess engineering, are displayed in Table <ref>. They are complementary to the features, utilized in the model parameter prediction. Furthermore, these descriptors are interpretable and characterize the 3D structure of the aggregates (whereas the features describe the structure observed in 2D images). 
§.§ Generation of synthetic training data
The use of synthetic training data requires careful attention to ensure that the artificially generated data accurately reflects particularities of experimentally measured data such that a regression model (e.g., a CNN) trained on synthetic data can be extended to new, real-world data. More precisely, if the generation of realistic data is successful, a network trained on this data can be used for applications on real-world data, thus reducing the amount of experimentally measured and labeled training data. In the present study, synthetic training data was generated through a three-step process. First, virtual hetero-aggregates were generated using a stochastic 3D model. Then, using a physics-based simulation tool, STEM intensities were determined based on the material and thickness of the aggregates. Finally, virtual but realistic STEM images were computed by adding noise and other sources of variability to the previously determined STEM intensities. In the following, the stochastic 3D model is introduced and then more details about each of the data generation steps mentioned above are provided.
§.§.§ Stochastic 3D model
In this section, the stochastic 3D model is introduced, which will be used to generate a wide spectrum of virtual hetero-aggregates by varying the values of four different model parameters, denoted by D̃_f, ρ̃, Ñ_W and Ñ_T, where D̃_f ∈ (1,3), ρ̃ ∈ (0,1), and Ñ_W, Ñ_T ∈ ℕ = {1,2,…}. These model parameters control the fractal dimension (D̃_f), the mixing ratio (ρ̃), and the clustering properties (Ñ_W, Ñ_T, the primary cluster sizes of the WO3 and TiO2 clusters) of the hetero-aggregates, respectively. Throughout this paper, a spherical particle is defined as a triplet p=(x,r,l) of particle position x ∈ ℝ^3, radius r ∈ ℝ^+=(0,∞) and label l ∈ {0,1}. Moreover, a hetero-aggregate A, consisting of N particles for some fixed N ∈ ℕ, is a set of connected and non-overlapping spherical particles, i.e., A = {p_i=(x_i,r_i,l_i): x_i ∈ ℝ^3,   r_i ∈ ℝ^+,  l_i ∈ {0,1},  1 ≤ i ≤ N}. In this context, two particles p, p^' ∈ A are said to be connected if for some j ∈ {2,…,N} there is a set of indices {i_1,…,i_j} ⊂ {1,…,N} with p=p_i_1 and p^'=p_i_j, such that ‖x_i_k - x_i_k+1‖ ≤ 1.01(r_i_k + r_i_k+1)   for all k ∈ {1,…,j-1}, where ‖y‖ = √(∑_k=1^3 y_k^2) denotes the Euclidean norm of y ∈ ℝ^3. The prefactor 1.01 in Eq. (<ref>) means that two particles are still considered to be in contact if the distance of their centers exceeds the sum of their radii by at most 1%. Moreover, two particles p=(x,r,l), p^'=(x^',r^',l^') are said to be overlapping if the distance of their centers is smaller than the sum of their radii, i.e., ‖x - x^'‖ < r + r^'. The label l of a particle p=(x,r,l) determines its material. More precisely, in our case, a particle with label l=0 consists of WO3, whereas a particle with label l=1 consists of TiO2. The mixing ratio ρ of an aggregate A is defined as its fraction of particles with label l=0, i.e., ρ(A) = #{p_i ∈ A: l_i=0}/#A, where # denotes cardinality. Notice the distinction in notation between ρ̃ and ρ since these values are not necessarily equal. More precisely, the model parameter ρ̃ can be set to an arbitrary value in the interval [0,1] and it primarily influences the distribution of the structural descriptor ρ of aggregates generated with ρ̃, as explained in more detail later on. Furthermore, the radius of gyration R_g > 0 of an aggregate A is given by R_g = √(∑_i=1^N m_i·‖x_i - c_0‖^2 / ∑_i=1^N m_i),  with   c_0 = ∑_i=1^N m_i x_i / ∑_i=1^N m_i, where m_1, …, m_N > 0 denote the particle masses and c_0 is the aggregate's center of mass.
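Both descriptors can be computed directly from a list of labeled spheres. The following minimal Python sketch assumes, for illustration, particle masses proportional to r^3 (i.e., equal mass density), which is one possible choice for the weights m_i:

    import numpy as np

    def mixing_ratio(labels):
        """Fraction of particles with label 0 (WO3), cf. the definition of rho(A)."""
        labels = np.asarray(labels)
        return np.mean(labels == 0)

    def radius_of_gyration(x, r):
        """Mass-weighted radius of gyration R_g of an aggregate.

        x : (N, 3) particle centers, r : (N,) particle radii.
        Masses are taken proportional to r**3 (equal mass density assumed).
        """
        x = np.asarray(x, dtype=float)
        m = np.asarray(r, dtype=float) ** 3
        c0 = np.sum(m[:, None] * x, axis=0) / np.sum(m)   # center of mass
        return np.sqrt(np.sum(m * np.sum((x - c0) ** 2, axis=1)) / np.sum(m))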
The stochastic 3D model described below is motivated by the idea that hetero-aggregates have a fractal-like structure <cit.>. This fractal-like structure of an aggregate A can be quantified by the so-called fractal dimension D_f, given byD_f = log(N/k_f)/log(R_g/a) ,where k_f>0 is a fractal prefactor, whichis setto 1.3, and a=1/N∑_i=1^N r_i is the mean radius of the particles. For example, aggregates with a fractal dimension D_f close to 1 are arranged in a nearly straight line, whereas those with a fractal dimension D_f close to 3 are composed of densely packed particles. Thus, realistic hetero-aggregates have values for D_f within the interval (1,3), see e.g. <cit.>.Note that thehetero-aggregate model presented in this paper is based on cluster-cluster aggregation, which involves a two-stage process for aggregate formation. In the first stage, primary particles aggregate to form small, homogeneous primary clusters. These primary clusters then undergo a second aggregation stage, leading to larger hetero-aggregates.If an aggregate is homogeneous, i.e., all its particles share the same material, the labels {l_i}_i=1^N will be neglected, and therefore the description of the aggregate A can be compressed toA= {p_i=(x_i,r_i):x_i ∈^3,   r_i ∈^+,  1≤ i≤ N }.In this case, the primary cluster modelis introduced as a random set Φ_N= {P_i: 1 ≤ i≤ N }⊂^3×^+ which models the geometry of small homogeneous clusters of size N for some fixedN ∈, compare <cit.> for earlier work. Here,P_i=(X_i,R_i), whereX_i is a random vector and R_i isa non-negative random variable describing the position and radius of a particle, respectively, for each i ∈{1,…,N}. The random variablesR_1,…,R_N are independent and log-normally distributedwith parameters μ =[12]nmand σ = [3]nm. However, the random vectorsX_1,…,X_Nwhich describe the particle positions, are recursively defined due to the dependency of X_i on X_1, …, X_i-1 and R_1, …, R_i for all 1<i≤ N. This approach ensures that every realization of Φ_N is a set ofconnected and non-overlapping particles, with a predetermined fractal dimensionD_f.Note thatfor technical reasons the random vector X_i can take not only values from ^3, but also the fictitious value ∞. The latter value is used to model invalid particle positions. More precisely, X_1=(0,0,0) and,under the condition that the values x_1,…,x_i and r_1,…,r_i+1 of X_1,…,X_i and R_1,…,R_i+1 are given for some i∈{1,…,N-1}, the random vector X_i+1is uniformly distributed on some set L(A,r_i+1)⊂^3,provided that(∞,r) ∉A for all r ∈^+ and L(A,r_i+1) ≠∅, otherwise X_i+1=∞. Here,A={(x_1,r_1),…,(x_i,r_i)} and L(A,r_i+1) ⊂^3 isthe set of all permissible particle positions x∈^3 such that the set A ∪{(x,r_i+1)} describes a cluster of connected and non-overlapping particles with fractal dimension D_fbeing equal to somepreset value ∈(1,3). In other words, L(A,r_i+1) is the set of positions where a particle of radius r_i+1 can be added to the cluster A without violating the equationD_f=.If no such position exists, X_i+1 will be assigned ∞, indicating that the cluster A cannotbe extended. To draw a sample from the random set Φ_N= {P_i: 1 ≤ i≤ N }⊂^3×^+, the procedure described above is repeated until X_i≠∞ for all i=1,…,N. The primary clusters generated in this way then undergo a second aggregation stage, leading to larger, hetero-aggregates which consist ofprimary clusters for some integer ∈. 
More formally, for some sequence of primary cluster sizes N_1,…,N_,independent random setsΦ^(1)_N_1,…, Φ^(n)_N_ are considered as described above.The cluster Φ^(k)_N_k is assigned a random position C_k in ^3∪{∞} for each k∈{1,…,}, ensuring that realizations of the resulting hetero-aggregates are union sets of connected and non-overlapping spheres, which adhere to a preset fractal dimension D_f=. In the following, the cluster Φ_N_k^(k) which has been shifted by a (random) displacement vector C_k is denotedbyΦ_N_k^(k) + C_k = {(X+C_k,R)(X,R) ∈Φ_N_k^(k)}.Furthermore, for each k∈{1,…,}, the cluster Φ_N_k^(k) + C_k is assigned a (random) label L_k which can be equal to 0 or 1, determining whether the cluster consists of WO3orTiO2. The clusters of label 0 have a size ofwhereas the clusters of label 1 have a size of . These cluster sizes ,∈{1,…,6} are a further model parameter. The labeled version of Φ_N_k^(k)+C_k with the (random) label L_k is denoted by (Φ_N_k^(k)+C_k) × L_k = {(X+C_k,R,L_k)(X,R) ∈Φ_N_k^(k)}.Finally, the stochastic 3D modelΨ_of hetero-aggregates, which consist ofprimary clusters, isgiven by Ψ_ = ⋃_k=1^(Φ^(k)_N_k + C_k) × L_k. Here, the random variables L_1,…,L_, modeling the labels of the primary clusters, are independent and Bernoulli-distributed with (L_k=1)=(1-) /(1-) + for each k∈{1,…,}. Note that the label of a primary cluster does not only determine its material but also its size. Specifically, thesize N_k of thek-th primary clusteris given by N_k =+L_k (-)for each k∈{1,…,}, i.e., a cluster has a size ofif its label is equal to 0, andotherwise. For sufficiently large ∈, according to the law of large numbers, these definitions of L_1,…,L_ andN_1,…,N_ ensure that the mixing ratios ρ ofhetero-aggregates drawn from the stochastic 3D modelΨ_ are approximately equal to the preset value . The random displacement vectors C_1,…,C_ that describe thepositions of primary clusters in the hetero-aggregate model Ψ_ are again defined recursively to ensure that the particles of the random hetero-aggregate are connected and non-overlapping and that the fractal dimensionis maintained. More precisely, C_1 is put to (0,0,0) and, given that ⋃_k=1^i(Φ^(k)_N_k + C_k) =A_1 and Φ_N_i^(i+1)=A_2for some i∈{1,…,-1},the random vector C_i+1 is uniformly distributed on some set L(A_1,A_2)⊂^3, provided that (∞,r) ∉A_1 ∪ A_2for allr ∈^+ and L(A_1,A_2) ≠∅, otherwise C_i+1=∞. In this context, L(A_1,A_2) ⊂^3 is the set of all cluster positions c∈^3 for which the set A_1 ∪ (A_2+c) represents a hetero-aggregateof connected and non-overlapping particles with fractal dimension D_fbeing equal to thepreset value ∈(1,3).The resulting hetero-aggregate model Ψ_ which is described by themodel parameters ,,, can now be used to generate virtual aggregates. These aggregates consist ofprimary clusters with a fractal dimension of , an expected mixing ratio , and have a label-dependent clustering properties mainly influenced byand . Moreover, the model parameters have a multivariate influence on further structural descriptors, e.g. the hetero-coordination number, see Section <ref>.In theory, this can be achieved by drawing samples from Ψ_ under the condition that (∞,r) ∉Ψ_ for all r∈ R^+. However,due to computational limitations, this procedure can only be performed in an approximate sense. In the following section,thiswill be explained in detail. 
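The label mechanism can be illustrated with a short Monte Carlo check. In the sketch below, the success probability is taken as p = (1-ρ̃)Ñ_W / ((1-ρ̃)Ñ_W + ρ̃Ñ_T), which is our reading of the expression given above; with this choice the empirical mixing ratio of a large number of clusters indeed approaches the intended value ρ̃:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_cluster_labels(n_clusters, rho_t, N_W, N_T):
        """Bernoulli labels of primary clusters; label 0 = WO3 (size N_W), label 1 = TiO2 (size N_T)."""
        p1 = (1.0 - rho_t) * N_W / ((1.0 - rho_t) * N_W + rho_t * N_T)   # assumed form of P(L_k = 1)
        return rng.binomial(1, p1, size=n_clusters)

    # Monte Carlo check: the empirical mixing ratio approaches the intended value rho_t
    rho_t, N_W, N_T = 0.3, 2, 5
    labels = sample_cluster_labels(100_000, rho_t, N_W, N_T)
    sizes = np.where(labels == 0, N_W, N_T)
    print(np.sum(sizes[labels == 0]) / np.sum(sizes))   # ~0.3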
§.§.§ Generation of virtual hetero-aggregates
The recursively defined models of primary clusters and hetero-aggregates described above can be used to construct algorithms for drawing samples from these models. More precisely, the simulation starts by selecting an initial particle (or cluster), to which particles (or clusters) are added sequentially. Each additional particle (or cluster) is assigned a random radius (or label) and placed at a uniformly sampled random position in the corresponding set of permissible positions L(A, r_i+1) (or L(A_1, A_2)), to be added to the existing cluster (or aggregate). This procedure is iterated until a desired cluster (or aggregate) size is reached. In the following, the desired size of each aggregate is independently and uniformly selected from the range {20, …, 80}. The sets L(A, r_i+1), L(A_1, A_2) ⊂ ℝ^3 in the stochastic 3D model, from which particle (or cluster) positions are uniformly sampled in order to generate aggregates with a given fractal dimension, are only implicitly defined. Therefore, uniform sampling on these sets is computationally expensive. To enable efficient uniform sampling from both L(A, r_i+1) and L(A_1, A_2), the radii of the particles in the sets A ∪ {p_i+1} and A_1 ∪ A_2 are temporarily replaced by their respective arithmetic mean. Note that this replacement is used exclusively when calculating the fractal dimension within the definitions of these sets. Thus, all permissible positions for the center of mass of the added particle (or cluster) are located on the surface of a sphere around the center of mass of the cluster (or aggregate), the radius d of which is given by d = √( a^2 (N_A+N_C)^2/(N_A N_C)·((N_A+N_C)/k_f)^(2/D̃_f) - (N_A+N_C)/N_C· R_A^2 - (N_A+N_C)/N_A· R_C^2 ), where N_A and N_C denote the number of particles in the aggregate and the cluster to be added, respectively, R_A and R_C are their respective radii of gyration, introduced in <ref>, and a and k_f are the quantities used in the definition of D_f given in Eq. (<ref>), see also <cit.>. Since uniform sampling on the sphere surface can be performed efficiently, by means of rejection sampling, uniform sampling from the modified sets can be done much faster. This procedure results in aggregates with fractal dimensions randomly distributed around the target value D̃_f. For further details on the distribution of the fractal dimension D_f(A) of an aggregate A generated by this model, see Section <ref>. For data acquisition, the four model parameters D̃_f, ρ̃, Ñ_W, Ñ_T of the stochastic hetero-aggregate model are systematically varied, namely the target fractal dimension D̃_f, the intended mixing ratio ρ̃, and the primary cluster sizes Ñ_W and Ñ_T of the two materials. In this manner a broad spectrum of aggregates is obtained, which differ not only in the preset model parameters used for their generation but also in structural descriptors like the ones listed in Table <ref>. The fractal dimension of TiO2-WO3 hetero-aggregates, which form by diffusion-limited cluster-cluster aggregation, is expected to approach the value of D_f = 1.5 for particles with dispersed sizes <cit.>, and D_f = 1.78 for monodispersed particles <cit.>. Furthermore, the fractal dimension is expected to increase when particles start to sinter at their contact points <cit.>. In order to create a large database of differently structured virtual hetero-aggregates and their corresponding STEM images, the model parameter D̃_f was varied in the present work from 1.5 to 2.5 in steps of 0.1. The intended mixing ratio ρ̃ was varied from 0.1 to 0.9 in steps of 0.1 and the primary cluster sizes Ñ_W and Ñ_T were chosen between one and six in steps of one for both materials, see also Table <ref>.
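The placement radius entering this rejection sampling can be evaluated with a few lines of Python; the sketch below simply implements the expression for d given above (monodisperse approximation with mean particle radius a) and raises an error if no admissible radius exists for the given inputs:

    import numpy as np

    def placement_radius(N_A, N_C, R_A, R_C, a, D_f_target, k_f=1.3):
        """Distance d between the centers of mass of the aggregate and the added
        cluster that preserves the target fractal dimension."""
        N = N_A + N_C
        d2 = (a ** 2 * N ** 2 / (N_A * N_C) * (N / k_f) ** (2.0 / D_f_target)
              - N / N_C * R_A ** 2
              - N / N_A * R_C ** 2)
        if d2 <= 0.0:
            raise ValueError("no admissible placement radius for these inputs")
        return np.sqrt(d2)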
Some examples of virtual hetero-aggregates for various values of the model parameters ,,, are visualized in Figure <ref>.§.§.§ Simulation of STEM intensitiesAfter generating virtualhetero-aggregates, reference simulations to calculate the high-angle annular darkfield (HAADF)-STEM intensity of TiO2 and WO3 are conducted as a function of the sample thickness and material density. For that purpose, multi-slice simulations in the frozen-lattice approach <cit.> with the STEMSIM software <cit.> were performed. Simulations were done for the rutile and anatase phases of TiO2 as well as for gamma and delta phases of WO3. Crystal parameters and Debye-Waller factors were taken from <cit.>, and elastic atomic scattering amplitudes from <cit.> were used. The HAADF-STEM intensity for microscope parameters equal to those one would use in experiments with a ThermoFisher 60/300 Spectra microscope were simulated. This machine is equipped with a Cs-corrector for the probe forming system, an X-FEG and SuperXG2 EDXS detectors. A semi-convergence angle of β=[21.1]mrad and an acceleration voltage of [300]kV were set. The simulated HAADF-STEM intensity was obtained by integration of electrons scattered into the annular range between [55]mrad and [250]mrad after application of a detector specific sensitivity curve <cit.>.The HAADF-STEM intensity further depends on the orientation of the crystal with respect to the electron beam. To account for this effect, various orientations for each material and phase of the crystal were simulated.Therefore, the crystal was systematically tilted in nine equal steps from a [100]- towards a [010]-viewing direction. In addition, a random tilt was simulated. The final result is a data set with the HAADF-STEM intensity as a function of the sample thickness for TiO2 and WO3, each in two different crystal phases, each with ten orientations of the crystal with respect to the electron beam. §.§.§ Generation of realistic STEM imagesThe third step combines the HAADF-STEM reference simulations described above with the virtual 3D hetero-aggregates. STEM images show 2D projections of the aggregates, seeFigure <ref>.Therefore, the projections of the individual particles along one direction are computed, as usual in electron microscopy, the electron beam direction and hence the projection direction is referred to as z-direction. This results in thickness maps for the individual particles. Using the reference simulations, these thickness maps are translated into maps of the HAADF-STEM intensities. To this end, for each particle, the reference simulation of the respective material was chosen in a random phase and a random orientation of the crystal with respect to the electron beam. In an aggregate, which extends several tens of nanometers in z-direction, not all particles appear in focus. Only particles with centers located at height z=[0]nm are in focus as the electron beam is focused on this plane. To account for this effect, each HAADF-STEM map of the individual particles is convolved with a Gaussian kernel. More precisely, for a particle located at height z, the standard deviation σ_STEM of the Gaussian kernel with which the corresponding HAADF-STEM map is convoluted is chosen as σ_STEM = |z|·tan(β), where β=[21.1]mrad is the semi-convergence angle, assuming a conical beam shape. Then, blurred HAADF-STEM maps of individual particles are summed up to obtain the artificial HAADF-STEM image of the hetero-aggregate. 
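The assembly of the synthetic image from the per-particle HAADF maps can be summarized in a short Python sketch; function and variable names are illustrative, and the per-particle maps are assumed to be given on a common pixel grid:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    BETA = 21.1e-3   # semi-convergence angle [rad]

    def synthesize_haadf(particle_maps, z_centers, pixel_size_nm):
        """Sum the defocus-blurred HAADF maps of the individual particles.

        particle_maps : list of 2D arrays, HAADF intensity map of each particle
        z_centers     : list of particle center heights z [nm] (focus plane at z = 0)
        pixel_size_nm : pixel size of the maps [nm]
        """
        image = np.zeros_like(particle_maps[0], dtype=float)
        for pmap, z in zip(particle_maps, z_centers):
            sigma_nm = abs(z) * np.tan(BETA)          # sigma_STEM = |z| tan(beta)
            image += gaussian_filter(pmap, sigma=sigma_nm / pixel_size_nm)
        return image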
Finally, shot noise according to a typical electron dose of [149]electrons/Å^2 <cit.> and scan noise according to a possible typical beam displacement of [0.01]nm <cit.> were applied. §.§ Statistical analysis and processing of simulated dataIn this section the need for the usage of neural networks is explained, addressing some problems connected with the reconstruction of preset values of the modelparameters, , ,,based on virtual aggregates drawn fromthe stochastic 3D model.Further complications in this reconstruction task arise when using 2D STEM data instead of the full 3D geometry of the aggregates, through a loss of information.Therefore, it is explained how image processing methods can be used to simplify the extraction of information from STEM data. §.§.§ Estimating the parameters of the stochastic 3D model One of the challenges associated with predicting theparametersof the stochastic 3D modelfrom STEM images is that some model parameters are even imperceivable from the 3D structure of a virtual aggregatefrom which the corresponding STEM imageis determined. This is due to the simplifying assumptions made within the simulation process of the 3D model and its stochastic nature, see Section <ref>,resulting in empirical values of the model parameters slightlydiffering from the preset ones. For example, the fractal dimensioncomputed by means of Eq. (<ref>) for a virtual hetero-aggregate A might differ from the preset value of the model parameter . More precisely, in the simulation ofhetero-aggregates,the radii r_1,…,r_N of particles considered in Eq. (<ref>) are replaced by their arithmetic mean(r_1+…+r_N)/N, see Section <ref>. Also, the mixing ratio ρ of an aggregate A computed by means of Eq. (<ref>) can deviate from the model parameter , due to the randomly chosen labels of primary clusters, modeled by the Bernoulli-distributed random variables L_1,…,L_. For example, the first aggregate in Figure <ref> has an expected (preset) mixing ratio of =0.3, but the actual mixing ratio ρ computed from Eq. (<ref>) is ρ=9/41≈ 0.22. Recall that, in order to distinguish between these quantities, the vector ofmodel parameters used to generate A is referred to as θ = (, ,,), while (A) and (A) describe the empirical fractal dimension and mixing ratio of the aggregate A, computed from Eqs. (<ref>) and (<ref>),respectively. Figure <ref> illustrates the discrepancy between preset model parameters and the empirical fractal dimension and mixing ratio ofvirtual aggregates, computed from Eqs. (<ref>) and (<ref>),respectively.Nevertheless, Figure <ref> indicates that, on average, the structural descriptorsandnicely coincide with the preset model parametersand . Therefore, rather than attempting to determine the model parameters , ,,from a single aggregate A, a family B={A_1,…,A_ν} of>1 aggregates is used instead, called batch in the following. More specifically, it is expected that choosing a larger batch size would yield more accurate results, but at an increased cost.We were not able to find any scalar features that can be utilized to predict the model parametersandassociated with the cluster size used in the generation of virtual aggregates, i.e., in the cluster-cluster-model introduced in Section <ref>. 
For example, in order to predict the model parameter , an obvious choice for such a scalar feature would be to describe the average size of observable WO3 clusters, where an observable cluster is an inclusion maximal homogeneous subset C⊂ A of an aggregate A, i.e., there is no larger homogeneous subset C'⊂ A such that C⊂ C'. These clusters can differ from the primary clusters used in the construction algorithm described in Section <ref>. Specifically, the observable clusters are formed byunions of primary clusters, whereas, contrary to the latter ones, the observable clusters are recognizable in the 3D data, see Figure <ref> for a visualization. However, this average (observable) cluster size can not be used to predict . Figure <ref> shows that there are various specifications of model parameters that differ inand, nevertheless, yield similar average cluster sizes of WO3 particles.The prediction of the model parameter vector θ is further complicated by the fact that only the 2D STEM image can be utilized which may not perfectly inform the 3D morphology of A.To predict the preset vector of model parameters θ from a family B of aggregates using only their simulated STEM images, CNNs are initially utilized to extract relevant features from these images. These features are subsequently utilized to predict the preset model parameters, see the schematic description of this workflow shown in Figure <ref>b. While the process of extracting features from the STEM images remains largely consistent across all model parameters, the calculation of the estimators for , ,, exhibits significant variations, see Sections <ref>-<ref> below. For instance, when estimating the model parametersand , the features computed from a STEM image I of an aggregate A are scalar values that approximate (A) and (A). Then, the arithmetic mean of the respective image-wise features of a family B={A_1,…,A_ν}of aggregatesis used as estimatorsandforand .In contrast, when predicting the model parametersand , a neural network is employed to identify high-dimensional features from which the estimatorsandforandare computed, see Section <ref> below.§.§.§ Data processing and augmentationVarious common image processing methods are used to simplify the extraction of information from STEM images. In particular, the pixel intensity values ofSTEM imagesare linearly scaled to the entire range of [-0.5, 0.5] and rounded to 256 equidistant values in order to achieve faster convergence to a lower error during the training process. More specifically, the scaling centers the pixel intensity values around zero <cit.>,whereas the rounding reduces the noise of the images. Note that this procedure is performed on all STEM images, even if not explicitly mentioned, whereas the subsequent preprocessing steps will only be applied during training.Overfitting is a common problem where neural networks achieve good results ontraining data but perform rather poorly when applied to previously unseen data. Thiscan occur when the model learns irrelevant information within the dataset. As a result, the model fits too closely to the training set and becomes overfitted, making it unable to generalize well to new data. To address this issue,augmentation of training data is used. 
In the context of the present paper, this means that the input data is randomly modified in each training step, such that during each step of the training procedure the network is provided with input data which differs from the input data of previous steps. Therefore, a significantly larger number of training steps can be conducted while still providing the neural network with novel training data in each step, and thus, avoiding overfitting. Note that there is a wide variety of possible methods for modifying input data which are commonly used in training data augmentation, e.g., rotation, reflection, radial transformation, elastic distortion <cit.> and random erasing <cit.>. However, in order to preserve certain structural descriptors of aggregates observed in image data, like shape and size descriptors of particles, only random rotations, reflections and small displacements are used for training data augmentation. §.§ CNN-based approach for the prediction of model parameters The goal of this section is to introduce the CNN-based methodology for predicting the model parameters ,, , of the hetero-aggregate model from (simulated) STEM images. Due to computational constraints, it was not feasible to generate the required number of aggregates for each possible preset of the model parameters ,, ,. Therefore, in order to ensure robust training, the focus was on generating 100 aggregates for each triple (, , ) in {0.1,…, 0.9}×{1,…, 6}×{1,…, 6}, as these parameters exhibited interactive effects that were crucial for our study. More specifically, for each such triple, two values ^(1),^(2) of were chosen at random from {1.5, …, 2.5} and each resulting model parameter preset (^(1),, , ), (^(2),, , ) was used to generate 50 aggregates. After applying the STEM simulation described in Section <ref>, this results in a set G = {(A_i,I_i,θ_i) : 1 ≤ i ≤ 32 400 } of 32 400 triplets of 3D aggregates A_i, corresponding STEM images I_i, and vectors of preset model parameters θ_i = (_,i, _,i,_,i,_,i). The set G is thereafter split into two datasets, one for training and one for evaluation. For both training and evaluation, batches B={(A_i_1,I_i_1,θ_i_1),…,(A_i_ν,I_i_ν,θ_i_ν)}⊂ G of size ν>1 will be used, which are generated by the same preset of model parameters, i.e., θ_i_1=…=θ_i_ν. To ensure the availability of such batches, the split of G is done such that each model parameter configuration occurs at least 20 times both in the data used for training and in the data used for evaluation. These two datasets will be referred to by their respective index sets T (for training) and E (for evaluation), where T∪ E = {1, …, 32 400} with #T=19 440 and #E=12 960. In the following, it is explained how the triplets (A_i,I_i,θ_i) are used to generate pairs of image data and ground truth labels, which will be utilized for the training of the neural networks. First, general aspects of network architecture and training are presented and, then, some specifics regarding the prediction of each of the four model parameters ,, , are given. §.§.§ Network architecture and training The networks used to extract features are all based on the same basic network architecture, regardless of the model parameter being predicted. This network architecture consists of stacked convolutional layers with a kernel size of 3×3, batch normalization layers <cit.>, the ReLU activation function, given by ReLU(x)= max{0,x}, and max pooling layers with a kernel size of 2×2, followed by fully connected layers.
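As an illustration, one such stack of building blocks could be written in PyTorch as follows; the number of blocks, the channel widths, and the sizes of the dense layers are assumptions for illustration, not the exact configuration used in this work.

```python
import torch.nn as nn

def conv_block(c_in, c_out):
    """One building block: 3x3 convolution, batch normalization, ReLU, 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),
    )

# illustrative stack; the number of blocks and the channel widths are assumptions
basic_cnn = nn.Sequential(
    conv_block(1, 16),     # grey-scale STEM image as single input channel
    conv_block(16, 32),
    conv_block(32, 64),
    nn.Flatten(),
    nn.LazyLinear(128),    # fully connected part (input size inferred at first call)
    nn.ReLU(),
    nn.Linear(128, 1),     # scalar output, e.g. a regressed structural descriptor
)
```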
The basic architecture of the convolutional neural networks considered in the following has the form CNN = g∘f, i.e., it is represented as the composition of two subnetworks, f and g. The subnetwork f consists of the convolutional part of the basic network architecture, a flatten layer and two dense layers with a final output dimension of 112. The subnetwork g consists of two dense layers with a final output dimension of 1. A schematic representation of the network architecture is given in Figure <ref> (left), whereas details regarding this architecture are provided in Table <ref>. To achieve a high prediction quality, the parameters of the neural networks have to be adapted to the data. This is done in a supervised manner. More precisely, the dissimilarity between the ground truth, denoted as y=(y_1,…,y_n), and the network output ŷ=(ŷ_1,…,ŷ_n), i.e., ŷ_1=CNN(x_1),…,ŷ_n=CNN(x_n) for some input x=(x_1,…, x_n), n>1, will be minimized. For example, when predicting the fractal dimension , the input x of the network consists of STEM images I_1,...,I_n and the ground truth is given by the vector of fractal dimensions of the respective aggregates A_1,...,A_n, i.e., y=((A_1),...,(A_n)). The comparison between the ground truth and the prediction is done in terms of the mean square error (MSE), given by MSE(y,ŷ)= 1/n∑_i=1^n(y_i- ŷ_i)^2. The resulting loss MSE(y,ŷ) is minimized by a gradient descent method using an Adam optimizer <cit.> with a learning rate of 0.0001, where the value of n in Eq. (<ref>) determines the number of network evaluations before a step of the gradient descent method is applied. These evaluations are done on the training data, given by the index set T, where n is set to 16 when predicting  or , and n=8 otherwise. The general network architecture described above and the prediction procedure will be slightly adapted for each of the four model parameters ,,  and . In the following, detailed explanations will be provided regarding these parameter-specific adaptations. §.§.§ Fractal dimension The fractal dimensions (A_i_1),…,(A_i_ν) of the aggregates A_i_1,…,A_i_ν in a batch B, as introduced in Eq. (<ref>), are typically symmetrically distributed around the preset value of , which will be denoted by (B) in the following, see Figure <ref>a. Therefore, the mean value (B), given by (B) = 1/ν∑_k=1^ν(A_i_k), could be used as an estimator for (B). However, since the fractal dimensions (A_i_1),…,(A_i_ν) cannot be directly determined from the STEM images I_i_1,…,I_i_ν, approximations (I_i_1),…,(I_i_ν) are used instead. These approximations are computed by a convolutional neural network , where the STEM images I_i_1,…,I_i_ν are used as input. Thus, finally, the estimator (B) for (B) is given by (B) = 1/ν∑_j=1^ν(I_i_j) = 1/ν∑_j=1^ν(I_i_j). The architecture of the neural network  coincides with the one described in Section <ref>. The activation function of the output layer is a scaled sigmoid function. This kind of activation function is a standard choice for neural networks with bounded outputs. More precisely, the activation function is given by γ(x)=α/(1 + e^-x) + β for x∈ℝ, where α=1.4 and β=1.3 are selected to ensure that the network can represent the expected range of values for , with added tolerances on each side of the expected range, see Figure <ref>a. Note that the input of the network during training consists of augmented versions a(I_i) of the STEM images I_i for i ∈ T, i.e., images that arise from I_i by reflecting, rotating and displacing, as described in Section <ref>.
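A minimal sketch of such an augmentation a(·), restricted to rigid transformations, could look as follows; the shift range and the constant fill value are assumed for illustration.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(image, rng, max_shift_px=5):
    """Random rotation, reflection and small displacement of a STEM image.

    Only rigid transformations are used so that shape and size descriptors of
    the depicted particles are preserved; the shift range and the fill value
    (the image minimum, i.e. the background level) are assumed here.
    """
    fill = float(image.min())
    out = rotate(image, angle=rng.uniform(0.0, 360.0), reshape=False,
                 mode="constant", cval=fill)
    if rng.random() < 0.5:
        out = np.flip(out, axis=1)                 # horizontal reflection
    dy, dx = rng.uniform(-max_shift_px, max_shift_px, size=2)
    return shift(out, (dy, dx), mode="constant", cval=fill)

rng = np.random.default_rng(0)
# augmented_image = augment(stem_image, rng)
```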
The corresponding supervisory signal consists of the fractal dimension of the corresponding aggregates.Hence, the network training is conducted using pairs (a(I_i), (A_i)), for i∈ T. §.§.§ Mixing ratioIn Figure <ref>b the distribution of the mixing ratio ofaggregates in dependence of the model parameter is visualized.From there, it is evident that the mixing ratios (A_i_1),…,(A_i_ν) of aggregates A_i_1,…,A_i_ν within a batch B, generated by the 3D model with a preset value of (B), follow a distribution the mean of which is approximately equal to (B). This suggests using a similar approach as described above in Section <ref>. However, note that there are some aggregates with a mixing ratio (A_i) being equal to 0 or 1. Aneural network with an architecture as that ofdoes not reflect these discrete values properly. Therefore, the prediction procedure for the mixing ratio is slightly modified by initially classifying whether an image I_i depicts an aggregate with mixing ratio of exactly 0 or 1, using a classification network , and afterwards predicting the mixing ratio of the corresponding aggregate, using aregression network .For this purpose, thenetworksand , having the same basic network architecture as described in Section <ref> and a commonly used <cit.> unscaled sigmoid function γ(x)=1/1 + e^-x for x ∈ as activation function in the output layer, are trained for the respective tasks. The training of the regression networkis done on pairs (a(I_i),(A_i)), i∈ T, of augmented STEM images and corresponding ground truth mixing ratios, whereas the training of the classification network is done on pairs of augmented STEM images and corresponding binary class labels, where a class label of 0 or 1 identifies the corresponding aggregate as heterogeneous or homogeneous, respectively.However, it is a well-known problem that number-wise imbalanced classes can lead to poorly performing classifications since classifiers tend to neglect the underrepresented classes, also known as imbalance problem <cit.>. To address this issue, the augmented STEM images of homogeneous aggregates, which account for about 10% of all images, were oversampled in the training procedure of the classifier to achieve balanced classes. Finally, to predict the mixing ratio of an aggregate via its STEM image, the outputs of, which identifies homogenous aggregates, and , which determines the mixing ratio, are combined. More specifically, for a STEM image I, the predicted mixing ratio (I) of the corresponding aggregate is given by(I) =η((I)),if (I)>0.5, (I), else,where η: [0,1]→{0,1} is the function that rounds a number x∈[0,1] to its closest integer η(x)∈{0,1}. This results in the estimator (B) for the preset model parameter (B) of a batch B={(A_i_1,I_i_1,θ_i_1),…,(A_i_v,I_i_v,θ_i_v)}, given by (B) = 1/ν∑_j=1^ν(I_i_j). §.§.§ Size of primary WO3 clustersIn the procedures for predicting the model parametersand , described above, the process of determining an estimator involved the identification of a scalar feature that describes an aggregate property, namely, the fractal dimensionand the mixing ratio , that is predominantly influenced by the corresponding model parameter. This scalar feature can be directly computed from the virtual 3D aggregates, and thus, it is possible to predict itfrom the corresponding 2D STEM images. Consequently, using this scalar feature,formulas for estimating the model parameter from this property has been derived, see Eqs. 
(<ref>) and (<ref>).Since the model parameteris designed to control the cluster sizes of WO3 particlesfor the cluster-cluster-aggregation model introduced in Section <ref>, such a property should relate to the number of connected WO3 particles.However, the sizes of observable clusters are not only influenced bybut also by . On the one hand, larger values oflead to larger primary cluster sizes of clusters of label 0 and thus larger observable clusters. Lower values oflead to larger proportions of primary clusters of label 0. Therefore, it is more likely that two primary clusters that are in contact, share the material label 0, and thus, the expected size of observable clusters of label 0 increases, see Figure <ref>. This makes the average of the observable cluster size on its own an unsuitable property for estimating the model parameter .Therefore, one has to search for another feature that is functionally related to. Additionally, a functional relationship that suitably mapsfeatures derived from STEM images to an estimator ofmay not be captured solely by an average, necessitating the search for another suitable function. However, these two steps can be quite complex and time-consuming if done heuristically. To address this, a data-driven approach utilizing a neural network is adopted. This approach allows us to determine the feature vectors and the formula that relates them to the corresponding model parameter . More specifically, the identification of relevant features is conducted by meansof part f of the basic network architecture described in Section <ref>.The subnetwork f is applied to all images in a batch individually, and the concatenated results are then used as input of part g of the basic network architecture, which is in charge of determining the relationship between the feature vectors determined by f and the model parameter .In detail, this results in a modified network, denoted as CNN^0,which is given byCNN^0(I(B)) = g ( f(I_i_1),…,f(I_i_ν) ),where I(B)={I_i_1,…,I_i_ν} denotes the STEM images corresponding to the aggregates A_i_1,…,A_i_ν in a batch B.Referring to Table <ref>, the feature vectors of STEM images up to the output of layer 17 are computed as before. Then these feature vectors of a batch are concatenated and used as input of layer 18. The final output layer uses a ReLu transfer function. The modified network architecture is illustrated on the right-hand side ofFigure <ref>. Note that the approach described above differs from the commonly used technique where a network, denoted as f^', takes multi-channel input data, i.e., in our case f^'(I_i_1,...,I_i_ν). Such an approach allows the network to detect spatially resolved interdependencies among the images. In contrast, our approach considered in Eq. <ref> employs identical CNNs f for dimensionality reduction and feature extraction on each input channel individually. As a consequence, thisensures uniform feature extraction for every input image while also reducing the number of trainable parameters in the CNN. The choice of this approach is rooted in the concept that each image within a batcha priori contains the same information regarding the underlying model parameters, and the lack of spatial interdependence between the images which would be relevant for the prediction of model parameters.Due to the problem-specific architecture of the network , the training data no longer consists of pairs of individual images and corresponding ground truths. 
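The batch-wise architecture just described, with a single encoder f shared across all ν input images and a head g acting on their concatenated feature vectors, can be sketched in PyTorch as follows; the hidden width of the head is an assumption, while the per-image feature dimension of 112 follows the description above.

```python
import torch
import torch.nn as nn

class BatchwiseCNN(nn.Module):
    """Shared per-image encoder f and head g acting on concatenated features.

    The encoder is assumed to map a (1, 1, H, W) image to a (1, feat_dim)
    feature vector; the hidden width of the head is an illustrative choice.
    """
    def __init__(self, encoder: nn.Module, nu: int, feat_dim: int = 112):
        super().__init__()
        self.encoder = encoder                  # subnetwork f, weights shared
        self.head = nn.Sequential(              # subnetwork g
            nn.Linear(nu * feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.ReLU(),                          # non-negative output for a cluster size
        )

    def forward(self, images):                  # images: tensor of shape (nu, 1, H, W)
        feats = [self.encoder(img.unsqueeze(0)) for img in images]
        return self.head(torch.cat(feats, dim=1))
```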
Instead of pairs of individual images and ground truths, the training data for this network consists, for each batch B, of the pair ({a(I_i_1),…,a(I_i_ν)}, (B)), i.e., a batch of augmented images together with the underlying model parameter (B). Since the model parameter can only take integer values, the output of the network has to be rounded to obtain a valid estimator for , given by (B)=η((I(B))), where η: [0,∞)→{0,1,…} is the function that rounds a number x≥ 0 to its closest integer η(x)∈{0,1,…}. §.§.§ Size of primary TiO2 clusters The method used to predict the model parameter  for the cluster size of TiO2 particles is similar to the approach described in the previous section. Nonetheless, given that the pixel intensity values of TiO2 particles in the STEM images closely resemble the background and are significantly lower than those of WO3 particles, they are considerably more difficult to differentiate by visual inspection. Consequently, it might be plausible that a neural network could also encounter challenges in tasks which depend on the identification of TiO2 particles. As shown in Figure <ref>, the neural network achieves unsatisfactory results when using unadjusted image data, which may be due to the difficulty mentioned above. To address this issue, the intensity value p>0 of non-background pixels in the STEM images is replaced by its multiplicative inverse, i.e., for some threshold t>0 the modified pixel value is given by p^modified = p if p ≤ t, and p^modified = p^-1 otherwise. This procedure is applied to all STEM images used in the prediction of  before the preprocessing steps described in Section <ref> are applied. The highlighting effect of this adjustment of pixel intensity values is shown in Figure <ref>. § RESULTS In this section, the results of the analysis on various aspects of model parameter prediction are presented. To ensure that these results accurately represent the generalization capability of the trained neural networks, all evaluations were conducted on data not used during training. More specifically, recall that the data corresponding to the index set T is used for training, whereas the data corresponding to the index set E is used to evaluate results, see Section <ref> for details on the training-test split. As a prelude to the main findings, first, the impact of batch size on prediction quality is assessed for all four model parameters ,, ,. For that purpose, Figure <ref> illustrates how the batch size affects the quality of the predictions with respect to the mean absolute error (MAE), defined as MAE(y,ŷ)= 1/n∑_i=1^n|y_i- ŷ_i|, where n>0 is the number of predictions and ŷ=(ŷ_i)_i=1, ..., n are the predictions of the ground truth values y=(y_i)_i=1, ..., n. Note that the mean absolute error given in Eq. (<ref>) is more robust to outliers and yields more easily interpretable values compared to the mean squared error considered in Section <ref>. As expected, it can be observed that larger batch sizes lead to better predictions. However, no significant improvement is observed for values exceeding 10. Thus, the results presented below, which were computed with a fixed batch size of ν=12, can be considered representative for the presented methodology. §.§ Fractal dimension The accuracy of the estimator  for  depends on two key properties. First, the mean error of the single STEM image predictions (I) should be centered around zero, since otherwise a bias could be propagated through the averaging procedure and therefore bias the estimator , see Eq. (<ref>).
Second, the variance of the single image prediction error should be low, so a low variance estimator can be achieved even with a small batch size . In Figure <ref>a the error for the predicted fractal dimensionis shown. As desired, the error of the network output exhibits a small absolute value for the bias and a low variance, as indicated by a mean value of -0.006 and interquartile range of 0.118.As the network output is a suitable basis forpredicting the model parameter ,the estimatorachieves an MAE of 0.041, see Figure <ref>b. The network tends to slightly overestimate the fractal dimension of the depicted aggregates for small preset values ofand underestimate it for large ones. This behavior is further pronounced in the estimator . For a possible explanation of this trend, see Section <ref> below. §.§ Mixing ratioTo evaluate the accuracy of the estimator for the model parameter , first, the image-wise straightforward case is considered, where the output(I) of the regression network is used as an estimator for the mixing ratio (A), without considering the classification network. As shown in Figure <ref>a, theoutput(I) of the regression network exhibits a relatively high bias for aggregates Asuch that (A) ∈ [0,0.1] or (A) ∈ [0.9,1], with biases of about0.04 and -0.1, respectively.To address this issue, in Section <ref> a procedure which utilizes an additional classification networkis presented. In Figure <ref>b, the resulting image-wise error ofusing this procedure is shown. It is evident that theerror ofis significantly reduced for homogenous aggregates. More precisely, the bias of (I) for aggregates A with (A) ∈ [0,0.1] or (A) ∈ [0.9,1] decreases to about 0.009 and 0.02, respectively. Incorporating the additional network, the MAE of the image-wise predicted mixing ratio (I) of an aggregate A decreases from 0.059 to 0.053. Consequently, the MAE of the batch-wise predictionofimproves significantly, reducing from 0.027 to 0.017. Note that thediagonally arranged points in Figure <ref>b are due to a small number of falsely classified heterogeneous aggregates, whereas the significantly thinned vertical lines are due to correctly classified homogeneous aggregates. The amounts of correctly and falsely classified aggregates are displayed in Table <ref>.§.§ Sizes of primary WO3 clusters and primaryTiO2 clustersFigure <ref>a shows the difference between the network output(I(B))for the STEM imagesI(B)={I_i_1,…,I_i_ν}corresponding to the aggregates A_i_1,…,A_i_ν in a batch B(given in Eq. (<ref>), i.e., prior to rounding of the output which would result in the estimator ) and the preset value (B) of the model parameter . Figure <ref>b shows the error distribution ofafter rounding, where in about 48% of allcases the value ofcoincides with . Additionally, in more than 92% of the cases, the error ofis less than or equal to 1.Although the largest mean absolute error occurs in the case of =6, the resulting inaccuracy corresponds to an average relative error of about 20%. The quality of the estimator introduced in Section <ref> is similar to that of , see Figure <ref>. After rounding the output of the network ,32% of the predictions coincided with the preset values of . In about 82% of the cases, an error less than or equal to 1 occurred. The mean absolute error for = 6 is equal to 1.44, where the resulting inaccuracy corresponds to an average relative error of about 24%. 
§.§ Furtherstructural descriptors of hetero-aggregates Recall that the goal of the method presented in this paper is to generate realistic digital shadows of hetero-aggregates in 3D, solely from observations provided by 2D STEM images of the aggregates. For that purpose, the parameters, ,, of the stochastic 3D model introduced in Section <ref>are predicted in order to specify the model configuration with which to generate digital shadows. However, so far, only the accuracy of the predictors, ,, for , ,, was evaluated, rather than investigating further structural descriptors of hetero-agggregates in order to evaluate the structural similarity between the resulting digital shadows and the original hetero-aggregates, i.e., the aggregates which were used for predicting the model parameters, ,,. Moreover, many structural properties of the digital shadows are influenced by multiple model parameters, and thus, evaluating the quality of the four predictors, ,,separately is not sufficient. Therefore, three further structural descriptors, which characterize the 3D morphology of hetero-aggregates and have not yet been considered in this paper, are investigated in order to assess the similarity between original aggregates and corresponding digital shadows, see also Figure <ref>c-d. §.§.§ Average cluster size andcoordination numbersThe average cluster size (A) of TiO2 particles of an aggregate A= {p_i=(x_i,r_i,l_i):x_i ∈^3,   r_i ∈^+,  l_i ∈{0,1},  1≤ i≤ N }. describes the average cardinality of clusters of connected TiO2 particles in A. It is given by (A) = 1/#C_TiO2(A)∑_c ∈ C_TiO2(A)#c,where C_TiO2(A) denotes the set of all TiO2 clusters in A.While the value of (A) is primarily influenced by the preset values of and , the value ofalso has some (minor) influence on (A)through its appearance in the definitionof theBernoulli-distributedlabelsL_k of the stochastic 3D model, see Section <ref>.Furthermore, the so-called average hetero-coordination number (A) of an aggregate A is considered, which is given by (A) =1/#A∑_p∈ A#{p^'∈ Ap, p^' are in contact, l ≠ l^'} =2 #{set of heterogeneous contacts in A}/#A ,where #A (=N) is the total number of particles in A. Thus,(A)is the average number of contacts ofparticles in Awithparticles of the other material. Finally, the average coordination number (A), given by(A) =1/#A∑_p∈ A#{p^'∈ Ap, p^' are in contact}=2 #{set of contacts in A}/#A ,is considered, which is the average number of contacts ofparticles in Ato other particles, regardless of their material.Since the number of contacts of a particle within an aggregate A strongly depends on theshape of A, the model parametersignificantly influences the values of the descriptors (A) and (A). Further,(A) tends to increase withclose to 0.5 and decreasing primary cluster sizes determined byand . §.§.§ Comparison of original hetero-aggregates and their digital shadowsTo evaluate the quality of the predictor θ=(,,,) in terms of the structural descriptors introduced in Section <ref>, 50configurations of θ = (,,,) were selected at random, out of the index set E ofevaluation data. 
For each of these numerical specifications of θ, 800 new aggregates A_1,…,A_800 were drawn from the corresponding stochastic 3D model, and their structural descriptors (A_i), (A_i) and (A_i) for i∈{1,…,800} were computed. Furthermore, for each case, the preset ground-truth parameter vector θ was estimated using the methods explained in Section <ref>. Then, for each of the 50 predicted parameter vectors, 800 additional aggregates A_1^',…,A_800^' were generated and their structural descriptors (A_i^'), (A_i^') and (A_i^') for i∈{1,…,800} were computed. Figure <ref> visualizes the distributions of these structural descriptors for four numerical specifications of θ, where the aggregates A_1,…,A_800 and A_1^',…,A_800^' were generated using either the preset parameter vector θ (blue) or its prediction (orange), respectively. Note that the gaps in the histograms of the average coordination numbers (A_i) and (A_i^') (right column) are due to the limited size of the considered aggregates, see Section <ref>. More specifically, the average coordination numbers (A_i) and (A_i^') given in Eq. (<ref>), of aggregates A_i, A_i^' with sizes smaller than or equal to 80, can only take values in the set H={2q_1/q_2 : q_1,q_2∈ℕ, q_1 ≤ q_2 ≤ 80}, where H ∩ (1.975,2) = ∅ because of the limited denominator q_2 on the right-hand side of Eq. (<ref>). Furthermore, note that the predicted parameter vector displayed in the top row of Figure <ref> has a much smaller mean absolute error with respect to its preset counterpart θ than the one displayed in the second row. Nevertheless, the latter (blue and orange) histograms show a higher agreement than those in the top row of Figure <ref>. This means that a high degree of similarity (in terms of MAE) between θ and its prediction does not necessarily imply a high degree of similarity of the resulting descriptor distributions. We quantitatively analyzed this discrepancy between the distributions of the structural aggregate descriptors resulting from the preset configuration of model parameters and their prediction. For that purpose, the absolute difference of the means of each pair of distributions was computed. For example, the mean values of (A_i) and (A_i^') (vertical lines) in the top row of Figure <ref> are equal to 5.38 and 11.00 for the preset parameter vector θ and its prediction, respectively. This results in an absolute error of 5.62. Over all 50 pairs of preset and predicted parameter vectors, an MAE of 2.165 is achieved, see also Table <ref>, where the MAEs for all three structural descriptors considered in this section are given, as well as the corresponding coefficient of determination R^2, defined as R^2(y,ŷ)= 1-MSE(y,ŷ)/MSE(y,ȳ). Here, the vectors y=(y_1,…,y_50), ŷ=(ŷ_1,…,ŷ_50)∈ℝ^50 consist of the mean values of the distributions of the given aggregate descriptor computed for the 50 preset specifications of θ and their predictions, respectively. More precisely, for j∈{1,…,50}, y_j =1/800∑_i=1^800γ(A_ij) and ŷ_j = 1/800∑_i=1^800γ(A^'_ij), where γ stands for either ,  or , and A_ij, A^'_ij denote the i-th aggregate drawn from the j-th specification of θ and its prediction, respectively. Furthermore, ȳ∈ℝ^50 denotes the constant vector each entry of which equals the mean value 1/50∑_i=1^50 y_i. § DISCUSSION The analysis of image data in order to determine the fractal dimension of finite aggregates has been a popular approach for some time. Two commonly used methods for this purpose are the box counting and sandbox methods, which are relatively simple image analysis tools <cit.>. These methods can provide meaningful structural information, but the quality of the results is highly dependent on the quality of the images.
Specifically, high contrast and resolution are necessary to obtain clear STEM images from which accurate structural information can be extracted. However, in cases where a high fractal dimension is present, i.e., (A)>2, these classical methods have to be adopted to avoid problems with geometric opacity. There are attempts to solve this problem under certain conditions, see <cit.>. Although this difficulty can be observed in the slightly decreasing accuracyfor values of >2.2, which has been obtained by theCNN-based approach proposed in the present paper, a satisfying accuracy was achieved even for high fractal dimensions,as shown in Figure <ref>a. Furthermore, the CNN approach works well independently of the aggregate size, see Figure <ref>a. Probably the most comparable conventional method for determining the mixing ratio of an aggregate via its 2D STEM image, is based on determining the particle label of each pixel using a threshold value. More specifically, depending on the pixel intensity, the pixel is classified as TiO2, WO3 or background, and then, usingthe a priori known particle size distributions, a mixing ratio can be predicted. However, since the representation of thick TiO_2 particles or of many overlappingTiO_2 particlescan have the same pixel intensity values as the representation of thin WO_3 particles, this threshold approach has a large source of errors <cit.>. The best appearing thresholdsusing a ”brute force”algorithm on a representative data were determined. This results in an MAE of 0.078 per aggregate when estimating the mixing ratio. Compared to the MAE of 0.053, see Section<ref>, of the CNN approach described in the present paper, the error increases by 40% for thethresholding method described above. This is likely due to the increased values of pixel intensitywhich are caused by overlapping particles (see Figure <ref>), where these pixels with increased intensity values tend to be classified as WO3.Therefore, conventional threshold methods become increasingly inaccurate with an increasing number of overlapping particles, contrary to the behavior of the CNN approach proposed in the present paper, see Figure <ref>b. Regarding the prediction of the remaining two model parametersand , as far as we know, there is no comparable conventional method based on 2D image data.Suchmethods, if they do not consider depth information, would not be able to recognize if overlapping particles are touching or not, and thus, it is unlikely that they can accurately predict the values ofand .Recall that the objective of the present paper is to generate digital shadows that are stochastically equivalent to the ground-truth aggregates used for model fitting. These digital shadows, which have known a 3D structure, can then be employed to predict the structural properties of the ground-truth aggregates at significantly reduced costs.Therefore, rather than just evaluating the accuracy of the predicted model parameters θ=(,,,), the morphological similarities of the resulting digital shadows and their ground truth in terms of further structural descriptors, i.e.,average clusters sizes and coordination numbers were also investigated. 
As already mentioned in Section <ref>, the MAE of θ is no appropriate tool to evaluate the similarity ofdigital shadows and their ground-truth aggregates.For instance, an extreme mixing ratio leads to a situation where the precision of eitherorhas only a negligible impact on the structure of the resulting aggregates due to the corresponding material occurring very rarely. Moreover, the structural similarity of the resulting digital shadows is more strongly affected by small errors and rounding of (I(B)) and (I(B)) when the values ofandare small, as opposed to when they are large. In particular, errors in the prediction of ground truths for small values of andresult in higher relative errors. In such cases, large relative errors seem to have a greater impact on the structural discrepancies observed between aggregates generated for predicted and preset model parameters, seeFigure <ref>. This effect can be further exacerbated by the application of subsequent rounding operations.Although the predictorθ=(,,,) proposed in this paper shows only minor discrepancies across all descriptors listed in Table <ref>, adapting the model and training process to address the issues mentioned above could enhance the similarity ofdigital shadows and original aggregates even further. For example, expanding the possible values ofandto the interval [1,6], rather than just considering the discrete set {1, 2, …, 6}, would result ina diversity of aggregates, while also avoiding rounding errors that can arise in the prediction ofand . More specifically, this could be achieved by modifying the aggregation model Ψ_ introduced in Eq. (<ref>),such that the sizes of the primary clusters are randomly distributed, instead of choosing a constant cluster size. This would achieve a more detailed coverage of possible aggregate structures, especially for small values ofand . The training of CNNs could benefit from an adapted cost function that takes the values of other model parameters into account and assigns weights to errors based on the importance of theground truth to be predicted. § CONCLUSIONA method has been developed in orderto determine the parameters of a stochastic 3D modelfor synthetic TiO2-WO3 hetero-aggregates, based on their 2D STEM images. The method relies on convolutional neural networks that utilize distinct problem-specific architectures. If such an appropriately calibrated stochastic 3D model is available, the neural network approach bypasses the need for using traditional microstructure analysis and modeling techniques, which are expensive in time and costs, such as tomographic STEM imaging as well as complex image processing and segmentation. The networks were capable of predicting model parameters that describe fractal dimension, number-wise mixing ratio, and the sizes of primary clusters.Theaggregates drawn from the stochastic 3D model with predicted model parameters exhibited almost the same coordination numbers and average cluster sizes as those generated by themodel with the original (preset)parameters.In the present paper, synthetic TiO2-WO3 hetero-aggregates are used asmodel system, because these two materials show a good material contrast in STEM images. However,only spherical particles were usedfor the generation ofsynthetic 3D aggregates. It would be interesting to investigate the effectiveness of the proposed method if particles for the hetero-aggregates are considered, which feature similar STEM intensities but differ significantly in their shape or size. 
Moreover, since experimentally measured aggregates feature more varied cluster sizes than synthetically generated ones, it can be presumed that a larger variability in cluster sizes would require more comprehensive data sets in order to make accurate predictions, but investigating this effect systematically is still important. Finally, in a forthcoming study, the presented method will be experimentally validated. More precisely, experimentally acquired 3D STEM image data of hetero-aggregates will be analyzed to investigate how well the stochastic 3D model proposed in the present paper can describe real aggregates. § ACKNOWLEDGEMENTS This work was financially supported by the German Research Foundation (DFG) through the research grants RO 2057/17-1, MA 3333/25-1 and SCHM 997/42-1.
Jorge Martínez-Palomera, [email protected], [email protected]
Bay Area Environmental Research Institute, P.O. Box 25, Moffett Field, CA 94035, USA
NASA Ames Research Center, Moffett Field, CA, USA
Christina Hedges
NASA Goddard Space Flight Center, Greenbelt, Maryland, United States
University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, Maryland, United States
Jessie Dotson
NASA Ames Research Center, Moffett Field, CA, USA
NASA's Kepler primary mission observed about 116 deg^2 in the sky for 3.5 consecutive years to discover Earth-like exoplanets. This mission recorded pixel cutouts, known as Target Pixel Files (TPFs), of over 200,000 targets selected to maximize the scientific yield. The Kepler pipeline performed aperture photometry for these primary targets to create light curves. However, hundreds of thousands of background sources were recorded in the TPFs and have never been systematically analyzed. This work uses the Linearized Field Deblending (LFD) method, a Point Spread Function (PSF) photometry algorithm, to extract light curves. We use Gaia DR3 as the input catalog to extract 606,900 light curves from long-cadence TPFs. Of these, 406,548 are new light curves of background sources, while the rest are Kepler's targets. These light curves have comparable quality to those computed by the Kepler pipeline, with CDPP values <100 ppm for sources G<16. The light curve files are available as high-level science products at MAST. Files include PSF and aperture photometry, and extraction metrics. Additionally, we improve the background and PSF modeling in the LFD method, which is implemented in the psfmachine library. We demonstrate the advantages of this new dataset with two examples: the deblending of a contaminated false-positive Kepler Object of Interest, identifying the origin of the transit signal; and the changes in estimated transit depth of planets when using PSF photometry, which reduces dilution compared to aperture photometry. This new, nearly unbiased catalog enables further studies in planet searches, occurrence rates, and other time-domain studies.
§ INTRODUCTION
NASA's Kepler mission delivered to the community one of the most precise time series datasets ever produced. During its primary mission, Kepler observed more than 200,000 target stars <cit.>. Kepler found more than 2,600 exoplanet candidates <cit.>, observed numerous supernovae from the earliest stages of explosion <cit.>, and more than 2,900 eclipsing binary systems <cit.>. The Kepler mission had a significant impact on a range of astrophysical domains owing to its precise, accurate time series of a large sample of stars over the 3.5-year prime mission. Yet its contribution to time-domain astronomy is not finished. Thanks to the use of current catalogs and methods it is possible to significantly expand the volume of data products originating from Kepler's observations. This work presents a catalog of 606,900 light curves, including 406,548 from new sources and 200,352 Kepler targets.
§.§ Kepler Target Selection Function Kepler's primary mission selected over 200,000 targets to maximize the yield of Earth-like exoplanet discoveries <cit.>.These targets were selected from approximately half a million stars brighter than 16th magnitude in the Kepler passband (K_p).The selection used stellar parameters to estimate the radius of the smallest planet detectable in the habitable zone, the number of detectable transits, and samples per transit.These combined with a crowding metric for the photometric aperture and the target brightness resulted in a prioritization criteria that was used to rank and select the target list. The target list mainly focuses on main-sequence G-type stars (half of the target sample) with a large fraction of them brighter than magnitude K_p < 14.The target list also includes M-type dwarfs and a small sample of hot main-sequence O- and B-type stars. Using Gaia DR2 catalog <cit.> found that Kepler's target selection is nearly complete for main-sequence stars brighter than K_p < 14 mag and it is biased against binary systems.The study found that Kepler's selection favored cool dwarfs fainter at the faint end. Additionally, the target selection effectively separated red giants from red dwarfs.The abovementioned study found a significant drop in the observed fraction of red giants at fainter magnitudes, particularly for low-luminosity, cool giants. The same work also found no significant bias in target kinematics. §.§ Kepler Data Products Kepler data products are available in three categories, discussed below; Target Pixel Files (TPFs), Light Curve Files (LCFs) and Full Frame Images (FFIs).During its primary mission, Kepler observed seventeen 93-days periods named quarters.The Kepler instrument consisted of 21 science modules, each having 4 output channels, for a total of 84 CCD channels. The telescope rotated 90^∘ every quarter which led to the same objects being observed in the same CCD channel every 4 quarters. The Kepler telescope observed an approximately 116 squared-degree region of the sky at a cadence of 30 minutes. During the prime mission, pre-defined targets were downloaded as images and were converted to flux time-series on the ground.Target cutouts were centered on stars selected from the Kepler Input Catalog <cit.> and are typically 4 to 9 pixels around the target. The Kepler Science Data Processing Pipeline <cit.>, produced two science products from these cutouts, the Target Pixel Files (TPFs) and the Light Curve Files (LCFs). TPFs contain the time series at the pixel level and the aperture mask used to compute the photometry of the target. LCFs are flux time series of the target. Both data products were created for a short 1-minute and a long 30-minute cadence mode.Short cadence targets required more onboard storage and different processing on the ground, and so were used on high-value targets only.All short cadence targets also produced long cadence products.In this work, we will only consider the long cadence targets, as these are available for the full Kepler sample.We leave any discussion or treatment of short cadence targets to future work.Kepler's prime mission also downlinked single Full Frame Images (FFIs) of the entire field of view each month. FFIs were downloaded for calibration and diagnostic purposes <cit.>. FFIs have an exposure time of 30 minutes but were only captured each month, then in this work, we will not use them for time series. 
We leave any discussion of the benefits of FFI data for extracting time series for future work.Kepler light curves were extracted using Simple Aperture Photometry (SAP) with a pre-computed aperture mask.This aperture mask balanced the precision of the flux measurement while keeping the contamination from neighbors low.The LCFs also contain a corrected version of the SAP flux, the Presearch Data Conditioned Simple Aperture Photometry <cit.> (PDCSAP), which corrects for the systematics of the instrument.PDCSAP light curves are corrected using vectors of common trends from targets on the same detector channel, and largely address the systematics introduced by effects such as differential velocity aberration, and any spacecraft motion.Thanks to the instrument design, observation strategy, and data analysis the pipeline delivered light curves with high precision, enabling the detection of transits with < 10 ppm depth. §.§ Improving Kepler Light Curves with Gaia and PSF Photometry The Kepler spacecraft was launched in 2009, after years of development. As such, the Kepler Input Catalog <cit.> predates the advent of the Gaia mission <cit.>, and was assembled from earlier, less accurate catalogs. Using the KIC, the Kepler pipeline performed photometry and computed optimized apertures for every target source, providing metrics that characterize the completeness of the flux and amount of contamination within the aperture. However, with updated knowledge from the Gaia catalog, we can now revisit these apertures and understand that many are significantly contaminated with background fainter sources (G > 16) or by bright neighbor sources.In total, the Kepler pipeline produced light curves for more than 206,000 sources.But current more complete catalogs such as Gaia Data Release 3 <cit.> lists a more than 1.4 million sources brighter than magnitude G=19 around the pixel cutouts.In this work, we revisit Kepler's archival data to create a complete catalog of light curves using robust photometry, with our updated knowledge from Gaia. We use the Linearized Field Deblending (LFD) photometry method <cit.> to create light curves of 606,900 sources.Of this, more than 400,000 corresponds to newly extracted light curves of background sources, which doubles the number of Kepler targets. The LFD method provides a fast yet robust approach to perform Point Spread Function (PSF) photometry in Kepler-like data. LFD models the image at the pixel level to create a PSF model of the sources in the scene. Here, the scene is defined as the collection of sources observed in a list of neighboring TPFs (here and after also called a stack of TPFs).The LFD method introduces perturbations to a mean PSF model in order to correct instrumental signals such as spacecraft motion and optic changes.Both the PSF fitting and evaluation are modeled as a linear problem and solved using least-square minimization.Through this, the LFD method is able to quickly estimate the PSF shape and perform PSF photometry.The use of PSF photometry and current Gaia catalogs led to three main improvements over the original Kepler light curve catalogs.First, PSF photometry enables robust flux estimation and target deblending which is extremely relevant in crowded regions.These regions could be particularly problematic for aperture photometry due to close proximity of sources in the image and a varying range of source brightness contrast (the difference in magnitude between two nearby sources). 
Secondly, Gaia catalogs provide precise astrometry and an improved census of objects in the field when compared to the KIC, which enables access to a larger volume of sources. Thirdly, a blind, massive light curve extraction leads to a nearly unbiased catalog useful for a better characterization of planet occurrence rates as well as further time-domain studies. Here we present Kepler Bonus (KBonus), a catalog of extracted light curves that includes Kepler targets and background sources. All the light curves produced in this work are publicly available to the community as FITS Light Curve Files. These can be accessed via the Mikulski Archive for Space Telescopes (MAST) archive [KBonus Kepler Background, 10.17909/7jbr-w430]. We introduce new functionalities to the original LFD method to improve the PSF modeling and correction. These are available in version 1.1.4 of the psfmachine package [v1.1.4, <https://github.com/SSDataLab/psfmachine/tree/v1.1.4>]. Additionally, accompanying this article we publish the KBonus repository [<https://github.com/jorgemarpa/KBonus/tree/main>], which shows examples of the processing pipeline and configuration files used for this work, as well as an example of how to load the light curve files and their content. This article is structured as follows. Section <ref> details the characteristics of the data used for this work as well as the steps followed to compute the PSF models, photometry, flux metrics, and light curves. In Section <ref> we present our results: we characterize the quality of the extracted light curves, discuss the demographics of the resulting catalog, and showcase two science results using these light curves. In Section <ref> we discuss the limitations of this work and in Section <ref> the opportunities that this new unexplored dataset provides to the community. Finally, Section <ref> summarizes this work. § DATA PROCESSING We process the Kepler data using the Python package psfmachine, which performs Linearized Field Deblending (LFD) photometry <cit.>, a newly introduced type of rapid PSF photometry. In this work, we further improve psfmachine by adding a background estimator to remove rolling band noise, PSF models estimated with Kepler's FFIs, and the use of custom basis vectors to correct the scene motion due to differential velocity aberration. In this section, we describe the data used for this work, the additional analysis introduced beyond the original LFD work, as well as the new algorithms and modules added to psfmachine. For an in-depth explanation of how the photometry of each source is extracted, we direct the reader to <cit.>. §.§ Kepler's Target Pixel Files Kepler observations are split into quarters. The Kepler pipeline delivered the observed data in the form of Target Pixel Files, a stack of pixels around each selected target for all observed cadences. We accessed a total of 204,933 TPFs from the MAST archive [<https://archive.stsci.edu/missions-and-data/kepler/kepler-bulk-downloads>] as well as other relevant engineering data (see Section <ref>). Kepler's 17 observing quarters and the 84 output channels distributed across the focal plane provide a natural strategy to process the TPFs and isolate instrument systematics spatially and temporally. Therefore, we process the TPFs on a per-quarter-channel basis. Within each quarter/channel combination, we split the list of available TPFs into “batches”, in order to make the model fit memory efficient. Each “batch” contains around 200 TPFs spatially sorted (i.e.
200 TPFs that are close on the detector).The batch size is not fixed due to the non-homogeneous distribution of targets around the focal plane and the changing total number of targets observed across quarters.We found that using ∼ 5 000 pixel time-series and ∼ 400 unique sources, which is typically reached with ∼ 200 TPFs, provides a robust fit of our mean and perturbed PSF model (see Sections <ref> and <ref>). In some crowded regions like around the open clusters NGC 6819 and NGC 6791, fewer TPFs are needed to constrain the model, owing to the source density in these clusters. §.§ Source CatalogThe LFD method works by allowing the “scene” of stars to move as one, and each source to vary in brightness, but does not allow any individual source to move with respect to the others. The LFD method requires an astrometric catalog as input to fix the location of sources in the scene and to have a flux reference for each object.For this purpose, we use the Gaia DR 3 catalog. Gaia DR 3 provides a complete catalog between magnitudes G=12 and G=17. It offers an astrometric precision of 0.4 mas and a photometric precision of 6 mmag at magnitude G=20 <cit.>. We query the Gaia DR 3 catalog with the center of each available TPF, a generous search radius of the cutout size plus 16 (≈4 Kepler pixels) to allow sources off the TPF edge and a magnitude limit of G=19.We propagate Gaia proper motions for every quarter observed by Kepler. We obtain a list of 1.4 million sources which acts as the input catalog for this work. To increase efficiency, we perform a more conservative query to the input catalog using theAPI.We allow sources up to 4 away from the TPF edge, remove sources brighter than G=10 to avoid saturated pixels and the nonlinear response of the CCD, and filter blended sources within 1 by keeping the brighter objects.Highly blended sources, closer than 1 are difficult to successfully deblend, and imposing this filter helps to diminish the number of degenerated solutions. The resulting catalog of successfully extracted sources contains 606,900 entries.From the total, 200,352 corresponds to Kepler targets for which the Kepler pipeline produced light curve files.The remaining 406,548 objects correspond to background sources for which this work releases new light curves.Additionally, we perform a cross-match between the KIC and Gaia DR3 with a 2 radius and accounting for proper motion, to identify original Kepler targets.The apparent magnitude distribution of Kepler targets (Figure <ref>) shows evidence of the target selection from the prime mission. This is reflected in, for example, the number cutoff at G=16 and the over-density around G=13.8 due to Sun-like stars being targeted. The apparent magnitude distribution of background sources shows no signs of selection bias based on star properties.Figure <ref> shows the spatial distribution of Kepler targets and background sources across the field of view.The two high-density regions in the Kepler targets are the NGC 6819 and NGC 6791 open clusters.In contrast, the density of background sources shows an increasing number count closer to galactic latitudes. We removed saturated pixels and bleed columns from the sample to avoid introducing uninformative data points to the fitting process. We used a conservative flux value to flag saturated pixels of 1.2e5 e^-/s and masked out up to three pixels around saturated ones to account for bleeding. 
Additionally, we masked out pixels within 800″ of extremely bright sources (G≤8), which typically exhibit halos due to internal reflections within the telescope. Sources that fall on these removed pixels are also excluded from the analysis.

§.§ Background Model

Kepler data show a moving background signal known as “rolling band” <cit.>. This correlated signal is more likely to occur on certain channels, at certain times of the year, due to changes in the thermal background, and it is difficult to model or predict. The rolling band is observed as a shift in background level that moves almost parallel to the x-axis of the sensor. This artifact signal is small in amplitude, ∼20 counts per pixel, but it coadds to a large signal for large-aperture photometry and can adversely affect quiet sources. Crucially, this background is an additive signal and so cannot be effectively removed by methods that divide out systematics (e.g., the CBV method). The pipeline-processed TPFs contain background-subtracted flux values as well as the subtracted background model computed by the Kepler pipeline. Although the pipeline provides a good estimate of the background model, the Kepler pipeline only addressed the rolling band issue by including a data quality flag <cit.>. In order to model and remove this signal, we build a background model as a function of time and pixel row. Our method relies upon the strong row dependency of the rolling band signal to simplify the model, and we assume there is no signal in the orthogonal column direction. To constrain the model, we identify and model “background” pixels in the data set. To identify “background” pixels, we use the source mask computed by psfmachine to find the pixels without a nearby source <cit.>, and we perform a sigma clipping to reject pixels that show significant variability. In addition, we augment the TPF pixel dataset with the mission background pixel data. The mission background data were taken during every quarter across every channel on a predefined grid distributed across each CCD <cit.>. Adding this dataset improves the background model significantly, especially in crowded regions where the TPF background pixel count is low. We take the median of the pixels in the column direction to find the average time series at every unique observed pixel row. We model the time series of the background pixels with two third-degree b-spline functions, in time (t) and in pixel row number (y). We use knot spacings for the spline functions of 2 hours in the time direction and 6 pixels in the row direction. This enables us to produce a flexible model that adapts to the fast-changing rolling band signal. This effectively builds a smooth model that averages values of pixels close in time and space. We model the background of a batch of TPFs that has n_tot total pixels, n_bkg of which are background pixels, and l cadences as follows. We build a design matrix 𝐗_bkg using the combination of two spline functions in time (t) and in the pixel row positions of the background pixel time series (y_bkg):

𝐗_bkg = vec( [ 1; 𝐭; 𝐭^2; 𝐭^3 ] [ 1  𝐲_bkg  𝐲_bkg^2  𝐲_bkg^3 ] )

where vec() denotes the vectorization operation, which unrolls a matrix into a vector, and 𝐗_bkg is a 2D matrix with shape (l × n_bkg, 16). We find the best-fitting model using linear least squares <cit.>. The resulting background model for each pixel and time, 𝐟̂_bkg, is given by:

𝐟̂_bkg = 𝐗_bkg·ŵ

where ŵ are the best-fitting weights and 𝐟̂_bkg denotes the best-fitting flux time series for the background pixels.
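The following minimal sketch illustrates this fit. It is not the pipeline's code: plain cubic polynomial bases stand in for the b-spline bases actually used, and the arrays t, y_bkg, and f_bkg are hypothetical stand-ins for the cadence times, background-pixel row positions, and unrolled background-pixel fluxes.

```python
# Illustrative sketch only (assumes numpy); polynomial bases replace the
# b-spline bases used by the pipeline, and `t`, `y_bkg`, `f_bkg` are
# hypothetical inputs.
import numpy as np

def design_matrix(t, y):
    """Vectorized outer products of cubic bases in time and pixel row,
    returning an array of shape (len(t) * len(y), 16)."""
    T = np.vstack([np.ones_like(t), t, t**2, t**3])   # (4, l)
    Y = np.vstack([np.ones_like(y), y, y**2, y**3])   # (4, n_bkg)
    cols = [np.outer(T[i], Y[j]).ravel() for i in range(4) for j in range(4)]
    return np.stack(cols, axis=1)                     # (l * n_bkg, 16)

# X_bkg = design_matrix(t, y_bkg)
# w_hat, *_ = np.linalg.lstsq(X_bkg, f_bkg, rcond=None)  # best-fitting weights
# f_bkg_model = X_bkg @ w_hat                            # background estimate
```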
The same weights can now be applied to a design matrix 𝐗 built from the pixel row positions of all pixels,

𝐗 = vec( [ 1; 𝐭; 𝐭^2; 𝐭^3 ] [ 1  𝐲  𝐲^2  𝐲^3 ] ),

to evaluate the model at every pixel as 𝐟̂ = 𝐗·ŵ, where 𝐟̂ is the background flux time series of every pixel in the batch of TPFs. Cadences where there is a significant, single deviation from this smooth model are identified and iteratively removed from the fit. Figure <ref> shows the column-wise binned flux data as a function of pixel row and time for the data (left), the model (center), and the average flux (right). The model is able to capture the rolling band signal at the end of the quarter moving vertically in the CCD (vertical pale blue lines). To enable reproducibility, we package this model as a simple standalone Python package, kbackground [<https://github.com/SSDataLab/kbackground>]. The background model is subtracted from the original flux data to remove the rolling band and any background trend (e.g., see the slope in Figure <ref>) for each pixel. In this way, we obtain a zero-centered flux time series. This rolling band model is adequate for our purposes but could be improved; we use only pixels that have no significant flux and do not model the sources simultaneously with the background. We use a simple spline model with fixed knot spacings rather than, for example, a Gaussian Process approach where hyper-parameters could be estimated. Finally, we average over the column dimension, removing any possibility of modeling the rolling band in the orthogonal direction. If there is any residual trend in this dimension, we will average over it.

§.§ Point Spread Function Model

We used the PSF models computed in <cit.>, which used Kepler's FFIs to generate robust and detailed PSF models for each CCD channel and quarter. The FFIs are single-cadence (30-minute exposure) images over every pixel, taken at the beginning, middle, and end of a quarter (approximately one per month). <cit.> computed PSF models using Gaia EDR3 <cit.> sources with a limiting magnitude of G=20, which led to the use of ∼12,000 sources and ∼100,000 pixel data points per CCD to fit the models. As noted in <cit.>, the PSF models were computed for quarters and channels where Kepler extended background (EXBA) masks are available, i.e., quarters 4 to 17 and all CCD channels but 5 to 8; therefore, we computed the missing models for all channels in quarters 0 to 3. The models are stored in a Zenodo repository [<https://doi.org/10.5281/zenodo.5504503>] and are fully integrated into the psfmachine API. We evaluate the FFI PSF models on a pixel grid 10 times finer than the original Kepler pixel size of 4″/pixel (i.e., 0.4″/pixel) to find the factor by which the model needs to be scaled such that it integrates to one on the pixel grid from the stack of TPFs. This scaling factor encodes a combination of two effects: the finite integration due to the instrument pixel scale and the differences between the Kepler K_p and Gaia G filters, as the model uses Gaia G band fluxes as priors <cit.>. Figure <ref> illustrates the PSF model and its residuals for quarter 5, channel 37. This corresponds to the PSF model fitted with the respective FFI data and evaluated at the positions of 250 TPFs. The PSF is fairly round for channels in the center of the Kepler field (like channel 37), with a slight elongation along one axis.
The centroid offset of the PSF data is under ∼0.3″ in each axis; see the red marker in the top left panel of Figure <ref>. The scene centroid offset is computed as the mean of the offset values in each cadence. These per-cadence centroid offsets are estimated by averaging the distance of each data point (pixel) to its source coordinates (Gaia R.A. and Decl.), weighted by the Poisson uncertainty estimate. See Figure 2 in <cit.> for a display of PSF models for all channels (quarter 5). This shows that in channels near the border of the field, the PSFs are significantly distorted, with elongation and characteristic spike patterns.

§.§ Correcting the Scene Motion

In the original LFD method, the differential velocity aberration effect <cit.> is corrected using a spatial model and a third-order polynomial in time to create a time-dependent model <cit.>. This time-dependent model is used to “perturb” the PSF model at each cadence, shifting the scene in its entirety. In this work we refer to the model that extracts flux time series using the average PSF as the “mean” model, and to the model that extracts flux time series using the average PSF after it has been perturbed as the “perturbed” or “corrected” model. The perturbed model accounts for small motions and slight changes in shape. However, this method is only applicable if the PSF is fairly stable and does not vary significantly. This is true for Kepler's primary mission observations, but not for K2 observations, where the reaction wheel failure caused a systematic jitter motion in the spacecraft. An alternative approach to correct the scene motion is to fit the PSF model to each frame separately, building a unique PSF model for every cadence. This would mean fitting every variable in the PSF model (<200) for each of the ∼4,500 Kepler frames in a quarter, which adds up to ∼900,000 parameters. With the number of usable pixel data points (∼3,000) in a stack of 250 TPFs, this problem is not sufficiently constrained, resulting in a noisier estimation of the PSF overall. This problem could be overcome with more sources and more pixels, in which case fitting PSFs individually per frame becomes more tractable and beneficial, but less computationally efficient. With the perturbation approach, we fit a relatively small number of variables, <200 for the mean PSF model and <1,000 for the full perturbation model, leading to a well-constrained and robust model. We found that the third-order polynomial in time used originally in the LFD method can be too flexible and can introduce large-scale, spurious trends in the corrected light curves. This polynomial also does not address systematics other than large-scale motion, for example, the characteristic “focus change” signal that happens after the spacecraft downlinks data. We implement an improved method to correct the scene motion and other instrumental signals. To find a reasonable solution we explored several approaches and their combinations: i) The centroid positions in each axis as basis vectors. These were either the mission-defined positional correction vectors <cit.> or the centroid shifts computed by psfmachine via the momentum method (average weighted by the Poisson noise). ii) The components of common trends between the source pixels. These were estimated using principal component analysis (PCA) of the set of pixels belonging to aperture-extracted pixel time series of sources. iii) The mission Cotrending Basis Vectors <cit.>.
The CBVs were built by the mission pipeline <cit.>.CBVs are built from the common trends across sources and contain multiple instrument systematic trends in sixteen basis vectors, systematic such as centroid shift due to reaction wheels adjustment, focus change due to data downlink, and others.For Kepler, single-scale CBVs are available in MAST archive and combine different time-scales systematics, e.g. long time scales to capture trends such as differential velocity aberration or short-term to capture focus change. Based on our investigation, we find of the three approaches CBVs perform the best to remove velocity aberration and focus change without introducing spurious signals. We find using the first four CBV vectors is sufficient to address the instrumental signal while keeping the dimensionality of the matrices low, and therefore computationally efficient.We apply a 2-day window smoothing b-spline function to each CBV vector to avoid introducing high-frequency noise from the CBVs into the corrected light curves.This smoothing step accounts for data discontinuity such as time gaps and value jumps. Figure <ref> shows an example of the first four CBV components for channel 37, quarter 5.While our work and the Kepler pipeline both use CBVs to address long-term trends, the methods used by each work are different. The Kepler pipeline used CBVs to detrend each source individually <cit.>.In our work, we apply CBVs to find the correction to the PSF model to best fit all sources in the batch of TPFs simultaneously, which is frequently >400 sources.Fitting multiple sources prevents overfitting the velocity aberration for an individual source.Additionally, in our method, we fit a low-resolution model in time to improve computational efficiency. We use the CBVs to build our “perturbation matrix” which is then applied to the mean PSF model in all frames to track its changes. Figure <ref> shows an example of the perturbation matrix. This matrix is multiplied into the PSF model in order to change it to best fit the data in time. The PSF changes from wider with significant wings to a narrower PSF, which the perturbation matrix is able to capture, see <cit.> for more discussion.To build our perturbation matrix we bin data in time.This binning keeps the matrix small, thereby making memory usage and computing time low.By binning we reduced the time resolution from ∼ 4,500 frames in time to 90 frames.Once the binned version has been fit to the data to find the best fitting weights, the model can be evaluated at all the cadences.By binning the data in this way we are assuming the motion is smooth and uniform, and that any differences between the data and this model are Gaussian distributed, which holds largely true during Kepler's primary mission observations.In contrast, data from the K2 mission exhibit a strong pattern due to the roll motion of the spacecraft, therefore this binning approach may not be adequate.The binned time sequence for each data point of the perturbed model is shown in Figure <ref>. The figure shows the changes in time of each uncontaminated pixel used to fit the model for a batch of 250 TPFs on channel 37 during quarter 5. The data in this figure are mean normalized, causing there to be a turnover point close to index 50. The common trend (red to blue) is due largely to velocity aberration, focus change is evident as vertical clear stripes. 
Note that the magnitude and “sign” of this trend are different for each pixel, depending on whether the pixel is close to the center of a source or in its wings, and on whether the pixel is on the leading or lagging side of the target as it moves due to velocity aberration. The magnitude of the effect is commonly ≈20%. Our time series model is shown in the middle panel and is built from the PSF model of each source, which has been perturbed and then fit to the image data to find the source flux. After removing the perturbed model in our approach, the pixel time series residuals improve markedly, to ≈2%. Our perturbation matrix results in a well-regularized model that preserves real physical variability, such as stellar activity or long-period variables, and removes most of the systematic trends due to velocity aberration and focus change. This is crucially different from the Kepler pipeline approach, as we are using the pixel data to inform our fit of the systematics and prevent overfitting. See Appendix <ref> for a direct comparison of long-period variable (LPV) light curves extracted with Kepler's PDCSAP and with our PSF photometry.

§.§ Flux Priors and Iteration

Since the LFD method uses linear modeling to fit the flux data, the solver can yield negative solutions that are mathematically correct. As negative flux values for stars are non-physical, we use narrowing priors to ensure the target flux remains positive. As discussed in <cit.>, to estimate the flux of a source we solve the linear equation 𝐟̂ = 𝐒·𝐯, where 𝐟̂ is our estimate of the pixel flux data, 𝐒 is the PSF model, and 𝐯 is a vector representing the intrinsic flux value of each source. Each source has a prior which is defined as a Gaussian with a mean (μ_𝐯) and a standard deviation (σ_𝐯). μ_𝐯 and σ_𝐯 are set to the Gaia G-band flux (F_G) and 10 √(F_G), respectively. The latter is a Poisson noise estimate and gives a fairly wide prior. In cases where the flux solution for a source is negative, we narrow the priors for that source and its neighbors (up to 5″ apart) by reducing σ_𝐯 by a factor of 2, constraining the fit. This narrowing is repeated three times, and any remaining negative sources are dropped from the source catalog. Then a final fit is done with only the remaining positive sources. While narrowing priors could potentially dampen intrinsic source variability in extreme cases, we find this approach to be adequate. With this iteration process, we are able to reduce the number of sources that return negative flux to about 2-5%, depending on how crowded the area is. Ultimately, <5% of the input sources are removed from the catalog due to negative flux estimations.

§ RESULTS

This work presents a light curve catalog with 606,900 sources. Our light curve files provide three main types of photometry (“aperture photometry”, “mean PSF”, and “corrected PSF”), as well as centroid estimates, the background model, and a chi-square time series from the PSF model. Both mean and corrected PSF are computed with the methods explained above. * “aperture” photometry is computed from an aperture mask estimated as in <cit.>, which optimizes contamination and completeness of the flux within the mask. * “mean PSF” photometry uses only the shape model loaded from the corresponding FFI and evaluated on the TPF stack data. * “corrected PSF” photometry is obtained from the perturbed model fitted with the observed cadences in the TPF stack.
* centroid vectors are computed by correcting the Gaia coordinates with offsets estimated using the momentum method (weighted average by Poisson uncertainty estimate) at every cadence.* background flux corresponds to the sum within the aperture of the model described in Section <ref> * chi-square time-series corresponds to χ^2 = ∑(𝐟_model - 𝐟_data)^2/𝐟_data, where 𝐟_model is the perturbed PSF estimate of the pixel data, 𝐟_data is the pixel flux, and the sum is over the pixel corresponding to the source. Chi-square time-series can be used both to diagnose where our extracted time-series may be imperfect, and any instances where the model does not fit well due to a changing PSF shape <cit.>.Our light curve files contain the per-quarter light curves with the aforementioned measurements.Additionally, we provide a stitched version that contains the aperture, corrected PSF, and a flattened version of the PSF photometry. The latter was flattened with a 2-day window b-spline function, designed to better enable the community to perform transit searches. Appendix <ref> provides a specification of the content of the light curve files. §.§ Light Curve qualityTo assess the quality of the photometric extraction and performance of the light curves presented in this work, we produce a series of metrics.This section details the extraction of quality metrics for both types of photometry (aperture and PSF) as well as noise metrics to measure the light curve accuracy.§.§.§ Quality MetricsDuring the light curve extraction process, we compute two aperture quality metrics and three quality PSF metrics. Similarly to the Kepler pipeline we compute FLFRCSAP and CROWDSAP, as described in. FLFRCSAP is the fraction of target flux contained in the photometric aperture over the total target flux.CROWDSAP is the ratio of target flux relative to the total flux within the photometric aperture including contaminating sources. These two metrics are computed using the evaluated PSF model on every source. It is important to highlight that extracted sources with only partial coverage in the pixel data, (i.e. sources partially outside of the pixel cutout) could have overestimated FLFRCSAP and CROWDSAP values.FLFRCSAP can be lower because it is estimated only with recorded pixel data, while the CROWDSAP values could not account for contaminants further than 4 away from the TPF edge. We generate three new metrics to describe the quality of our light curves:* PSFFRAC: how much of the total expected PSF was saved in the TPF (values of 0 to 1). Sources fully enclosed in a TPF will have values near 1 (because finite integration values are slightly lower than 1). Background sources that are partially on the TPF have values between 0 and 1. * PERTRATI: the ratio between the average flux from the mean model, and the average flux from the perturbed model. Sources with stable perturbed model have values close to 1. Values significantly different than 1 suggest a poor perturbation model mostly due to sparse fit data for the source. * PERTSTD: the ratio between the standard deviation of the mean model, and the standard deviation of the perturbed model. Small values indicate a stable perturbed model that does not introduce large variations to the extracted light curve. 
From the PSF model, we estimated the object PSF fraction (PSFFRAC) on the pixel data, i.e., how much of the expected PSF was saved in the TPF. By design, Kepler targets have the entire PSF inside the TPF. Background sources can have a partial PSF, especially objects near or outside the TPF edges. For these sources, the PSF fraction can also vary between observation seasons due to changes in the spacecraft pointing or changes in TPF size. This can lead to changes in photometric precision, with noisier light curves when the PSF fraction decreases. Due to this effect, we only provide stitched light curves from quarters with a PSF fraction larger than 0.5 to avoid the use of low-quality photometry. We still include all the extracted quarters in the light curve FITS file, regardless of low PSFFRAC values. To measure the effects of introducing the perturbation PSF model, we compare it against the mean PSF model estimated early in the process. We computed the ratio between the mean PSF and the perturbed PSF and took the mean (PERTRATI) and the standard deviation (PERTSTD). These metrics measure how much the perturbed model deviates from the mean PSF and how much variance it introduces. Both metrics can be used to filter light curves where the perturbed model introduces artifacts. In this case, we recommend defaulting to the photometry fitted with the mean PSF model.

§.§.§ Photometric Noise

The Kepler pipeline introduced a metric to estimate the noise quality of light curves, the Combined Differential Photometric Precision (CDPP). We use the estimate of the CDPP metric included in the lightkurve Python package, which implements a simpler version of CDPP <cit.>. We compute this metric for every extracted source in this work. Figure <ref> shows the estimated CDPP values as a function of G-band magnitude for all sources with a PSF fraction larger than 0.5. A large number of sources brighter than G=16 have CDPP values under 100 ppm, which is comparable to values estimated from the PDCSAP light curves computed by the Kepler pipeline. Figure <ref> shows there is a turnover point where PSF photometry becomes more accurate than aperture photometry, at approximately G=13.25, indicating a significant benefit in precision. The high-density horizontal ridge at log_10(6h-CDPP) ∼ -3.5 between 12th and 14th magnitude shows CDPP values about one order of magnitude larger than the main trend. An inspection of the color-magnitude diagram (CMD, see Section <ref> for details) showed that these sources correspond to red giant stars near the horizontal branch, see Figure <ref>. This sample of red giants is about 1.6% of the total catalog.

§.§.§ Light Curve Correlation

Although the LFD photometry method presents many advantages, such as computing speed, extraction of a large number of sources simultaneously, and the ability to deblend contaminated sources, it suffers from the problem of correlated light curves. Correlated time series from different sources can occur when: * Extremely close targets are difficult to separate, and the solution becomes close to degenerate. * The PSF model and/or source locations are incorrectly estimated. * The PSFs of each source vary in shape in a way that is not captured in the model (e.g.
sources of different colors have weakly different PSF shapes) To assess when light curves are significantly correlated with each other, we compute the Pearson coefficient r between pairs of light curves.We found all pairs of time series within 60 from each other and then removed the long-term trend from the light curves using a third-degree polynomial in time, (removing any long-term variability due to residual systematics, while preserving periodic variability).We compute the Pearson correlation coefficient between the time-series pairs. Figure <ref> shows the distribution of statistically significant (p-value < 0.05) coefficients as a function of pair distance.High values of r indicate that the pairs are significantly correlated. The majority of pairs have values r < 0.15, meaning no significant correlation.Pairs with values r ∼ 0.4 demonstrated no visual correlation after inspection. Almost all correlated pairs (r > 0.5) are within 25, which relates to the typical size of a TPF (∼ 5 pixels across) meaning correlated pairs are likely found in the same TPF.Less than 1 % of pairs fall in the correlated region (r > 0.5 and d < 25).Pairs with r > 0.5 beyond 25 do not show correlated signals and the r values are likely due to remaining monotonic trends. For a correlated pair, we assume the brighter source is the true variable, and the fainter gets contaminated.We opt to remove from our light curve catalog the faint source (∼ 1 % from the total data set) from every correlated pair.§.§ Sources DemographicThis work presents the first catalog of light curves using observations from Kepler's prime mission nearly without a selection bias. The number of new light curves (> 400,000) doubles the total delivered by the Kepler pipeline (∼ 200,000).Figure <ref> shows the color-magnitude diagrams (CMD) using Gaia DR3 photometric bands and distances computed by <cit.>.As a comparison, Figure <ref>shows Kepler targets only (top left), new sources (top right), all sources in the catalog (bottom left), and the ratio between both samples (bottom right).The KBonus Background sample has significantly more sources around the main sequence region, particularly toward redder colors and the binary sequence.The number of new sources is smaller than Kepler's target for some evolved stars, particularly for luminous red giants, where the addition of new sources is ∼10%.However, there is a significant increase in new low-luminosity red giants. This reflects the target selection bias imposed by the Kepler mission that favored luminous red giants instead of cool, low-luminosity giants.The Kepler mission targeted approximately 3,700 M-dwarf stars.In this work, we expand the catalog with almost 27,500 new light curves in the M-type dwarf region of the CMD.We follow the prescription presented in <cit.> based on Gaia, WISE, and 2MASS bands (if available) to select potential M-dwarfs combined with cuts in the characteristic range of stellar temperature of m-type stars using Gaia's effective temperature. Additionally, there are 50 new light curves in the white dwarf (WD) sequence, in addition to the previously extracted 41 WD Kepler targets. 
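For the M-dwarf selection above, the sketch below shows only the effective-temperature part of the cut and is hypothetical: the sources table and its teff column are assumed stand-ins for the catalog cross-matched with Gaia effective temperatures, and the 2,400–3,900 K window is an indicative M-dwarf range, not the full color-based prescription of the cited work.

```python
# Hypothetical sketch: `sources` is an assumed pandas DataFrame containing a
# Gaia effective-temperature column `teff` (K); the temperature window is an
# indicative M-dwarf range and does not reproduce the cited prescription.
import pandas as pd

def select_mdwarf_candidates(sources: pd.DataFrame) -> pd.DataFrame:
    return sources[sources["teff"].between(2400.0, 3900.0)]
```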
§.§ Confirmed Exoplanets

We compared the estimated transit depths of previously confirmed Kepler exoplanets between Kepler's PDCSAP and our PSF light curves. We selected all exoplanets with Archive Disposition `CONFIRMED' from the NASA Exoplanet Archive <cit.>. To compare the transit depth measured directly on the PDCSAP and PSF light curves, we performed a Box Least-Squares (BLS) <cit.> periodogram using a dense grid around the reported periods. The BLS method searches for periodic variability by fitting the data with an upside-down top-hat periodic model, and it has been extensively used to analyze transiting signals <cit.>. Both PDCSAP and PSF light curves were flattened beforehand to remove stellar variability using 2-day window b-spline functions while masking out cadences with transits. Figure <ref> shows the comparison in transit depth between the PDCSAP and PSF light curves. Overall, the computed transit depths are consistent. A skewness value of 5 for the ratio between depth values (PSF over PDCSAP) indicates that the PSF light curves yield slightly deeper transits. This is expected, as some of the Kepler apertures could be contaminated by nearby sources, which is addressed by the PSF photometry. The 6h-CDPP values from both light curves are also consistent. We measured the impact of the transit depth change on the estimated planet radius by scaling literature planet sizes by the ratio between the radius estimates from the PSF and the PDCSAP light curves. We found a minor change in the exoplanet population towards a tighter distribution in planet size (see Figure <ref>), particularly for orbital periods longer than 20 days, but no significant change in the global distribution of planet sizes. Further analysis of planet size populations will require complete exoplanet modeling using up-to-date host stellar parameters and Bayesian inference. We leave this analysis for a future study.

§.§ Revisiting False Positive KOIs

To demonstrate the potential unlocked by this new light curve catalog, we examine a sub-sample of Kepler Objects of Interest (KOIs) that were flagged as centroid-offset false positive exoplanet candidates. These are light curves where the aperture is likely contaminated by a background eclipsing source, most likely an eclipsing stellar binary, one of the main sources of contamination when searching for exoplanets via the transit method. Thanks to the use of a (nearly) complete source catalog (down to G = 19) and PSF photometry, we are able to separate blended sources down to 1″ apart. In this section, we present two KOI examples where we are able to separate the eclipsing sources. See Appendix <ref> for more KOI examples. A full analysis of all the false positive KOIs (>1,800) is left for future work.

§.§.§ KOI 770.01

KOI 770.01 is a false positive candidate with a reported transit period of 1.506 days and a 2211 ppm depth, flagged with a centroid offset. The top panels in Figure <ref> show the Kepler PDCSAP light curve (red line) computed from a 4-pixel aperture mask (red mask on the pixel image). We generate two PSF time series for this dataset, one for each of the two sources blended in the image (shown in the pixel image by black and blue markers). Our PSF light curve (black line) of this source (black marker) does not show the transiting signal. The neighbor source Gaia DR3 2134870879540928896 (blue marker in the pixel image) is 1.36″ from the KOI and is 2.6 magnitudes fainter.
The PSF photometry for the contaminant shows a clear eclipse signal at the same period (blue light curve).By the shape of the transit and its depth, the contaminant is a potential EB. Our PSF photometry successfully separates both highly contaminated sources at high contrast.§.§.§ KOI 909.01Similar to the previous case, the KOI 909.01 is also a centroid offset false positive.Figure <ref> shows the pixel image and the light curve of the target and neighbor.The transit depth is 4147 ppm with a period of 16.37 days.The source of contamination in the target aperture is a neighbor Kepler target, KIC 8256044, flagged as an EB.While these two sources are not highly blended, the separation between stars is 8 (2 pixels), KIC 8256049 flux leaked into the candidate's aperture. The PSF photometry is able to successfully deblend both light curves. The difference in amplitude of the stellar variability seen between the Kepler PDCSAP and our PSF photometry for KIC 8256049 is mostly due to aperture contamination.§ LIMITATIONSDue to the assumptions made throughout this work, some limitations for the PSF, the perturbation model, and the light curve arise.Here we list and discuss some of these limitations. * The light curve catalog is limited to sources brighter than G band 19th magnitude and dimmer than magnitude 10th.For blended sources within 1 only the brighter object was extracted while the fainter was removed.Users can use theAPI to extract sources outside the aforementioned ranges, although a fine-tuning of model parameters could be required for the PSF model to work outside the linear response range of the CCDs. * Sources that are near the edge of the TPFs or outside of them are fitted using partial data.Although PSF photometry is still able to extract them, the precision is not optimal.Due to seasonal pointing accuracy, changes in the TPF shape, or high proper motion sources around the edge of the TPFs can have a different fraction of flux on the pixel data across quarters.This is reflected as a change in photometry precision between quarters.To minimize the risk of using subpar precision light curves we only stitched quarters with PSFFRAC ⩾ 0.5.Users can still access the light curves for all quarters as they are provided in the multi-extension FITS files.Additionally, aperture photometry, as well as the extraction metrics FLFRCSAP and CROWDSAP, for these partial sources are underestimated. * The PSF models are fitted, as described in <cit.>, by solving a linear model as a function of positions.This approach does not account for the change in shape due to source brightness.Brighter sources can have a slightly different profile shape than fainter sources.Although we evaluated the option of a flux-dependent PSF model, these changes were noticeable in the outer regions of the PSF wings but at a minor scale.The latter can become relevant for sources showing high-amplitude variability, such as LPVs, where the change in PSF profile can impact amplitude measurements. * The PSF model could also depend on the CCD location.We tested a PSF model with additional dependency on the pixel and row position with respect to the center of the field of view and we did not find significant changes in PSF shape across the CCD. This is expected for Kepler observations, where the sky coverage of a single CCD (∼ 1.2 deg^2) is relatively small. 
However, for larger field-of-view instruments such as the cameras of the Transiting Exoplanet Survey Satellite <cit.>, whose CCDs cover ∼144 deg^2, the PSF changes significantly across the CCD, making this model dependency necessary. * The PSF model for CCD channels at the border of the field of view often exhibits extreme distortions and prominent features (see Figure 2 in <cit.> for a display of PSF models across CCDs) that could affect the model performance. We tested light curves created with PSF models varying their flexibility (number of spline knots) and their center. These alterations mimic possible miscalculations of the centroids due to distorted PSF shapes and a lack of model fidelity when steep gradients in the PSF profile are present. We computed several metrics, such as median flux, linear trend slope, amplitude, CDPP, and multiple flux percentile ratios, to assess the stability of the extracted light curves. We found that even when the PSF centroid is missed by less than 2″ (half a pixel) or when the model struggles with drastic gradient changes (e.g., a PSF shape with two close “leg” features), the light-curve metric distributions are consistent between models. This shows that our models are statistically robust to model parameters and small imperfections. However, some exceptions occur for highly blended and high-contrast sources. The latter is the case for Tabby's Star <cit.> and a fainter (G = 17.6) contaminating star (Gaia DR3 2081900944807842560) located less than 2″ away. For both, our method produced imprecise photometry levels across quarters that could only be overcome by fitting the sources alone (i.e., removing one of them from the input catalog). Although this approach is useful when working with a specific target, it is not optimal when performing massive source extraction. * As described in Section <ref>, correlations can still be found between bright, highly variable sources and fainter neighbors, especially when they are close on the detector. We computed a correlation metric across pairs of light curves and removed all faint sources that showed a correlation metric above the threshold. Although this metric is effective in removing correlated sources, small residual correlations can still be present, leading to detecting the wrong cause of variability. We encourage users of these light curves to further analyze neighbor sources to secure the true origin of the variability.
* Although the perturbation model removes most of the velocity aberration and focus change trends, because the model is fitted for the entire scene these trends are not fully suppressed in every target. This is particularly true for highly blended sources and for sources with partial data, where only the wings of the PSF are used to fit the models. * The LFD photometry method relies on solving a linear model by means of least-squares minimization. This simplifies and enables rapid model fitting and light curve extraction, but no physical constraints are placed on the expected flux values, therefore negative solutions are mathematically possible. We mitigate this issue by iterating the solving step while narrowing the priors (see <ref>) for sources with predicted negative fluxes and their contaminating neighbors (which force the negative solution). Sources that still have negative fluxes after the iteration is completed are rejected from the final catalog. We found that this mainly affects sources with mid to high contrast (≥2 mag) within 15″.

Figure <ref> illustrates the combined extraction biases discussed above in (1), (2), (4), and (7). The figure shows the magnitude contrast and distance distribution for pairs of sources from the input and extracted catalogs. The former is denser for low contrast and for separations between 5″ and 13″, mainly because of sources slightly outside the TPF that are rejected due to insufficient pixel data (2), and to a lesser extent because of predicted negative fluxes (7). The decrease in density beyond 15″ seen in the extracted catalog is due to the typical size of TPFs. The absence of pairs within 1″ is due to the selection bias described in (1). The apparently larger number counts in the extracted catalog for pairs with mid to high contrast are mainly due to (1) and (2) and partially to (4).

§ FUTURE WORK

A natural step forward is to use the psfmachine library to extract light curves from the K2 <cit.> and TESS missions.
K2 data present one major challenge: the failure of the spacecraft's reaction wheels caused a loss of telescope pointing precision, leading to a strong and characteristic jitter motion with a half-day timescale. This jitter motion drastically affects our perturbed model. First, the scene motion is no longer smooth, and the binning done to fit the perturbed model needs to increase in resolution to capture the motion. Increasing the time resolution of the perturbed model increases memory usage and computing time. Second, the CBV vectors are likely not the best basis vectors to fit the perturbed model. Preliminary results have shown that using the centroid (or the mission positional correction) vectors leads to better corrected light curves. An alternative approach is to compute PSF models and offset corrections for every cadence. Fitting a PSF model per cadence requires a large number of objects and pixels to be available, which can only be achieved by increasing the number of TPFs or when working with K2 superstamp pixel masks <cit.>, adding to computing costs. By design, the LFD method and its Python implementation psfmachine work well with TESS data after fine-tuning model parameters that account for the difference in pixel scale (TESS is 21″/pix), integration times, and crowding effects. Although it is tempting to compile large catalogs of light curves for the entire TESS archive (several TB of TPF data), we believe that providing users with a well-built and robust Python library able to quickly extract light curves (by using pre-computed models) or with full control of model parameters represents a bigger contribution to the community. Moreover, there are other active pipelines extracting similar PSF photometry from TESS primary and extended mission data. <cit.> follows a similar approach, using Gaia DR3 as the input catalog and fitting the effective PSF and background signal as a single linear model. This model is later fitted to every source except the extracted target to create a model of the full image; this model is subtracted from the data, and photometry is then performed on the decontaminated image of the target source. This approach is limited by the assumption that the background level is constant at the target's location and that the stars around it are constant. By design, the LFD method does not assume this and could improve on light curve precision. The current state of the psfmachine API implements loading of PSF profile models pre-computed from Kepler's FFIs, as discussed in Section <ref>. This enables users to quickly perform PSF photometry on single sources or on a small number of TPFs. However, this is limited to the use of the mean PSF model and not the full perturbed PSF model. This limitation is acceptable for Kepler data, where the perturbed PSF model can only be fitted with a moderate number (⩾150) of TPFs and not with the FFIs, due to the low number of cadences per quarter. TESS FFIs are observed with a 30- or 10-minute cadence. These data present the opportunity to compute and save the perturbed model for posterior extraction of light curves from any TESS data.
We plan to extend the psfmachine API to implement the saving and loading of the perturbed PSF model, providing a way to extract time series from individual TPFs using our best-fit perturbation model. These new methods will speed up the photometry extraction using the fully corrected model, especially when extracting a small number of targets. Light curve extraction from TESS FFIs is also possible with psfmachine, with the caveat that this process is considerably more memory intensive due to the loading of thousands of 2048 × 2048 pixel images. A tractable solution is the combination of processing the FFIs in small cutouts (e.g., a 200 × 200 pixel cutout has sufficient sources to estimate robust PSF models) and using pre-computed models.

§ SUMMARY

Kepler's primary mission consisted of eighteen ∼90-day quarters during which the telescope constantly observed the same field of view for almost 4 years. These observations enabled the community to find thousands of new exoplanets using the transit method, as well as to perform numerous stellar variability and transient studies. The Kepler mission delivered more than 200,000 image cutouts around previously selected targets, together with their aperture photometry light curves. In this work, we reanalyze the image cutouts and extract PSF photometry light curves for all sources detected in the pixel data. We created a catalog with 606,900 extracted sources, of which 406,548 are new light curves from background sources. These background sources are objects detected in the pixel data that do not correspond to Kepler targets. In our extraction pipeline, we used the LFD photometry method <cit.>. The LFD method performs PSF photometry on a collection of TPFs by modeling the scene simultaneously. It leverages the accuracy and precision of the Gaia catalogs to fix the source locations and estimate a PSF model of the scene. The method also computes corrections to the PSF model to account for the scene motion due to the velocity aberration effect, focus change, and pointing instabilities. Our extraction pipeline includes background modeling and subtraction, PSF fitting and photometry, aperture photometry using the PSF profile shape, and numerous extraction metrics useful to characterize the quality of the data. The light curves produced in this work are available for public access via the MAST archive. We implemented new methods and routines in the Python package psfmachine, such as the background modeling, the loading of PSF models pre-computed from FFI data, and user-defined basis vectors for the perturbation model. These new features are included in v1.1.4 of psfmachine. We demonstrated that the quality of our light curves reaches accuracy levels similar to those delivered by the Kepler pipeline. The computed CDPP values range from tens of ppm for sources brighter than G=14 to hundreds of ppm for sources between 16th and 18th magnitude. Statistically, PSF photometry performs up to 40% better in CDPP value compared to aperture photometry for sources fainter than G=13.25. We listed and discussed the limitations of our extraction pipeline and the resulting light curves. These serve as guidelines for users of this dataset. We show two applications as examples of what can be accomplished with these high-level science products. First, we compared the transit depths and estimated exoplanet radii between the PDCSAP and our PSF light curves. The result suggests that the PSF photometry yields slightly deeper transits and therefore larger planets.
However, we did not find significant changes to the planet size-period relationship. Secondly, we show examples of the power of PSF photometry to deblend contaminated sources by revisiting KOI false positives due to background binary contamination. The LFD photometry method successfully separates highly blended sources at high contrast, which is relevant for distinguishing false positives from real exoplanet candidates. This new dataset presents numerous other opportunities to the community, not limited to exoplanet studies. To name some, there are 50 new white dwarf light curves, which add to the original 41 in the Kepler target list, thousands of light curves of potential M-dwarf stars, and an expanded sample for asteroseismic analysis of rotating stars.

This paper includes data collected by the Kepler mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the Kepler mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. Funding for this work for JMP is provided by grant number 80NSSC20K0874, through NASA ROSES.

Facility: Kepler. Software: astropy <cit.>, lightkurve <cit.>, numpy <cit.>, psfmachine <cit.>, scipy <cit.>.

§ KBONUS LIGHT CURVE EXAMPLES: CONFIRMED PLANETS

Figures <ref>, <ref>, <ref>, <ref>, and <ref> compare Kepler PDCSAP and our PSF light curves for confirmed exoplanets at different signal-to-noise ratio levels.

§ KBONUS LIGHT CURVE EXAMPLES: KOIS

Figures <ref>, <ref> and <ref> complement Section <ref>, showing three more false positive KOI examples.

§ KBONUS LIGHT CURVE EXAMPLES: LONG PERIOD VARIABLES

We selected ten LPVs from the Gaia DR3 variable catalog <cit.> to illustrate the inter-quarter photometry. Figures <ref> and <ref> show the light curve examples and the PDCSAP photometry for comparison. Thanks to the perturbed model (see Section <ref>), which fits the scene velocity aberration and long-term instrumental trends, stellar long-term variability is preserved, and the photometry from consecutive quarters matches in almost all cases.

§ KBONUS LIGHT CURVE FILES

The light curve files are delivered as multi-extension FITS files following a similar organization to the original Kepler LCFs. Each file is named following a pattern based on the corresponding Kepler Input Catalog number if the source exists in that catalog, or on the Gaia DR3 designation otherwise. Table <ref> shows a description of the FITS files. LIGHTCURVE_STITCHED is the main extension, containing the fully stitched time series using quarters with a PSF fraction greater than 0.5. Table <ref> details the columns in this extension.
The LIGHTCURVE_Q extension contains the light curve for single quarters, Table <ref> details its content, this extension contains per-quarter metrics and other measurements such as centroid values.The APERTURE_Q extension contains the pixel mask of the corresponding quarter used for the aperture photometry in the shape of the TPF of origin.The FITS files only have LIGHTCURVE and APERTURE extensions for quarters where the source was detected, therefore the number of extensions varies. The FITS files are structured to work seamlessly with the <cit.> package.In this way, users can easily load the stitched light curve into aobject.See the KBonus documentation[<https://github.com/jorgemarpa/KBonus/tree/main>] for further details on how to work with these files. lllcl[htb!] Multi-extension FITS file example for file .0ptNo. Name Type Cards Dimensions0PRIMARYPrimaryHDU38 - 1LIGHTCURVE_STITCHED BinTableHDU 40 65276R x 11C2LIGHTCURVE_Q0 BinTableHDU 62 469R x 13C3APERTURE_Q0 ImageHDU10 (9, 8)4LIGHTCURVE_Q1 BinTableHDU 62 1624R x 13C 5APERTURE_Q1 ImageHDU10 (8, 7)6LIGHTCURVE_Q2 BinTableHDU 62 4081R x 13C 7APERTURE_Q2 ImageHDU10 (7, 6)8LIGHTCURVE_Q3 BinTableHDU 62 4135R x 13C 9APERTURE_Q3 ImageHDU10 (6, 6)10 LIGHTCURVE_Q4 BinTableHDU 62 4110R x 13C 11 APERTURE_Q4 ImageHDU10 (6, 5)12 LIGHTCURVE_Q5 BinTableHDU 62 4487R x 13C 13 APERTURE_Q5 ImageHDU10 (6, 6)14 LIGHTCURVE_Q6 BinTableHDU 62 4272R x 13C 15 APERTURE_Q6 ImageHDU10 (7, 6)16 LIGHTCURVE_Q7 BinTableHDU 62 4227R x 13C 17 APERTURE_Q7 ImageHDU10 (6, 6)18 LIGHTCURVE_Q8 BinTableHDU 62 3107R x 13C 19 APERTURE_Q8 ImageHDU10 (6, 5)20 LIGHTCURVE_Q9 BinTableHDU 62 4653R x 13C 21 APERTURE_Q9 ImageHDU10 (6, 6)22 LIGHTCURVE_Q10BinTableHDU 62 4442R x 13C 23 APERTURE_Q10ImageHDU10 (7, 6)24 LIGHTCURVE_Q11BinTableHDU 62 4474R x 13C 25 APERTURE_Q11ImageHDU10 (6, 6)26 LIGHTCURVE_Q12BinTableHDU 62 3885R x 13C 27 APERTURE_Q12ImageHDU10 (6, 5)28 LIGHTCURVE_Q13BinTableHDU 62 4243R x 13C 29 APERTURE_Q13ImageHDU10 (6, 6)30 LIGHTCURVE_Q14BinTableHDU 62 4270R x 13C 31 APERTURE_Q14ImageHDU10 (7, 6)32 LIGHTCURVE_Q15BinTableHDU 62 4367R x 13C 33 APERTURE_Q15ImageHDU10 (6, 6)34 LIGHTCURVE_Q16BinTableHDU 62 3535R x 13C 35 APERTURE_Q16ImageHDU10 (6, 5)36 LIGHTCURVE_Q17BinTableHDU 62 1289R x 13C 37 APERTURE_Q17ImageHDU10 (6, 6) lllcl[htb!] Description of the columns available in the LIGHTCURVE_STITCHED extension.0ptColumn FieldFormatUnitsDescription1Time float64BJD - 2454833 Time value in BKJD. 2Cadencenoint32- Cadence number. 3Flux float64e^-/s PSF flux from stitched quarters. 4Flux_errfloat64e^-/s PSF flux error from stitched quarters. 6SAP_fluxfloat64e^-/s SAP flux from stitched quarters. 7SAP_flux_err float64e^-/s SAP flux error from stitched quarters. 8PSF_flat_fluxfloat64e^-/s PSF flux from stitched quarters after flattening. 9PSF_flat_flux_err float64e^-/s PSF flux error from stitched quarters after flattening. 10 SAP_quality int32- Quality flag from the TPF. 11 Flatten_maskint32- Quality flag from the flattening process.lllcl[htb!] Description of the columns available in the LIGHTCURVE_Q extensions.0ptColumn FieldFormatUnitsDescription1Time float64BJD - 2454833 Time value in BKJD. 2Cadencenoint32- Cadence number. 3Flux float64e^-/s Corrected PSF flux. 4Flux_errfloat64e^-/s Corrected PSF flux error. 5SAP_Fluxfloat64e^-/s SAP flux. 6SAP_Flux_err float64e^-/s SAP flux error. 7PSF_flux_novafloat64e^-/s Mean PSF flux. 8PSF_flux_nova_err float64e^-/s Mean PSF flux error. 9SAP_BKG float64e^-/s SAP background flux. 10 Centroid_Column float64pix Centroid column value. 
11 Centroid_Rowfloat64pix Centroid row value. 12 Red_chi2float64- Reduced chi-squared value between PSF model and data. 13 SAP_quality int32- Quality flag from the TPF. § KBONUS SOURCE CATALOGThe catalog released with this work contains the list of extracted sources resulting in a light curve FITS file. It also contains extraction metrics and availability flags that can be used to filter sources. Table <ref> shows all the fields available in this catalog.lllcl[htb!] Description of Columns in Source Catalog.0ptColumn FieldFormatUnitsDescription1gaia_designationString- Gaia designation number 2ra Float32 deg Right Ascension 3decFloat32 deg Declination 4sap_mean_fluxFloat32 e^-/s Mean of SAP flux5sap_mean_flux_err Float32 e^-/s Mean of SAP flux error6psf_mean_fluxFloat32 e^-/s Mean of PSF flux7psf_mean_flux_err Float32 e^-/s Mean of PSF flux error8flfrcsap Float32 - Minimum detected SAP flux fraction9crowdsap Float32 - Minimum detected SAP crowding 10 npixsapFloat32 - Minimum detected SAP number of pixels 11 psffracFloat32 - Minimum detected PSF fraction 12 pertrati Float32 - Mean detected perturbed/mean PSF ratio13 pertstdFloat32 - Minimum detected perturbed PSF standard deviation 14 psf_avail String- String encoding PSF flux availability per quarter 15 sap_avail String- String encoding SAP flux availability per quarter 16 psf_fraction_flagString- String encoding PSF fraction quality per quarter17 phot_g_mean_mag Float32 mag Gaia G band mean magnitude18 phot_bp_mean_magFloat32 mag Gaia BP band mean magnitude 19 phot_rp_mean_magFloat32 mag Gaia RP band mean magnitude 20 tpf_org Int32 - TPF where sources was detected21 kicInt32 - Kepler input catalog number 22 kic_sep Float32 arcsecDistance between KIC and Gaia DR3 23 kepmag Float32 mag Kepler magnitude24 file_name String- FITS file name § DATA BUNDLESTo facilitate data access to specific source types, we have created the following data bundles containing the light curves and the source catalog:* M-dwarfs: contains a total of 29,800 sources. We follow the object selection described in Section <ref>.* KOIs and neighbors: it packages light curves listed in the NASA Exoplanet Archive, including confirmed and false positive candidates. It also includes the light curve of neighbor sources around each KOI in a 30 radius. This data bundle is useful for users that desires to explore false positives candidates and their neighbors.* White dwarfs: contains a total of 91 light curves as described in Section <ref>. The files are stored in the Mikulski Archive for Space Telescopes (MAST) archive [KBonus Kepler Background [10.17909/7jbr-w430]10.17909/7jbr-w430] and can be downloaded in bulk mode.
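As a quick, hypothetical illustration of reading one of these files, the sketch below uses astropy and lightkurve; the local file name kbonus_lcf.fits is a placeholder, and the extension and column names follow the tables above (the exact capitalization in the delivered files may differ).

```python
# Hypothetical example: `kbonus_lcf.fits` is a placeholder name for a
# downloaded KBonus light curve file; extension/column names follow the
# LIGHTCURVE_STITCHED description above.
from astropy.io import fits
from astropy.table import Table
import lightkurve as lk

with fits.open("kbonus_lcf.fits") as hdul:
    stitched = Table(hdul["LIGHTCURVE_STITCHED"].data)

lc = lk.LightCurve(time=stitched["Time"],      # BKJD
                   flux=stitched["Flux"],      # PSF flux, e-/s
                   flux_err=stitched["Flux_err"])
lc.plot()
```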
http://arxiv.org/abs/2310.17733v1
{ "authors": [ "Jorge Martinez-Palomera", "Christina Hedges", "Jessie Dotson" ], "categories": [ "astro-ph.EP", "astro-ph.IM", "astro-ph.SR" ], "primary_category": "astro-ph.EP", "published": "20231026184843", "title": "Kepler Bonus: Light Curves of Kepler Background Sources" }
Quantization of two- and three-player cooperative games based on QRA
Ivan Eryganov, Jaroslav Hrdina and Aleš Návrat
Accepted: 8 August 2023
====================================================================
Paper <cit.> shows that the (vertex) spanning tree degree enumerator polynomial of a connected graph G is a real stable polynomial (id est is non-zero if all variables have positive imaginary parts) if and only if G is distance-hereditary. In this note we generalize the result to weighted graphs. This generalization allows us to define the class of weighted distance-hereditary graphs.

§ INTRODUCTION

Define the upper complex half-plane

ℍ := { z ∈ ℂ : Im(z) > 0 }.

A polynomial P(x_1, x_2, …, x_n) with real coefficients is called real stable if

P(z_1, z_2, …, z_n) ≠ 0

for all z_1, z_2, …, z_n ∈ ℍ. Clearly, a nonzero polynomial of the form a_1x_1 + a_2x_2 + … + a_nx_n with a_i ∈ ℝ_+ is real stable, and obviously the product of two real stable polynomials is again a real stable polynomial. The following statement is widely known; see, for example, <cit.>.

Let P(x_1, x_2, …, x_n) be a real stable polynomial. Then the following polynomials are also stable or identically zero: (i) x_1^d_1 P(-1/x_1, x_2, …, x_n), where d_1 is the degree of P in the variable x_1; (ii) ∂P/∂x_1 (x_1, x_2, …, x_n); (iii) Q(x_1, x_2, …, x_{n-1}) := P(x_1, x_2, …, x_{n-1}, a) for any real a.

More about stable polynomials and their applications can be found in the surveys <cit.> (see also the authors' comments in Sections 1 and 4 of <cit.>).

Let G = (V, E) be a finite simple connected undirected graph, and let |V| = n, |E| = k. Let N_G(v) = { u ∈ V : vu ∈ E } be the neighborhood of a vertex v, and let deg_G(v) be the degree of the vertex v in the graph G. For a subset of vertices U ⊂ V, define the induced subgraph G[U] as the graph whose vertices are the elements of U and whose edges are those edges of G with both endpoints in U. Denote by S(G) the set of all spanning trees of the graph G. Denote the complete graph on n vertices and the complete bipartite graph with parts of sizes n and m by K_n and K_{n,m}, respectively. A connected graph is called biconnected if it remains connected after the deletion of any vertex (together with all edges incident to it). Number the edges of G from 1 to k, and to each edge i = 1, …, k assign a variable x_i. Define the edge spanning polynomial of the graph G

Q_G(x_1, x_2, …, x_k) = ∑_{T ∈ S(G)} ∏_{j ∈ E(T)} x_j.

As is well known <cit.>, the polynomial Q_G is real stable for any finite connected simple graph G with at least two vertices. One can also number the vertices from 1 to n, assign to them variables x_1, …, x_n, and define the vertex spanning polynomial

P_G(x_1, x_2, …, x_n) = ∑_{T ∈ S(G)} ∏_{v ∈ V} x_v^{deg_T(v) - 1}.

This polynomial is not always real stable. We call a graph stable if the polynomial P_G is real stable.
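For example, consider the cycle C_4 with vertices 1, 2, 3, 4 and edges 12, 23, 34, 41. Every spanning tree of C_4 is a path obtained by deleting one edge, and only the two interior vertices of that path have degree 2, so

P_{C_4}(x_1, x_2, x_3, x_4) = x_3x_4 + x_4x_1 + x_1x_2 + x_2x_3 = (x_1 + x_3)(x_2 + x_4),

a product of linear forms with positive coefficients; hence P_{C_4} is real stable and C_4 is a stable graph.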
A graph is called distance-hereditary if in every connected induced subgraph the distance between any two vertices equals the distance between them in the original graph (see Section <ref> for more details on distance-hereditary graphs). It is shown in <cit.> that these two classes coincide.
The stable graphs are exactly the distance-hereditary graphs.
Combinatorial motivation for and applications of Theorem <ref>, as well as related open questions, are given in <cit.>. This note is devoted to one of those questions: the classification of stable weighted graphs. For a graph G with a weight function w from E(G) to ℝ, define the weighted vertex spanning tree polynomial P_G,w(x_1, x_2, …, x_n)= ∑_T ∈ S(G)∏_e ∈ T w(e) ∏_v ∈ V x_v^deg_T(v)-1. We call a weighted graph (G,w) stable if the polynomial P_G,w is real stable. Without loss of generality w never vanishes (edges of weight 0 can simply be deleted). It is shown in <cit.> that one may assume w to be positive; we repeat the argument for completeness. If a weighted graph is not biconnected, the sign of w can be flipped on each biconnected component separately: by Lemma <ref> the zeros of P_G,w depend only on its restrictions to the biconnected components.
If a biconnected weighted graph (G,w) is stable, then w has the same sign on all edges of G.
Suppose the contrary. Then the graph contains two edges of opposite signs, and by connectivity it contains two such edges sharing an endpoint; say the common vertex is v and the edges are vu_1 and vu_2 with w(vu_1)>0>w(vu_2). Note that the substitution x_v=0 yields P_G ∖ v (x_1, …, x_n) · (∑_t ∈ N(v) w(vt)x_t). Neither factor is identically zero (since G ∖ v is connected), so both must be stable. But the second factor is clearly not stable: setting all variables except x_u_1, x_u_2 to zero must preserve stability, yet the substitution x_u_1=-iw(vu_2), x_u_2=iw(vu_1) makes it vanish. A contradiction.
Hence the sign is constant within each biconnected component, and flipping the sign on a component does not affect the stability of the whole graph. It therefore suffices to classify graphs in which all weights are positive (an arbitrary weighted graph is stable if and only if it is obtained by sign flips on biconnected components from a stable weighted graph with positive weights). From now on all graphs are assumed weighted and all weights positive unless stated otherwise. The following theorem gives an explicit (polynomial-time checkable) characterization of weighted stable graphs.
The weighted stable graphs with positive w are exactly the graphs obtained from a one-vertex graph by weight-preserving vertex copyings (with or without adding an edge of arbitrary positive weight between the two copies), by gluing two weighted stable graphs at a vertex, and by multiplying all edges incident to a single vertex by a positive number.
Note that the class of weighted stable graphs does not reduce to the class of (unweighted) stable graphs via the rescaling operations. An illustrative example is the complete graph on four vertices in which five edges have weight 1 and the sixth has weight 2. Structure of the paper. Section <ref> presents equivalent definitions of distance-hereditary graphs, some of which we use. Section <ref> proves the main theorem.
Section <ref> discusses the relation between the obtained polynomial definition of a weighted distance-hereditary graph and other definitions.
§ DISTANCE-HEREDITARY GRAPHS
Recall that a distance-hereditary graph is a graph in which every connected induced subgraph preserves the distance between any two vertices. For more on distance-hereditary graphs see, for example, the paper <cit.> and the book <cit.>. In particular, this book gives the following equivalent definitions of the class of distance-hereditary graphs:
(i) Graphs in which every induced path is a shortest path.
(ii) Graphs in which every cycle of length at least five has two or more diagonals, and every cycle of length exactly five has at least one pair of crossing diagonals.
(iii) Graphs in which every cycle of length five or more has at least one pair of crossing diagonals.
(iv) Graphs in which, for any four vertices u, v, w and x, at least two of the three distance sums d(u,v)+d(w,x), d(u,w)+d(v,x) and d(u,x)+d(v,w) are equal.
(v) Graphs that contain none of the following induced subgraphs: a cycle of length five or more, a gem, a house, or a domino (see Fig. <ref>).
(vi) Graphs that can be built from a single vertex by a sequence of the following three operations: * adding a new pendant vertex joined by a single edge to an existing vertex of the graph; * replacing any vertex of the graph by a pair of vertices, each of which has the same neighbours as the removed vertex; * replacing any vertex of the graph by a pair of vertices, each of which has the same neighbours as the removed vertex, including the other vertex of the pair.
Note that the proof of Theorem <ref> was based on the equivalence of definitions (v) and (vi). The proof of Theorem <ref> also uses definitions (v) and (vi), as well as the idea behind definition (iv). One more algebraic definition appeared later in <cit.>; we discuss it in detail in Section <ref>.
§ PROOF OF THEOREM <REF>
Recall that by default all graphs are weighted with positive weights.
§.§ Every constructed graph is weighted stable
The following lemmas are weighted generalizations of lemmas from <cit.>.
Let (G,w) be a weighted graph on n vertices. Consider the weighted graph (G_1,w_1) obtained from (G,w) by adding a vertex v_n+1, joining it to the vertices of N(v_n) by edges of the same weights as the corresponding edges from v_n, and joining it to v_n by an edge of weight p ≥ 0 (the case p=0 corresponds to not drawing this edge). Then P_G_1,w_1(x_1, …, x_n, x_n+1)=P_G,w(x_1, …, x_n+x_n+1) ( ∑_t ∈ N(v_n) w(tv_n)x_t + p(x_n+x_n+1) ).
The argument is analogous to the unweighted case. Indeed, observe that every spanning tree of G has the following structure: on all vertices except v_n one takes a forest such that every component contains at least one vertex of N_G(v_n), and then v_n is joined to exactly one vertex of each component. Denote by L the set of all such forests, by t(K) the number of connected components of a forest K, and let A_1, A_2, …, A_t be the intersections of N_G(v_n) with the components of K. Put W(K)= ∏_uv ∈ K w(uv)x_ux_v. Then, by the above, P_G,w(x_1, x_2, …, x_n)=∑_K ∈ L(W(K)∏_i=1^t(K)(∑_v ∈ A_i w(v_nv)x_v) x_n^t(K)-1). Now let us look at how the spanning trees of G_1 are structured. They split into those that do not contain the edge v_nv_n+1 and those that do. Let S_1 denote the set of trees of the first type and S_2 the set of trees of the second type.
Then P_G_1,w_1(x_1, …, x_n+1)=∑_T ∈ S(G_1)∏_e ∈ T w(e) ∏_v ∈ V x_v^deg_T(v)-1 = ∑_T ∈ S_1∏_e ∈ T w(e) ∏_v ∈ V x_v^deg_T(v)-1 + ∑_T ∈ S_2∏_e ∈ T w(e) ∏_v ∈ V x_v^deg_T(v)-1 = P_1(x_1, …, x_n+1)+P_2(x_1, …, x_n+1).
Trees in S_1 have the following structure. As in the original graph, we take a forest containing all vertices except v_n, v_n+1 such that each of its components contains at least one vertex of N_G(v_n); then one of the components is joined to both of the vertices v_n, v_n+1, while each of the remaining t(K)-1 components is joined to exactly one of them. Note that for the weight of the tree it does not matter which of v_n, v_n+1 a given component is joined to, since by construction w(v_nt)=w_1(v_nt)=w_1(v_n+1t) for every t ∈ N(v_n). Then P_1(x_1, …, x_n+1)= ∑_K ∈ L(W(K)∑_i=1^t(K)((∑_v ∈ A_i w(v_nv)x_v)^2 ∏_j=1, j ≠ i^t(K)(∑_v ∈ A_j w(v_nv)x_v)(x_n+ x_n+1)^t(K)-1)) = ∑_K ∈ L(W(K)∑_i=1^t(K)((∑_v ∈ A_i w(v_nv)x_v)∏_j=1^t(K)(∑_v ∈ A_j w(v_nv)x_v)(x_n+ x_n+1)^t(K)-1)) = (∑_v ∈ N_G(v_n) w(v_nv)x_v)·∑_K ∈ L(W(K)∏_j=1^t(K)(∑_v ∈ A_j w(v_nv)x_v)(x_n+ x_n+1)^t(K)-1). It is now clear that the second factor is P_G,w(x_1, …, x_n+x_n+1).
Now consider the trees in S_2. They are obtained as follows: we again take a forest with the same conditions, join each component to exactly one of the vertices v_n, v_n+1, and join these two vertices to each other. Again it does not matter which of the two vertices a component is joined to, for the same reasons. Then P_2(x_1, …, x_n+1)=p∑_K ∈ L(W(K)∏_i=1^t(K) (∑_v ∈ A_i w(v_nv)x_v ) ( x_n+x_n+1)^t(K)) = p(x_n+x_n+1) P_G,w(x_1, x_2, …, x_n+x_n+1). Note that the factor p comes from the edge v_nv_n+1 under consideration, while the factors x_n and x_n+1 are absorbed by the -1 in the exponents of the corresponding variables. Altogether, P_G_1,w_1(x_1, …, x_n+1)=P_1(x_1, …, x_n+1)+ P_2(x_1, …, x_n+1) = P_G,w(x_1, …, x_n+x_n+1) (∑_v ∈ N(v_n) w(v_nv)x_v ) + P_G,w(x_1, …, x_n+x_n+1) · p(x_n + x_n+1) = P_G,w(x_1, …, x_n+x_n+1) ( ∑_v ∈ N(v_n) w(v_nv)x_v + p(x_n + x_n+1) ).
Let G be a weighted graph with a cut vertex v, and suppose that after removing v the vertex set of G splits into connected components with vertex sets V_1,V_2, …, V_k. Put G_i = G[V_i ∪{v}]. Then P_G(x_1, x_2, …, x_n)=∏_i=1^k P_G_i · x_v^k-1, where P_G_i is evaluated in the variables corresponding to the vertices of G_i.
Indeed, every spanning tree of G corresponds bijectively to a collection of spanning trees of the graphs {G_i}, and the degree of v is exactly the sum of its degrees in those spanning trees, which corresponds to multiplying the polynomials.
Let P_G,w be the stable polynomial of a weighted graph G. Then multiplying all edges incident to some vertex v by a real number c>0 does not affect the weighted stability of G.
The new polynomial is stable exactly when the old one is, since the rescaling corresponds, up to a positive constant factor, to replacing the variable x_v by cx_v: P_new(v,c)(x_1,x_2,…, x_v, …, x_n)= c · P(x_1, …, cx_v, …, x_n), and cx_v ∈ℍ if and only if x_v ∈ℍ.
§.§ Every weighted stable graph is obtained by the operations
By Lemma <ref> we may multiply all edges around a single vertex by a positive constant, bringing the weight function to a convenient form.
Suppose that for some graph G and weight function w on it the polynomial P_G,w is stable. Then G is stable as an unweighted graph.
Suppose the contrary. Then G is not stable as an unweighted graph, that is, by Theorem <ref> it is not distance-hereditary; hence, by the equivalent definition (v), G contains a long cycle, a domino, a house, or a gem as an induced subgraph.
We treat these cases and show that in each of them, after several operations of multiplying all edges incident to a vertex by a positive number, the weights on the induced subgraph can be made equal to 1. Lemma <ref> and Proposition <ref> (iii) then give a contradiction.
Cycles. A cycle of length five is reduced to a cycle in which all edges have weight 1 by means of the rescaling operations. Consider a chordless cycle of length at least six. Rescale successively at all vertices but one so that the weights of all edges except one become 1. If the remaining edge is v_1v_n, then every spanning tree except the one obtained by deleting v_1v_n has weight w(v_1v_n). One can then simply repeat the proof for the ordinary unweighted cycle, because the term that differs vanishes anyway under the substitution used there.
Domino. Let the edge not belonging to the cycle v_1v_2 … v_6v_1 be v_1v_4 (see Fig. <ref>). Applying Lemma <ref> successively, we achieve 1=w(v_2v_1)=w(v_1v_6)=w(v_6v_5)=w(v_5v_4)=w(v_4v_3). Considering the quadruple v_1,v_6,v_5,v_4 we obtain w(v_4v_1)=1; similarly, for the quadruple v_2,v_1,v_4,v_3 we obtain w(v_2v_3)=1. We arrive at an ordinary (unweighted) domino, which is not stable.
House. Apply Lemma <ref> so that the edges of the cycle get weight 1. Now consider the subgraph C_4 formed by the vertices v_1v_2v_4v_5. We show that in a stable C_4 all weights can be made equal by rescalings. Using three rescalings, make the weights of all edges but one equal to 1. It remains to show that the last edge (say v_4v_1) has weight 1. The polynomial is then x_2x_3+w(v_1v_4)(x_1x_2+x_3x_4+x_1x_4). Let w(v_1v_4)=1/t; then the polynomial is 1/t(tx_2x_3+x_1x_2+x_3x_4+x_1x_4). If t>1, consider the substitution x_3=x_4=1. We obtain x_2(tx_1+1)+x_1+1=0. But this polynomial has a root x_1=i, x_2=-(1+i)(1-ti)/(t^2+1)=(-(t+1)+(t-1)i)/(t^2+1), which contradicts stability. The second case (t<1) is treated analogously. Hence t=1, as required. Thus we are left with an ordinary unweighted house, which is not stable.
Gem. Let v denote the vertex of degree 4. Rescale so that all edges incident to v get weight 1. Then the remaining three edges have equal weights, and hence, considering the induced subgraph on the vertex set without v, we conclude that these equal edges have weight 1, i.e. we are dealing with an ordinary gem (because it follows that the multiset {x,x^2,0} contains two equal numbers and x≠ 0). But the ordinary gem is not stable.
We return to the proof of Theorem <ref>. Note that if the graph G is not biconnected, it suffices to prove the statement of the theorem for each biconnected component. Call a pair of vertices x_1 and x_2 contractible if their neighbourhoods in G coincide (possibly up to the vertices themselves) and the ratio w(vx_1)/w(vx_2) is the same for all vertices v joined to both x_1 and x_2. Note that multiplying the weights of all edges incident to a vertex by a positive constant does not change the set of contractible pairs of the graph. We will use this property repeatedly when applying Lemma <ref>. We prove by induction the following, somewhat stronger, statement. Suppose that for some biconnected graph G on at least four vertices and a weight function w on it the polynomial P_G,w is stable. Then G contains at least two disjoint contractible pairs.
§.§ Base case: four vertices
The biconnected graphs on four vertices are the complete graph K_4, the cycle C_4 and the cycle with a diagonal C_4^+. The case of C_4 was handled in the discussion of the house in the previous subsection. The case of K_4.
By rescalings at the vertices 1, 2 and 3 one can ensure that the edges from vertex 4 have unit weight. By Lemma <ref> these operations do not affect the stability of the graph. Denote e_1 = w(2,3), e_2 = w(1,3), e_3 = w(1,2). In this notation P_K_4,w=x_4^2+x_4(x_1(e_2+e_3)+x_2(e_1+e_3)+x_3(e_1+e_2))+ (x_1+x_2+x_3)(x_1e_2e_3+x_2e_1e_3+x_3e_1e_2). This expression is quadratic in the variable x_4, so for any real substitution of x_1,x_2,x_3 the discriminant D of the corresponding quadratic trinomial must be at least 0. We have 0 ≤ (x_1(e_2+e_3)+x_2(e_1+e_3)+x_3(e_1+e_2))^2-4(x_1+x_2+x_3)(x_1e_2e_3+x_2e_1e_3+x_3e_1e_2)= x_1^2(e_2-e_3)^2+x_2^2(e_3-e_1)^2+x_3^2(e_1-e_2)^2-2x_1x_2(e_2-e_3)(e_3-e_1)-2x_2x_3(e_3-e_1)(e_1-e_2)-2x_3x_1(e_1-e_2)(e_2-e_3) for all real x_1,x_2,x_3. If no two of e_1,e_2,e_3 coincide, we may substitute x_1=1/(e_2-e_3), x_2=1/(e_3-e_1), x_3=1/(e_1-e_2), which gives D =-3. This contradicts the non-negativity of the discriminant. Hence some two of the three quantities e_1,e_2,e_3 coincide, which implies that some two of the products of opposite edges coincide. The statement for K_4 is proved. The case of C_4^+ is analogous to the previous one after substituting e_1 = 0.
§.§ Induction step
Let G be a graph and w a weight function on it. Suppose G contains two induced subgraphs G_1, G_2 that together cover G, and three vertices x_1,x_2,y belonging to both subgraphs, such that G (and hence both G_1 and G_2) contains the edges x_1y and x_2y, and the pair x_1x_2 is contractible both in G_1 and in G_2. Then the pair x_1x_2 is contractible in G.
Since the subgraphs G_1 and G_2 together contain all vertices of G, the vertices x_1 and x_2 have the same neighbourhoods in G. Consider an arbitrary vertex v joined to both x_1 and x_2; for it w(vx_1)/w(vx_2)=w(yx_1)/w(yx_2), since v belongs to G_1 or to G_2 and the pair x_1x_2 is contractible there. Thus the ratio w(vx_1)/w(vx_2) does not depend on the choice of v.
Let G be a graph and w a weight function on it such that P_G,w is stable. By Lemma <ref> and Theorem <ref> the graph G is distance-hereditary. By definition (vi) of the class of distance-hereditary graphs, G can be built by copyings and additions of pendant vertices, and since G is biconnected, the last operation was a copying. This means that G contains vertices v and v' whose neighbourhoods coincide (up to the vertices v and v' themselves), that is, N_G(v) ∖{v'} = N_G(v') ∖{v}. Consider the weighted graph H = G[V∖{v'}]. By Proposition <ref>(iii) and the biconnectivity of G, the graph H is weighted stable, since removing v' corresponds to the substitution x_v' = 0.
The case of non-biconnected H. The biconnectivity of G implies that H has at most one cut vertex, and this vertex is v. Let us prove that in this case v and v' are contractible in G with respect to the weights. Indeed, let the neighbourhoods of v in the different biconnected components of H be A_1, A_2, …, A_k. Recall that the rescaling operation does not affect the contractibility of pairs of vertices, and normalize so that all edges incident to v, except possibly the edge from v', have weight 1. We show that all weights of edges from v', except possibly w(vv'), are equal to each other. For this it suffices to show that for arbitrary vertices x ∈ A_i, y ∈ A_j, i ≠ j, adjacent to v and v', one has w(v'x) = w(v'y). Consider the quadruple v, v', x, y. Since x and y lie in different biconnected components, G contains no edge xy, so this quadruple induces either C_4 or C_4^+. Both cases were treated in the base case, and there the contractible pairs are vv' and xy. The contractibility of the pair xy yields the required equality.
Thus, for non-biconnected H we have found one contractible pair in G. Let the biconnected components of H be H_1, H_2, …, H_m, m ≥ 2 (they all contain v). Consider the following cases:
* There is an index k with |H_k| ≥ 4. Since |H_k| < |H|, the induction hypothesis implies that H_k contains two disjoint contractible pairs, and one of them does not contain v, so it is contractible in G as well. Indeed, the two vertices remain equivalent as far as the graph structure is concerned, and even if the copying created a new vertex in their neighbourhood, it can only be v'; since vv' is a contractible pair in G, by Lemma <ref> the vertex v' does not spoil the contractibility.
* There is an index k with |H_k|=3. Then H_k is a triangle vu_1u_2 on three vertices, so the vertices u_1 and u_2 have the same neighbourhood (apart from each other), consisting of the single vertex v. Hence u_1u_2 is a contractible pair in H_k, and therefore in H and in G for the same reasons.
* Every graph H_k contains exactly two vertices, i.e. each H_i is an edge; denote it u_iv. Then u_1u_2 is a contractible pair in G, because their neighbourhoods coincide (they consist of the vertices v,v'), and w(u_1v)/w(u_2v)=w(u_1v')/w(u_2v') by the contractibility of vv' in G.
Thus, if H turns out to be non-biconnected, the theorem is proved. The case of biconnected H is considerably harder.
Proposition <ref> holds for a complete weighted graph G with a weight function w on it.
It is easy to see that all graphs on at most three vertices are stable. The remaining proof is by induction, with the base case of four vertices proved in Section <ref>. Induction step. Suppose the contrary; then G contains at most one contractible pair. First consider the case when G contains no contractible pairs at all. Remove an arbitrary vertex x. By Proposition <ref>(iii) the remaining weighted complete graph F is real stable, so by the induction hypothesis F contains two disjoint contractible pairs, say v_1v_2 and u_1u_2. Apply Lemma <ref> several times so that w(u_1u_2) = w(v_1u_2) = w(v_1u_1) = w(v_2u_2) = 1 (to do this, first make the weights of the edges leading into u_2 equal by three normalizations, and then equalize the weight of the remaining edge with them by a normalization at u_2). Then w(v_2u_1) = 1 by the contractibility of the pair u_1u_2 in F. We obtain that for every vertex q not yet considered, the contractibility of the pairs u_1u_2 and v_1v_2 in the graph F implies w(qu_1)=w(qu_2) and w(qv_1)=w(qv_2). We assumed that in the original graph neither of the pairs u_1u_2 and v_1v_2 is contractible, hence w(xv_1) ≠ w(xv_2) and w(xu_1) ≠ w(xu_2). Consider the graphs G[{x,v_1,u_1, u_2}] and G[{x, v_2, u_1, u_2}]. They are stable, so among the three numbers w(xv_1), w(xu_1), w(xu_2) two are equal, and likewise among w(xv_2), w(xu_1), w(xu_2). It follows that either w(xv_1)=w(xu_1) and w(xv_2)=w(xu_2), or w(xv_1)=w(xu_2) and w(xv_2)=w(xu_1). Without loss of generality we are in the first case. Let us show that w(v_1v_2)=1. Suppose w(v_1v_2)= t ≠ 1, and let w(xv_1)=w(xu_1)=a and w(xv_2)=w(xu_2)=b. Considering the graphs G[{x,v_1,v_2, u_1}] and G[{x, v_1, v_2, u_2}], we get that among b, a, at some two are equal, and among a, b, bt some two are equal. But since a ≠ b and t ≠ 1, this forces a = bt and b = at, which is impossible. A contradiction; hence w(v_1v_2)=1. Now, since the pair u_1v_1 cannot be contracted in our graph, there is a vertex y with w(yu_1)≠ w(yv_1). This y is a new vertex, because for x, v_2, u_2 these weights are equal. We also know that w(yv_1)=w(yv_2) and w(yu_1)=w(yu_2).
Apply Lemma <ref> at the vertices x and y so that w(xv_1)=w(xu_1)=1 and w(yu_1)=w(yu_2)=1. Then all weights are determined by three parameters: a=w(xv_2)=w(xu_2), b=w(yv_1)=w(yv_2), c=w(xy), and moreover a,b ≠ 1. Considering the induced subgraphs on all quadruples consisting of x, y and two more vertices from {u_1,v_1,u_2,v_2}, we find that each of the following multisets contains two equal numbers: {a,c,1}, {b,c,1}, {a,b,c}, {1,ab,c}, {c,a,ab}, {c,b,ab}. From the first two multisets, either c=1 or a=b=c. In the first case the third multiset gives a=b, and then the fifth multiset {a, a^2, 1} contains two equal numbers. In the second case the fourth multiset {1,a,a^2} contains two equal numbers. In both cases we get a contradiction, since a ∉{0,1}. It remains to consider the case when G contains exactly one contractible pair v_1v_2. Applying Lemma <ref>, normalize the edges incident to v_2 so that w(uv_2)=w(uv_1) for some vertex u. Then w(uv_2)=w(uv_1) for every vertex u, because the pair v_1v_2 is contractible. By the induction hypothesis the graph G[V∖{v_2}] contains two disjoint contractible pairs, so one of them does not contain v_1. Let this pair be u_1u_2. Note that it is contractible in the original graph as well, because the value w(u_1x)/w(u_2x) is the same for all vertices x of G[V∖{v_2}], and its values at x = v_1 and x = v_2 coincide. Thus we have found two disjoint contractible pairs in G, as desired.
We now pass to an arbitrary G. First we find one contractible pair in G. Since H is biconnected, the induction hypothesis applies to it, so H contains two disjoint contractible pairs x_1x_2 and y_1y_2. The vertex v either belongs to one of these pairs or it does not.
Case 1. The vertex v belongs to neither of the two chosen contractible pairs of H. Since N_G(v) ∖{v'} = N_G(v') ∖{v} and N_H(x_1) ∖{x_2} = N_H(x_2) ∖{x_1}, the edges x_1v, x_1v', x_2v, x_2v' are either all present in G or all absent from G. If they are all absent, the contractibility of x_1x_2 in H implies its contractibility in G, and the desired pair is found. In the remaining case G contains these four edges, and similarly all the edges y_1v, y_1v', y_2v, y_2v' are also in G. If G does not contain the edge vv', then the base case applied to the graph G[{x_1,x_2,v,v'}] gives the contractible pairs x_1x_2 and vv', because on this quadruple of vertices there is a cycle and no edge vv'; then by Lemma <ref> the pair x_1x_2 is contractible in G. Analogously, considering the graphs G[{x_1,x_2,v,v'}] and G[{y_1,y_2,v,v'}], we obtain that the edges x_1x_2 and y_1y_2 are in G. Now we show that G contains all edges of the form x_iy_j, i,j ∈{1,2}. As in the previous paragraph, G contains either all of them or none of them. Suppose these edges are absent, and consider the graph G[{v,v',x_i,y_j}]. It has a cycle through all its vertices and no edge x_iy_j, so the pairs vv' and x_iy_j are contractible in it. Applying the definition of contractibility, we get w(vx_i)/w(v'x_i)=w(vy_j)/w(v'y_j); for i=1, j=1,2 this gives that y_1y_2 is a contractible pair in G[{v,v',y_1,y_2}], whence by Lemma <ref> the pair y_1y_2 is contractible in G. Hence all edges of the form x_iy_j are in G. Then the graph F = G[{x_1,x_2,y_1,y_2,v'}] is complete, so by Lemma <ref> it contains two disjoint contractible pairs. If one of these pairs contains v', then without loss of generality it is the pair v'x_1; then w(x_1y_1)/w(v'y_1)=w(x_1y_2)/w(v'y_2), whence w(x_1y_1)/w(x_1y_2)=w(v'y_1)/w(v'y_2), and so the pair y_1y_2 is contractible in G. Hence two disjoint pairs from the set {x_1,x_2,y_1,y_2} are contractible in F.
If these pairs are x_1x_2 and y_1y_2, then w(x_1y_1)/w(x_2y_1)=w(x_1v')/w(x_2v'), and the pair x_1x_2 is contractible in G, which would finish the analysis of Case 1. Without loss of generality we are left with the situation in which the contractible pairs in F are x_1y_1 and x_2y_2. Then the graph G[{x_1,x_2,y_1,y_2}] has four contractible pairs of vertices, so, applying Lemma <ref>, we normalize the weights so that all six edges among these four vertices have weight 1. By the contractibility of the pairs x_1x_2 and y_1y_2 in H, for every vertex u ∉{v',x_1,x_2} the edges ux_1 and ux_2 are either both absent from G or both present, and in the latter case w(ux_1) = w(ux_2); likewise, for every vertex u ∉{v',y_1,y_2} the edges uy_1 and uy_2 are either both absent from G or both present, and in the latter case w(uy_1) = w(uy_2). Also, the contractibility of the pairs x_1y_1 and x_2y_2 in F yields the equalities w(v'y_1) = w(v'x_1) and w(v'y_2) = w(v'x_2). Since x_1x_2 and y_1y_2 are not contractible in G, we have w(v'x_1) ≠ w(v'x_2). Consider the pair x_1y_1. It is not contractible in G, so either there is a vertex u ∈ (N_G(x_1) Δ N_G(y_1)) ∖{x_1,y_1}, or there is a vertex z ∈ N_G(x_1) ∩ N_G(y_1) with w(zx_1) ≠ w(zy_1) (taking into account the normalization of the previous paragraph, which makes the weights inside the subgraph G[{x_1,x_2,y_1,y_2}] equal to 1).
In the first variant, without loss of generality, ux_1 ∈ E, uy_1 ∉ E. Then, since N_H(x_1) ∖{x_2} = N_H(x_2) ∖{x_1}, we have ux_2 ∈ E, uy_2 ∉ E. Note that if G has no edge uv', then the graph G[{v',x_1,x_2, u}] has a cycle and no edge uv', so its contractible pair is x_1x_2; but this contradicts the facts that w(x_1u)=w(x_2u) (since w(y_1x_1)=w(y_1x_2) and x_1x_2 is contractible in H) and w(v'x_1) ≠ w(v'x_2). Hence the edge uv' belongs to G. Now look at the graphs G[{u, v', x_1, y_1}] and G[{u, v', x_1, y_2}]. In each of them the contractible pair is v'x_1, because there is a cycle and no edge uy_j. We obtain a contradiction, since w(y_1v')=w(y_1v')/w(y_1x_1)=w(uv')/w(ux_1)=w(y_2v')/w(y_2x_1)=w(y_2v').
In the second variant the edges zx_2, zy_2 are also in G, and their weights equal w(zx_1) and w(zy_1) respectively, because z ≠ v' and hence z ∈ H. Moreover z ∉{x_1,x_2, y_1,y_2,v'}. Rescale all edges incident to the vertices v' and z so that w(v'x_1)=w(zx_1)=1 (this does not change the weights inside G[{x_1,x_2,y_1,y_2}]), which implies w(v'y_1)=w(zx_2)=1. Put a=w(zy_1)=w(zy_2) and b=w(v'x_2)=w(v'y_2). The definition of z implies a ≠ 1, and the non-contractibility of x_1 and x_2 in G gives b ≠ 1. Note that if G has no edge zv', then the graph G[{x_1,x_2,v',z}] has five of the six edges, so the pair v'z must be contractible in this graph; but w(x_1z)=1=w(x_2z), while w(x_2v') = b ≠ 1 = w(x_1v'). A contradiction, so G contains the edge zv'. Then Lemma <ref> applies to the graph J = G[{x_1,x_2, y_1,y_2,z, v'}], and J contains two contractible pairs. No pair inside the set U := { x_1,x_2,y_1,y_2} is contractible, because every pair other than x_1x_2 and y_1y_2 has different weights on the edges to the vertex z, while the pairs x_1x_2 and y_1y_2 have different weights on the edges to the vertex v'. Hence each of the contractible pairs in J contains a vertex outside U, so one of them is v't with t ∈ U. Then t does not belong to one of the pairs x_1x_2, y_1y_2 (without loss of generality, to x_1x_2), and considering the induced subgraph G[{t,v',x_1,x_2}] gives that the pairs v't and x_1x_2 are contractible in it; hence by Lemma <ref> the pair x_1x_2 would be contractible in G. Thus we have dealt with the case when v belongs to neither of the contractible pairs in H.
Case 2. Now let v belong to one of the pairs x_1x_2 and y_1y_2; without loss of generality, v = x_1.
Recall that by definition N_G(v) ∖{v'} = N_G(v') ∖{v}, and also N_H(x_1) ∖{x_2} = N_H(x_2) ∖{x_1} and N_H(y_1) ∖{y_2} = N_H(y_2) ∖{y_1}. Then the following six edges are either all present in G or all absent: x_1y_1, x_1y_2, x_2y_1, x_2y_2, v'y_1, v'y_2. If they are all absent, then the pair y_1y_2 is contractible in G. It remains to consider the case when all these edges are present. Considering the graphs G[{x_1,v',y_1,y_2}] and G[{x_2,v',y_1,y_2}], we see that either the pair y_1y_2 is contractible in them (and then by Lemma <ref> it is contractible in G), or G contains the edges v'x_1, v'x_2 and y_1y_2. Similarly, looking at the graph G[{v',x_1,x_2,y_1}], we see that either the edge x_1x_2 is in G, or x_1x_2 is a contractible pair. Thus, if we have not yet found a contractible pair, the subgraph G[{x_1,x_2,y_1,y_2,v'}] is complete, so by Lemma <ref> two disjoint pairs of vertices are contractible in it. One of them does not contain v', and it is not equal to x_1x_2 or y_1y_2, because in that case this pair would be contractible in G by Lemma <ref>. Moreover, neither of these pairs contains v', since otherwise one could consider the induced subgraph on this pair together with the pair among x_1x_2, y_1y_2 disjoint from it; then the corresponding pair among x_1x_2, y_1y_2 would turn out to be contractible in G by Lemma <ref>. Without loss of generality we are left with the situation in which the contractible pairs in G[{x_1,x_2,y_1,y_2,v'}] are x_1y_1 and x_2y_2. The further reasoning coincides with the end of the analysis of Case 1: Lemma <ref>, via the four contractible pairs in G[{x_1,x_2,y_1,y_2}], allows us to achieve unit weights inside G[{x_1,x_2,y_1,y_2}], and a subsequent rescaling of all edges incident to the vertex v' gives 1 = w(v'x_1) = w(v'y_1) and a = w(v'x_2) = w(v'y_2) ≠ 1. The pair y_1, x_1 is not contractible in G, so there is a vertex z distinguishing these two vertices. Then either z ∈ (N_G(x_1) Δ N_G(y_1)) ∖{x_1,y_1}, or z ∈ N_G(x_1) ∩ N_G(y_1) and w(zx_1) ≠ w(zy_1).
Suppose first that z distinguishes them in the graph-theoretic sense. There are two subcases. In the first, z is joined to x_1,x_2,v'; consider the graphs G[{z,v',x_1,y_1}] and G[{z,v',x_1,y_2}]. Their contractible pairs are zx_1 and v'y_i, and then w(y_1v')=w(y_1v')/w(y_1x_1)=w(zv')/w(zx_1)=w(y_2v')/w(y_2x_1)=w(y_2v'), so y_1y_2 is contractible in G. In the second subcase z is joined to y_1,y_2; then consider the graph G[{z,v',y_2,y_1}]. It has no edge v'z, so in this graph the pairs v'z and y_1y_2 are contractible. By Lemma <ref> the pair y_1y_2 is contractible in the whole of G. In the second variant z ∉{x_1,x_2,y_1,y_2,v'} and G contains all edges from z to the five vertices x_1,x_2,y_1,y_2,v' already considered, so Lemma <ref> applies to the graph G[{x_1,x_2,y_1,y_2,v',z}].
Completion of the proof. In both cases we have found a contractible pair xy in G. Let us return to the beginning of the proof and take this pair in the role of the pair vv'; namely, consider the graph H' = G[V∖{x}]. The case of non-biconnected H' was treated at the beginning of the proof. If H' is biconnected, then by the induction hypothesis it contains two disjoint contractible pairs; take one of them that does not contain y and call it uv. Then xy and uv are the desired disjoint contractible pairs in G. The proofs of Proposition <ref> and Theorem <ref> are complete.
§ WEIGHTED DISTANCE-HEREDITARY GRAPHS
By analogy with Theorem <ref>, we define the class of weighted distance-hereditary graphs as the class of weighted stable graphs. Let us show that this definition is natural. By Lemma <ref>, the underlying graph of a weighted distance-hereditary graph is a distance-hereditary graph.
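The contractible-pair condition used throughout the proof has a simple matrix reading, which is also what the rank-based characterization recalled in the next paragraphs measures: if a pair of vertices is contractible, their rows in the weighted adjacency matrix, restricted to the remaining vertices, are proportional, i.e. they form a rank-1 block. The following numerical check is our own illustration (the matrix is an arbitrary example, not taken from the note):

```python
import numpy as np

# Weighted adjacency matrix of a small graph in which the pair (0, 1) is
# contractible: vertices 0 and 1 have the same neighbours {2, 3}, and the
# ratios w(0,2)/w(1,2) and w(0,3)/w(1,3) coincide (both equal 1/2).
M = np.array([
    [0.0, 0.0, 1.0, 3.0],   # vertex 0
    [0.0, 0.0, 2.0, 6.0],   # vertex 1: row proportional to row 0 outside the pair
    [1.0, 2.0, 0.0, 5.0],   # vertex 2
    [3.0, 6.0, 5.0, 0.0],   # vertex 3
])

pair, rest = [0, 1], [2, 3]
block = M[np.ix_(pair, rest)]           # rows of the pair vs. the remaining vertices
print(block)                             # [[1. 3.] [2. 6.]]
print(np.linalg.matrix_rank(block))      # 1: proportional rows, i.e. a rank-1 cut
```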
Next we show that our definition coincides with one more algebraic definition, given in <cit.>. Denote the rank of a matrix M by rk(M). Let G be a weighted graph with adjacency matrix M_G; for arbitrary subsets A,B ⊆ V(G) define ρ_G(A,B) := rk(M_G[A,B]), where M[A,B] is the submatrix whose columns correspond to A and whose rows correspond to B. Put ρ(A)=ρ_G(A, V(G) ∖ A ); clearly ρ(A) = ρ(V(G) ∖ A), since the matrix M_G is symmetric. Consider an arbitrary tree T whose leaves are exactly the vertices of G and all of whose other vertices have degree 3. Every edge e of this tree defines a partition of the vertices of G into two sets A(e) and B(e), corresponding to the connected components of T ∖ e. Define the width of the edge e as ρ(A(e)) = ρ(B(e)). The width of the graph is the minimum over all such trees T of the maximal width of an edge of T. It is shown in <cit.> that the class of graphs (unweighted, i.e. with all weights equal to 1) of width 1 coincides with the class of distance-hereditary graphs.
The class of weighted distance-hereditary graphs coincides with the class of weighted graphs of width 1.
From now on we assume that every graph has at least 3 vertices and is connected. Note that the width of a connected graph is at least 1 (otherwise there would exist a partition of its vertices into 2 sets with no edges between them). One direction. Let a weighted stable graph be given. We show that it has width 1. For this we must exhibit a tree in which the widths of all edges equal 1. We construct it by induction on n=|V|. The base cases are n ≤ 3. If the graph has at most 3 vertices, there is only one such tree and it works, because in every partition at least one of the parts has size at most 1, hence rank at most 1. Induction step. The graph G has at least four vertices, so by Theorem <ref> the weighted stable graph G is either not biconnected or contains a contractible pair.
The non-biconnected case. In this case G contains a cut vertex u, i.e. G[V∖ u] splits into connected components A_1, …, A_k. Let H_1 = G[A_1 ∪ u], H_2 = G[V ∖ A_1]. By Proposition <ref> (iii), H_1 and H_2 are weighted stable graphs with fewer vertices, so there exist trees T_1, T_2 realising width 1. Consider T=T_1 ∪ T_2. This is a tree in which the leaves are exactly the vertices of G other than u, the degrees of all vertices other than u are 3 or 1, and the vertex u has degree 2. Rename the current vertex u as v, and attach to it a new pendant vertex u (obtaining a tree T'). Now all degrees are 3 or 1, and the leaves are exactly the vertices of G. It remains to check that T' has width 1. Indeed, let e be an edge of T'; by construction E(T') = E(T_1) ∪ E(T_2) ∪{uv}. The width of the edge uv is 1, because one of the parts of the corresponding partition has size 1. Now suppose e is an edge of T_i; without loss of generality i=1. The edge e splits the vertices of G into two sets, one of which (A = A(e)) lies entirely in H_1 ∖ u. Then ρ_G(e)=rk(M_G[A,G ∖ A])=rk(M_G[A, H_1 ∖ A]), because these matrices differ by several zero rows. On the other hand, rk(M_G[A, H_1 ∖ A])=rk(M_H_1[A,H_1 ∖ A])=ρ_H_1(e) = 1 by the induction hypothesis. Hence the width of e is 1 for every edge e, i.e. the width of T' is 1.
In the biconnected case the graph G is obtained by copying some vertex u of a graph G_1, i.e. G contains vertices u, u_1 forming a contractible pair (in other words G_1=G[V ∖ u_1]). Since G_1 is weighted stable, there exists a tree T_1 realising width 1.
Consider its leaf corresponding to u, and let z be its unique neighbour. Remove the edge uz from T_1 and add a vertex v together with the edges vz, vu, vu_1; call the resulting tree T. Now the leaves of T are exactly the vertices of G, and the degrees of the remaining vertices equal 3. It remains to check that T has width 1. Consider an arbitrary edge of T; it is either the edge vu, or vu_1, or an edge of T_1. In the first two cases its width is obviously 1, because one of the parts of the partition has size 1. Now let it be an edge e of T_1. It splits the vertex set of G into two parts, one of which (A = A(e)) contains neither u nor u_1 (because e does not lie on the path between u and u_1 in T). We have ρ_G(e)=rk(M_G[A,G∖ A])=rk(M_G[A,G_1∖ A]), since the column corresponding to the vertex u_1 is proportional to the column corresponding to u by the contractibility of the pair uu_1 (that is, the zeros are located in the same rows, and the ratios of the non-zero values are equal, by the definition of a contractible pair). Moreover, rk(M_G[A,G_1∖ A])=rk(M_G_1[A,G_1∖ A])=ρ_G_1(e)=1. Hence the width of the tree T is also 1.
The other direction. We show that a weighted graph of width 1 is weighted distance-hereditary. We argue by induction on the number of vertices. Base case. If the graph has at most 3 vertices, it is always weighted distance-hereditary. Induction step. Let a weighted graph G be given, together with a tree T with degrees 3 and 1 and of width 1. The tree T contains a vertex v on which exactly two leaves u and u_1 hang. We claim that these vertices form a contractible pair. Indeed, consider the partition given by the third edge at v, namely e = vz. The parts of this partition are A(e) = {u, u_1} and B(e) = V(G) ∖{u, u_1}. By the definition of T the width of e equals 1, i.e. the vectors a_i = w(uv_i) and b_i = w(u_1v_i), v_i ∈ B(e), are proportional. If the proportionality coefficient is zero, then one of the vertices u, u_1 has no neighbours in G outside the set {u, u_1}. If the proportionality coefficient is non-zero, then w(uv_i)=0 if and only if w(u_1v_i)=0, and for the non-zero weights the value w(uv_i)/w(u_1v_i) does not depend on the choice of v_i ∈ B(e). Hence in the pair u, u_1 either one of the vertices is pendant and hangs on the other (without loss of generality, u_1 hangs on u), or they form a contractible pair. Now consider the graph G ∖ u_1 and find for it a tree of width 1. To do so, replace in the tree T the vertices v, u, u_1 with all incident edges by a single vertex u with the edge uz; call the resulting tree T'. Note that the partitions of the set V ∖{u_1} defined by the tree T' are obtained from the partitions of V defined by T by removing the vertex u_1. Hence the width of each partition is 1, and the tree T' has width 1. Therefore the graph G[V ∖ u_1] has weighted width 1; it is weighted distance-hereditary by the induction hypothesis, and the graph G is obtained from it by copying a vertex or by adding a pendant one. By Theorem <ref>, G is a weighted distance-hereditary graph.
§ CONCLUSION
The central Theorem <ref> gives an explicit characterization of weighted real stable graphs. As in the unweighted case, Theorem <ref> together with Lemmas <ref> and <ref> implies that the degree enumerator of a weighted stable graph factors into linear factors. The obtained result allows us to define the class of weighted distance-hereditary graphs as the class of weighted stable graphs. It turns out that this definition corresponds to a natural generalization of another algebraic definition <cit.>. Recall that related open questions can be found in <cit.>.
Acknowledgements. The authors are grateful to Fedor Petrov for his attention to this work.
The work of Danila Cherkashin was supported by the Russian Science Foundation grant 22-11-00131.
http://arxiv.org/abs/2310.18051v1
{ "authors": [ "Danila Cherkashin", "Pavel Prozorov" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20231027105728", "title": "On stability of weighted spanning tree degree enumerators" }
Meredith A. Stone ([email protected]), Jianwei Lyu (吕建伟), George H. Rieke, Stacey Alberts, and Kevin N. Hainline
Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA
We measure host galaxy stellar masses for a sample of five luminous quasars at z∼5-7. Using JWST/NIRCam medium-band images of nearby PSF reference stars, we carefully subtract the contribution from the quasar light to place upper and lower limits on the flux of each host galaxy. We find that the members of our sample of quasar host galaxies have masses of 10^9.7 - 10^10.8 M_⊙, significantly less than expected from their SMBH masses and the local relation. We additionally obtain JWST/NIRSpec IFU spectra of three of our quasars to calculate black hole masses, which we find are consistent with those in the literature, and to search for the presence of a bright but compact galaxy via a Balmer break, which we do not find evidence for. We discuss the potential effects of dust extinction on our measured fluxes and the impact of selection effects on high-redshift quasar samples. We conclude that the masses of the SMBHs relative to the host galaxy stellar masses have a much larger scatter than locally, large enough that these selection effects cannot be responsible. The result is reinforced by other studies. Finally, we explore the potential implications of these results for the picture of SMBH-galaxy coeval growth in the early Universe.
§ INTRODUCTION
Active galactic nuclei (AGN) can have great effects on their host galaxies as the two evolve over cosmic time; they have the ability to enhance or halt star formation and even potentially affect the morphology of their hosts <cit.>. However, the precise nature of the interplay between AGN and their host galaxies is still a matter of investigation, especially at high redshift. To build a complete picture of galaxy formation and evolution and understand how AGN are triggered, it is critical to understand how supermassive black holes (SMBHs) and the galaxies in which they reside evolve together with time.
We know already that in the local Universe the stellar masses of galaxies, M_* (particularly their centrally concentrated bulge components), and the masses of their supermassive black holes (M_BH) are correlated across a wide range in black hole mass. The relation is well-studied in the local Universe and up to z∼3, both in inactive galaxies <cit.> and AGN <cit.>, where the host galaxies of even bright quasars are easily detectable and their stellar masses can be determined via photometry, and black hole masses can be measured either by reverberation mapping or by the broadening of emission lines from gas orbiting near the black hole. If the central point source outshines the host galaxy, the host galaxy's emission can be revealed via point-spread function (PSF) subtraction, where a theoretical or empirical PSF is scaled to the quasar image and subtracted to remove the contribution from the quasar light and reveal any extended emission from the host galaxy.
Using ground-based optical telescopes and the Hubble Space Telescope, large samples of AGN (including bright quasars) at z ≲ 3 have been PSF-subtracted to reveal and characterize their underlying host galaxies <cit.>. Above z∼4, however, the rest-frame optical is shifted out of the range of these telescopes: observations instead probe the rest-frame ultraviolet, which can provide information about the star formation occurring in the host galaxy but is a less robust tracer of mass. The rest-frame ultraviolet is also intrinsically fainter (and more strongly impacted by dust) in a wavelength regime where the quasar is bright; host galaxy emission has, however, been detected or constrained in a small number of high-z systems <cit.>. Alternatively, the dynamical mass, measured via the widths of far-infrared lines like [C II] (158 μm), can be used as an approximation of the stellar mass. However, it has associated uncertainties related to assumptions about the nature of the dynamics of the galaxy and its orientation on the sky. All these studies are subject to various selection biases, but their general conclusion is that up to z ∼ 4, the local relation between stellar and SMBH mass is preserved within the uncertainties caused by these biases. That is, the black holes and stellar populations appear to co-evolve, possibly regulated by feedback processes.JWST is advancing this topic rapidly at z> 4, enabling the discovery of AGN with intermediate-mass (∼ 10^7 M_⊙) black holes at z = 4 - 11.and finding that they generally are overmassive relative to their host galaxies, compared with the local Magorrian Relation <cit.>. Pushing these discoveries to very high redshifts may identify the seeds of SMBHs in the early Universe <cit.>. New observations with JWST also address a related question: how seed black holes can grow to ∼ 10^9 M_⊙ in < 1 Gyr, as found in luminous very high redshift quasars <cit.>.We report observations addressing the latter issue using NIRCam on JWST (which probes the rest-frame optical out to much higher redshift and with greater spatial resolution and sensitivity than HST). To explore the coevolution of host galaxies and their central SMBHs at high-z with NIRCam, we designed a JWST GTO program <cit.> to obtain NIRCam imaging and/or NIRSpec IFU observations of five z ∼ 5-7 quasars at a range of star formation rates, AGN luminosities, and obscurations.Our first quasar analyzed, HSC J2239+0207 (z=6.25), is a sub-Eddington quasar with a high ALMA dynamical mass <cit.>, making its host galaxy theoretically easy to detect in PSF-subtracted NIRCam images. Indeed, we detected the host galaxy of this quasar, but determined its stellar mass to be more than an order of magnitude less than its ALMA [C II] dynamical mass <cit.>. J2239+0207 therefore lies significantly above the local relation. <cit.> have found a similar result, in contrast to other early NIRCam results from <cit.> who detected the host galaxies of two similarly high-mass AGN and found them to be consistent with the local relation.Is J2239+0207 an outlier, coincidentally lying above the local relation and most other z∼6 quasars, or does the relation perhaps exhibit greater scatter at high-z than locally? 
The remaining quasars in our JWST GTO program will help to expand the number of high-M_BH, high-luminosity (L_AGN) AGN at z∼6 with constraints on the masses of their host galaxies and provide additional evidence for or against the evolution of the relation with cosmic time.For HSC J2239+0207 (z = 6.25) and two additional quasars, J073103.12+445949.4 (z = 5.01) and J134015.03+281328.1 (z = 5.36) (or J0731+4459 and J1340+2813; we use JHHMM+DDMM for brevity) we obtained NIRCam images and NIRSpec spectra. We search for host galaxy emission via PSF subtraction in the images, and in the case that the host galaxy is more compact than the core of the PSF, also set limits on the mass by searching for Balmer breaks in the integrated quasar plus host galaxy spectra.The final two quasars,SDSS J1148+5251 and ULAS J112001.48+064124.3 (J1120+0641) at z∼6.4 and 7.1 respectively, are among the highest-redshift quasars known, and we obtained only NIRCam images to search for hosts via PSF subtraction. However, for J1120+0641 in particular, its small dynamical mass of (4.3 ± 0.9) × 10^10 M_⊙ <cit.>, even considering the small sub-mm size of the continuum and line source, argues persuasively against the possibility of a very massive compact host. For all these quasars, we determine host masses from our derived PSF-subtracted fluxes compared with values from the literature for galaxies without AGN. We will estimate M_BH from a combination of our new spectra and from the literature.In this paper, we describe the process of reducing our NIRCam and NIRSpec data in Section <ref>. In Section <ref>, we outline the steps taken to remove the quasar signal via PSF subtraction of the NIRCam images and place limits on the masses of the host galaxies. We also calculate black hole masses based on the Hβ line widths and constrain Balmer break strengths for the quasars with NIRSpec observations. With these measurements, we discuss the location of our quasar sample relative to the local relation in Section <ref>, and outline the potential implications for the coevolution of SMBHs and galaxies at early times. We summarize our results in Section <ref>. Throughout this work, we assume a flat cosmology with H_0 = 69.6, Ω_M = 0.286, and Ω_Λ = 0.714. § OBSERVATIONS AND DATA REDUCTION §.§ NIRCam All our quasars were observed in three NIRCam filters chosen from F210M, F360M, F410M, F430M, and F480M (see Table <ref>, with exposure times ranging from 265.3 to 2623.8 seconds depending on the quasar and filter. We used Module B SUB400P (FOV 12.5× 12.5, SW; 25× 25, LW) to image the quasar field using either the RAPID or BRIGHT2 readout modes. We adopted 4×4 dithering patterns to improve the PSF sampling and mitigate cosmic rays and detector artifacts, using primary dither type INTRAMODULEBOX and sub-pixel dither type STANDARD. Reference PSFs were obtained for all quasars by observing nearby bright stars. Stars were observed using the same module, dither types, and filter configurations as their corresponding quasars, but all stellar observations used the RAPID readout mode to avoid saturation. All stars were observed for 530.7 seconds in SW bands and 265.3 seconds in LW bands.We processed our NIRCam data using the JWST pipeline version 1.9.6, roughly following the procedures recommended in the STScI JWebbinars.[https://www.stsci.edu/jwst/science-execution/jwebbinars] The pipeline parameter reference file is registered in the JWST Calibration Reference Data System (CRDS) as jwst_1084.pmap. 
We add a custom step to Stage 2 of the pipeline to characterize and subtract 1/f noise, a striping pattern in the images caused by the detector readout, from each frame. We produce final mosaic images of each quasar and star in Stage 3 of the pipeline by aligning and stacking all frames obtained in Stage 2, and resample the images to smaller pixel scales (0.0147/pixel in SW filters, 0.0300/pixel in LW filters) using the drizzling algorithm in the Resample step of the pipeline to improve the accuracy of the PSF subtraction. Finally, we use the Photutils Background2D function on the mosaiced and resampled images to perform a global background subtraction. §.§ NIRSpec We also obtained NIRSpec/IFU data of three quasars in our sample, with a 3×3 field of view composed of IFU elements of0.1×0.1 on the sky. J0731+4459 and J1340+2813 were observed with the G235M/F170LP disperser-filter combination,offering wavelength coverage from 1.66 μm to 3.07 μm with a nominal spectral resolution of ∼1000; J2239+0207 was observed in the PRISM mode with a wavelength coverage from 0.60 μm to 5.30 μm with a nominal resolution ∼100. All these observations were carried out with the SPARE-CYCLING dither type inLARGE size at points 1, 2, 3 and 4 to mitigate cosmic ray and detector artifacts during data reduction. The NRSIRS2 readout was selected with a total integration time of 5894 seconds for J0731+4459, 7353 seconds for J1340+2813, and 8870 seconds for J2239+0207. We processed the NIRSpec IFU data with JWST pipeline version 1.12.0 (with CRDS pipeline parameter reference file jwst_1088.pmap) following the standard steps. This includes the first stage, Detector1Pipeline, to apply detector-level corrections (e.g., dark current and bias subtraction, persistence correction, cosmic-ray removal) to the raw data of individual exposures and produce the uncalibrated 2D spectra; and Spec2Pipeline to assign world coordinate system (WCS) information to the data, apply flat-field correction, make flux calibration, and construct 3D data cubes from the 2D spectra obtained at each dither location. The third stage, Spec3Pipeline, combines the individual data cubes at different dither positions to produce the final merged data cube with outlier rejections, drizzling, etc. to remove any additional artifacts and improve spatial sampling. We found that the default pipeline did not always clean up all the obvious outliers: we therefore added an additional step to manually mask out the bad pixels in the 2D spectra after a careful visual inspection of individual cubes, and redid the final cube construction. Finally, we extracted the quasar spectrum using a circular aperture at a radius of 0.35 . For the background subtraction, we placed the same circular aperture at random locations of the cube that do not contain real structures, computed the medium background spectrum, and subtracted it from the quasar spectrum. § ANALYSIS AND RESULTS §.§ Point-spread function subtractionOur quasar observations have small fields of view with few bright stars, so building an empirical PSF from stars in the field <cit.> was not a viable option. Instead, for each quasar we obtained a dedicated observation of a nearby PSF star in the same bands, taken immediately after observing the quasar to reduce any temporal variation in the instrument's PSF between the images, and subtracted it from the quasar image to reveal any underlying host galaxy emission. 
For this method of PSF subtraction to return reliable results, the PSFs of the quasar and star must be nearly identical. The quasar and star are placed at the same location on the detector to minimize the effects from any spatial variation of the PSF across the detector. We additionally chose medium bands for all NIRCam observations to virtually eliminate differences in the profiles of the quasar and stellar PSFs due to their different spectral shapes across the filter (the flat continuum of the quasar versus the declining Rayleigh-Jeans tail of the star).The quasar's redder color in the filter could tend to broaden its PSF; if we then subtracted the narrower PSF of the star, we might observe a spurious galaxy detection.However, we still considered the possibility of a discrepancy in the PSF shapes. We investigated the magnitude of any potential PSF mismatch by generating simulated stellar and quasar PSFs in our observed bands using WebbPSF. We used a Rayleigh-Jeans spectrum to generate the stellar PSFs, and a general quasar template <cit.>, shifted to the appropriate redshift, for the quasars. As in S23, we find no significant difference between the simulated quasar and stellar PSFs in any band; on average, the deviation between the stellar and quasar PSF at any given radius is significantly less than 1% in all bands. We therefore do not expect any significantresiduals to be introduced in the PSF subtraction as a result of the different spectral shapes of the quasars and their PSF stars.The process of PSF subtraction begins by aligning the background-subtracted star and quasar images in a given band, and normalizing the stellar PSF image to match the flux of the quasar in the center of the image (where the quasar light is most dominant). The alignment is performed by hand, as this provides better results than calculating the necessary shifts from centroiding algorithms. We examine the azimuthally-averaged radial profiles of the star and quasar to ensure that the star is centered and scaled correctly. If the star and quasar are properly aligned, their profiles will exhibit identical shapes at small radii, where the point source dominates. The radial profiles of each quasar and its corresponding PSF star (after scaling) are shown in Figure <ref>. In most cases, the profile of the quasar is virtually identical to the profile of its PSF star in all bands at radii ≲ 0.4. At greater radii, deviations between the stellar and quasar PSFs are due to noise. The exception is J2239+0207, where the quasar exhibits slight excess emission compared to the star in the F360M and F480M bands: as discussed in S23, this is evidence of a host galaxy detection. The radial profiles of the other four quasars in the sample do not display any obvious excess relative to the PSF star, implying that the host galaxy signal will likely not be obvious in the PSF-subtracted images. Moreover, the excellent agreement in the shape of the star and quasar radial profiles underscores the results from our WebbPSF simulations: despite their different spectral shapes, the quasars and their corresponding PSF stars exhibit nearly identical PSFs. That is, any deviations between the PSF shapes of a quasar and star within ∼13 pixels (<0.4, LW, <0.2, SW) of the PSF center are likely caused by extended emission from the host galaxy, rather than a PSF mismatch. We note that this method of normalizing the reference PSF necessarily removes any flux from the host galaxy near the center of the image, and may lead to oversubtraction. 
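To make the scaling-and-subtraction step concrete, a minimal sketch is given below (our illustration, not the authors' code; the image arrays, center coordinates, and core radius are placeholders). It normalizes the stellar PSF to the quasar's central flux and builds azimuthally averaged radial profiles for comparison, as described above.

```python
import numpy as np

def radial_profile(img, center, bin_width=1.0):
    """Azimuthally averaged radial profile around `center` (in pixels)."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - center[0], y - center[1])
    bins = (r / bin_width).astype(int)
    profile = np.bincount(bins.ravel(), weights=img.ravel()) / np.bincount(bins.ravel())
    return np.arange(profile.size) * bin_width, profile

def subtract_psf(quasar, star, center, core_radius=5):
    """Scale the stellar PSF to match the quasar flux within `core_radius` pixels of
    the centre (where the point source dominates) and subtract it."""
    y, x = np.indices(quasar.shape)
    core = np.hypot(x - center[0], y - center[1]) < core_radius
    scale = quasar[core].sum() / star[core].sum()
    return quasar - scale * star, scale

# Usage with hypothetical, already aligned and background-subtracted cutouts:
# residual, scale = subtract_psf(quasar_img, star_img, center=(200, 200))
# r_q, p_q = radial_profile(quasar_img, (200, 200))
# r_s, p_s = radial_profile(scale * star_img, (200, 200))   # compare profiles as in Fig. 1
```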
We examine this possible oversubtraction further in Section <ref>. After scaling the stellar PSF to the quasar image, we subtract the stellar PSF and examine the residuals for evidence of extended emission. The PSF-subtracted images of all quasars are shown in Figure <ref> in order of ascending redshift. The cleanliness of the images, with excess noise under the core of the PSF but few other residuals, demonstrates the quality of the subtractions. Most of the images do not display evidence of extended emission, with the possible exceptions of J0731+4459 in F430M, J1340+2813 in F410M and F430M, and, as discussed in S23, J2239+0207 in F360M and F480M.
We first measure the remaining flux for each quasar in each band by placing a 0.4″ radius aperture on each PSF-subtracted image (we do not apply aperture corrections to these measurements, but based on the measured effective radii of typical z∼6 galaxies from JWST <cit.> this aperture should capture ≳80% of the galaxies' flux). The host galaxy of J0731+4459 (z=5.01, top row of Figure <ref>) is undetected in F410M (measured flux at the <3σ level) and marginally detected (∼4σ) in F430M. J1340+2813 (z=5.36, second row) is undetected in F210M and marginally detected (4-5σ) in F410M and F430M. J2239+0207 (z=6.25, third row) is undetected in F210M but returns fluxes at the 6.8 and 5.5σ level in F360M and F480M respectively (as discussed in S23), our only apparently unambiguous detection with this PSF normalization method. J1148+5251 and J1120+0641 (bottom two rows, the two highest-redshift quasars in our sample) are undetected in all bands. That is, the only images in Figure <ref> where the host galaxy emission can be unambiguously seen are the F360M and F480M filters of J2239+0207. These results are consistent with the radial profiles (Figure <ref>), where only J2239+0207 shows obvious deviations between the radial profiles of the quasar and its PSF star.
We report the flux in all images within this 0.4″ radius aperture in Column 5 of Table <ref>. The errors quoted in the preceding paragraph are based only on statistics measured from the noise in the images away from the quasar. There are also systematic errors associated with the PSF subtraction, which we discuss in the next section.
§.§ Accounting for oversubtracted flux
Because the host galaxies of our quasars are very faint, the usual practice of subtracting the PSF down to the projected galaxy emission is not feasible. Instead, as described in the preceding section, we scaled the stellar PSF to match the quasar PSF at the center, which necessarily reduces the residual flux to zero near the center of the image. Placing an aperture on the PSF-subtracted image therefore returns essentially a lower limit on the total galaxy flux, because any flux at the very center of the galaxy (behind the PSF artifacts) has been subtracted. To estimate the amount of "missing" flux and provide an alternate, more conservative measurement of the galaxy flux and therefore its mass, we iterate the normalization of our reference PSF and fit Sérsic profiles to the residual flux in the PSF-subtracted images.
We begin by modifying the normalization of the reference PSF. When we normalize the reference PSF to the quasar PSF, the resulting profile is broadly consistent with a Sérsic profile for all of our quasars, especially at r>0.2″ (see Figure <ref>).
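A comparison of this kind can be set up, for instance, with astropy's Sersic1D model. The sketch below is our illustration under stated assumptions (placeholder profile arrays, an assumed angular scale of ≈5.8 kpc per arcsec at z∼6, and a fixed effective radius of 0.2″), not the authors' code:

```python
import numpy as np
from astropy.modeling.models import Sersic1D
from astropy.modeling.fitting import LevMarLSQFitter

# Radial profile of a PSF-subtracted image: radii in arcsec, surface brightness in
# arbitrary units (stand-in data; in practice measured from the residual images).
rng = np.random.default_rng(1)
r_arcsec = np.linspace(0.05, 0.6, 40)
profile = 0.3 * np.exp(-1.678 * (r_arcsec / 0.2 - 1.0)) + rng.normal(0.0, 0.02, r_arcsec.size)

# Fix the shape: n = 1 and r_eff ~ 1.2 kpc, i.e. ~0.2" at z ~ 6 (assumed 5.8 kpc/arcsec);
# only the amplitude is fitted.
model = Sersic1D(amplitude=1.0, r_eff=0.2, n=1.0)
model.r_eff.fixed = True
model.n.fixed = True

fitter = LevMarLSQFitter()
mask = r_arcsec > 0.15          # exclude the region dominated by PSF-core artifacts
best = fitter(model, r_arcsec[mask], profile[mask])
print(best.amplitude.value)      # Sersic normalization of the residual profile
```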
Early JWST results <cit.> have found that Sérsic indices of n∼1 are typical for galaxies at z∼6: indeed, we find that this is the Sérsic index most consistent with the profiles of our PSF-subtracted images. While a higher Sérsic index may provide a slightly better fit to the very inner, extremely noisy part of the image (at r<0.1), any index above n∼1.5 does not fit the region from 0.15to 0.4well. The quasar PSF, on the other hand, is highly inconsistent with a Sérsic profile. Therefore, by decreasing the flux of the reference PSF before subtracting and examining the resulting modified PSF-subtracted image and radial profiles until the PSF pattern becomes visible in the “subtracted" image and the radial profile is no longer well-fit by an n=1 Sérsic profile, we can place an upper limit on the flux attributable to the galaxy rather than to the PSF. We adjust the normalization to a range of values, such that the measured flux in the 0.4aperture returns from two to twenty times the flux obtained when normalizing the stellar PSF to the quasar PSF at the center of the image. An example of this process is shown in Figure <ref>, for the F360M image of J1340+2813.We examine the radial profiles of these adjusted PSF-subtracted images, and fit the region of the profile relatively unaffected by central PSF artifacts (at radii ≳0.15), with an n=1 Sérsic profile. We set the effective radius of the model Sérsic using equations 5 and 6 of <cit.>, determining an “expected" mass from the local relation and the black hole masses of our quasar sample: these effective radii are between 1 and 1.5 kpc for our quasars.We find only small adjustments to the normalization are needed to account for oversubtraction. For most bands, when tuned to return more than 4-6 times the original, normalized-to-zero-at-center flux, the PSF pattern begins to become apparent in the subtracted images, and the core of its radial profile becomes inconsistent with a Sérsic profile (see Figure <ref>). The exception is the F410M image of J0731+4459, which displays worse stellar-quasar PSF agreement than the rest of the sample and therefore a noisier radial profile; we can increase its flux by approximately a factor of 20 before the radial profile becomes inconsistent with the Sérsic. We report the upper limit on the J0731+4459 F410M flux in Table <ref>, but do not use this band to constrain the galaxy mass. We take the maximum normalization that returns a) a subtracted image without obvious residual PSF patterns, and b) a radial profile that can be fit by an n=1 Sérsic profile, as the upper limit on the flux of the galaxy as listed in column 6 of Table <ref>. §.§ BH masses and Balmer breaks from NIRSpec spectra We obtained NIRSpec spectra of three of our quasars: J0731+4459 (z=5.01), J1340+2813 (z=5.36), and J2239+0207 (z∼6.25) to constrain the strength of the Balmer break and measure black hole masses from Hα and Hβ. All three quasar spectra are shown in Figure <ref> in order of increasing redshift. The prism spectrum of J2239+0207 (bottom row of Figure <ref>) is extensively discussed in S23: the spectral resolution and signal-to-noise ratio are low, but we are nonetheless able to use Hα to place a lower limit on the black hole mass of log (M_BH[M_⊙]) > 8.5, consistent with previous measurements from Mg II and C IV. 
The redshifts of both J0731+4459 and J1340+2813 place Hβ within the NIRSpec wavelength range, which we use to calculate an additional black hole mass estimate.We do not observe an obvious narrow component of Hβ in the spectra of J0731+4459 and J1340+2813. However, both spectra display a [Fe II] emission feature blended with Hβ, which must be fit simultaneously with the Hβ line. We therefore fit two Gaussian profiles—for the broad component of Hβ and for the [Fe II] feature—superimposed on a local linear continuum. We fix the central wavelengths to the line centers and allow the amplitudes and widths of both lines to vary. For J0731+4459, we obtain a FWHM of ∼3400 km/s. This line width can be converted to a black hole mass using Equation 5 in <cit.>: logM_BH = log{ [ FWHM(Hβ)/1000km s^-1 ]^2 [ λ L_5100 /10^44 erg s^-1 ]^0.5}+ (6.91 ± 0.02) where we use the flux density in F410M (Table <ref>) as a proxy to determine the continuum luminosity at 5100Å. We find a black hole mass log (M_BH[M_⊙]) = 9.3^+0.2_-0.3.For J1340+2813, we measure a FWHM of 5100 km/s, and using Equation <ref> obtain a mass log (M_BH[M_⊙]) = 9.4^+0.2_-0.4. These M_BH measurements for J0731+4459 and J1340+2813 are both consistent with existing measurements from the literature <cit.>.J0731+4459 and J1340+2813 also have rest-frame 3650 positioned within the NIRSpec medium wavelength range. While their host galaxies are only marginally detected in the NIRCam images, their spectra might reveal a Balmer break if there is a massive host galaxy sufficiently compact to hide behind the PSF subtraction artifacts.To test for this possibility, we fit a Balmer break spectrum tothe quasar spectra, varying the strength of the break. We created a model of the Balmer break using the PopStar models <cit.> assuming a Chabrier initial mass function. The strength of any Balmer break depends on the age of the stellar population dominating the SED; a very young population will have a very weak feature. For a baseline model, we have assumed constant star formation for 500 Myr. To fit the resulting complex spectrum to the quasar spectrum, we first mask all absorption and emission features in both spectra and fit the model PopStar spectrum with two lines, shortward and longward of the Balmer break, and a cubic function connecting the two to represent the break itself. We fit thisparameterization of the Balmer break to the spectra of our quasars, with the normalization of the model, the magnitude of the break, and an overall slope (to represent any effects of dust extinction) as free parameters.The best-fit strengths of the Balmer break are approximately 0.8% and 1.4% the flux of the quasar continuum in J0731+4459 and J1340+2813, respectively, as shown in Figure <ref>, and both are negative. In both cases, no break at all is consistent with the spectra. We take upper limits to the nominal break values as the negatives of the best fits (so the breaks are positive); they permit galaxy flux densities in J0731+4459 and J1340+2813 of 2.5 and 5.4 μJy respectively in the 3 to 4 μm bands, values consistent with the upper limits in Table <ref> for the same quasars. This result makes it unlikely that galaxies sufficiently massive to preserve the local relation are so compact they can hide in the PSF. 
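The single-epoch Hβ estimator above is straightforward to evaluate. In the sketch below the continuum luminosity is only an illustrative placeholder; in practice it is derived from the F410M flux density and the luminosity distance.

```python
import numpy as np

def log_mbh_hbeta(fwhm_kms, lamL5100_erg_s):
    """Single-epoch Hbeta black-hole mass (zero point 6.91 +/- 0.02)."""
    return (2.0 * np.log10(fwhm_kms / 1.0e3)
            + 0.5 * np.log10(lamL5100_erg_s / 1.0e44)
            + 6.91)

# J0731+4459: FWHM(Hbeta) ~ 3400 km/s from the two-Gaussian fit described above.
# The continuum luminosity below is a placeholder chosen for illustration only.
print(log_mbh_hbeta(3400.0, 4.5e46))   # ~ 9.3 for this illustrative luminosity
```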
Measured properties of the high-z quasar sample

Quasar        Redshift   Band     Quasar Flux    Galaxy Flux          Galaxy Flux          Inferred Galaxy Mass
                                  (μJy)          Lower Limit (μJy)    Upper Limit (μJy)    log(M_⊙)
J0731+4459    5.01       F210M
                         F410M    194.0±0.2      0.24                 4.32
                         F430M    112.0±0.2      0.85                 5.12                 10.0-10.7
J1340+2813    5.36       F210M    95.1±0.3       0.78                 1.56
                         F410M    173.2±0.2      0.88                 5.28                 10.0-10.8
                         F430M    192.7±0.2      1.06                 6.36
J2239+0207    6.25       F210M    6.81±0.04      0.12                 0.60
                         F360M    7.43±0.05      0.34                 1.36                 9.8-10.4
                         F480M    11.32±0.09     0.44                 1.32
J1148+5251    6.40       F210M    120.5±0.2      0.39                 0.78
                         F360M    117.1±0.2      0.45                 2.70                 9.9-10.7
                         F410M    103.2±0.1      0.48                 1.92
J1120+0641    7.09       F210M    42.0±0.1       0.12                 0.48
                         F360M    53.2±0.1       0.20                 0.80                 9.7-10.3
                         F480M    48.3±0.1       0.31                 0.62

The measured quasar fluxes, and lower and upper limits on the galaxy fluxes, of our five high-z quasars. The lower limits on the galaxy flux were obtained by normalizing our reference PSF to match the flux of the quasar at the center of the image before subtracting, therefore subtracting the flux to zero near the center of the host galaxy (see Section <ref>). We then adjusted the normalization of the reference PSF to determine how much flux could be attributed to the host galaxy near the image center, and adopted the maximum flux with subtracted image and radial profile consistent with a Sérsic profile rather than a PSF as the upper limit on the galaxy flux. We then calculate lower and upper limits on the masses, in Column 7, by applying Equation <ref> to the limits on the host galaxy flux.

§.§ Galaxy Masses

Our five quasars possess high-mass supermassive black holes (log (M_BH[M_⊙]) = 8.8 to 9.8): the local relation therefore predicts large host galaxy masses, between log(M_*[M_⊙])=11 and 12. However, obtaining host galaxy masses is nontrivial, as each quasar has been observed in no more than three closely spaced bands. It is therefore not feasible to determine the host masses by performing e.g. spectral energy distribution (SED) fitting because of the degeneracies in models due to our small number of data points. We must therefore turn to another method.

The Spitzer Space Telescope observed hundreds of high-redshift AGN and inactive galaxies during its mission, and we can use these observations to construct an empirical relationship between the IRAC Band 1 (3.6 μm) flux, whose bandpass closely resembles NIRCam F360M, and galaxy mass from SED fitting. At the redshifts of our quasars, F360M probes the rest-frame optical and should therefore scale with the galaxy mass. <cit.>, <cit.>, and <cit.> model massive z∼5-8 galaxies with high-quality observations including IRAC Band 1 measurements.[We use the exponentially decaying star formation models of <cit.>, models E of <cit.>, and the <cit.> models including nebular emission.] These samples include high-mass, highly star-forming galaxies similar to those we expect to host our luminous quasars. To apply these models to estimate the masses of our high-z quasar hosts, we must correct all IRAC measurements to a single redshift: we choose z=6. At a different redshift, the filter will probe a different rest-frame wavelength; to explore the consequences of this effect, we use a model SED of a galaxy after 500 Myr of constant star formation to relate the measurements at these different rest wavelengths to each other. At the small levels of reddening (E(B-V) ∼ 0.05) typical of these galaxies, the SED is virtually flat (in frequency units) in the rest-frame optical, the wavelength of interest, and the K-corrections applied are therefore small.
Figure <ref> shows the masses of the <cit.>, <cit.>, and <cit.> galaxies after correcting the IRAC Band 1 flux densities to z=6. The errors on the <cit.> and <cit.> galaxies are directly from the references, but the quoted errors in <cit.> are much smaller and appear not to include systematic uncertainties. We therefore increase the error bars on the <cit.> points by 0.15 dex, making them similar to the errors quoted in the other references to ensure those points do not dominate our fits. We fit the relationship between the 3.6 μm flux density (adjusted to z = 6) and galaxy mass measurements by least squares minimization, holding the slope to 1 (i.e., the mass should be proportional to the 3.6μm flux density). If all the measurements are included in the fit, we reproduce the result for the host galaxy of J2239+0207 reported in S23. This may be an appropriate average value if the modeled galaxies represent the true distribution of star forming properties in the high redshift population. However, it is clear that the points in Figure <ref> fall into two populations. Very young galaxies (dominant stellar population age from SED fitting <10 Myr) and more typical, slightly older galaxies lie on different tracks, with very young galaxies displaying stronger 3.6 μm flux at a given mass. Including the very young galaxies increases our χ^2 and biases our fit, tending to return a lower galaxy mass for a given measured flux: using this fit may lead us to mistakenly conclude that our quasar host galaxies lie above the local relation even if they do not. A more conservative result is obtained by rejecting these very young galaxies: the resulting fit is shown by the solid line in Figure <ref>. This fit estimates masses 0.2 dex higher than a fit including all the galaxies. The reduced χ^2 for this fit is 1.98; if the single most discordant point (the value at log(F_ν[μJy]) = -0.31, log(M_* [M_⊙]) = 9.17) is rejected, the reduced χ^2 is 1.50. The equation of this best-fit line is log(M_* [M_⊙]) = log(F_ν[μJy]) + 10.20, where F_ν is the IRAC Band 1 flux at z=6 in μJy and the galaxy mass is in solar units. With this empirical relation, we can place lower and upper limits on the masses of our quasar host galaxies from the lower and upper limits on their fluxes. As with the galaxies in the models, we begin by correcting our lower limits (from aperture photometry, column 5 of Table <ref>) and upper limits (from varying the normalization in Section <ref>, column 6 of Table <ref>) on the galaxy fluxes to z=6; we again make the assumption that the galaxy SED is flat at the wavelengths of interest, around rest-frame 500 nm. We can then use these K-corrected fluxes and Equation <ref> to obtain a mass range for each quasar host galaxy. For the galaxies with an F360M flux (J2239+0207, J1148+5251, and J1120+0641), we utilize that flux to calculate the mass. For J1340+2813, we use the band at the most similar wavelengths, F410M. J0731+4459 also has an F410M flux, but because its F410M PSF-subtracted image displays more severe PSF subtraction artifacts than its F430M image, suggesting some degree of mismatch between its quasar and stellar PSF in that band, we use F430M to calculate its mass instead. Again, the rest-frame optical SED is very flat for these galaxies and using F410M or F430M instead of F360M does not probe a significantly different part of the SED. 
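Applied to the flux limits in the table above, the relation gives the quoted mass ranges directly. A short sketch follows; the small K-correction from each quasar's redshift to z=6 is omitted here, so the numbers are only approximate.

```python
import numpy as np

def log_mstar(flux_ujy_at_z6):
    """Empirical flux-to-mass relation: log M* = log F_nu + 10.20 (F_nu in uJy, at z = 6)."""
    return np.log10(flux_ujy_at_z6) + 10.20

# J2239+0207, F360M galaxy-flux limits from the table: 0.34 (lower) and 1.36 uJy (upper).
# Without the K-correction to z = 6 this gives ~9.7-10.3, close to the quoted 9.8-10.4.
print(log_mstar(0.34), log_mstar(1.36))
```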
The masses are reported in Column 7 of Table <ref>.§ DISCUSSIONThe existence of the supermassive black holes in very high redshift, very luminous quasars is a severe test of conventional theories that supermassive black holes and their host galaxies grow symbiotically from very early times. This issue is most challenging for the most massive SMBHs, e.g. those in luminous quasars. Our quasar host galaxy masses, reported above, appear to challenge this hypothesis, and imply that the local relation is not well established at z∼5-7. This section explores this possible contradiction in more depth. We begin by discussing theoretical predictions of host galaxy masses for quasars similar to our sample and comparing these predictions to our measurements. We then discuss whether our results could be missing the host galaxies or underestimating their masses because of extinction by interstellar dust. We conclude by showing that the deviation of our and other z∼5-7 quasar samples from the local relation is unlikely to be a consequence of selection effects. §.§ Testing Quasar-Galaxy Symbiotic Evolution A theoretical example of the growth of a galaxy and SMBH together is provided by <cit.> (see their Figure 1). In these particular models, the central supermassive black hole and its host galaxy have reached their final masses by z=5, with nearly all growth (particularly of the host galaxy) complete even earlier, by z=6. As such, between z=6 and z=5, the host galaxy is slightly overmassive compared to the black hole. <cit.> discuss the results of six large cosmological simulations with different approaches to SMBH-galaxy coevolution, and show that in the two with adequate statistics, high-luminosity quasars at z=5 reside in host galaxies in the 0.3 to 1 × 10^11 M_⊙ range, with relatively little scatter. Since the quasars we have studied are well above the minimum luminosity for this range, we expect them to lie in hosts of 1 × 10^11 M_⊙ (and larger); significantly more massive than the upper limits we have calculated for our quasar sample. We now consider whether the apparent contradiction to these predictions we have found might arise from other effects than the intrinsic quasar/host behavior. §.§.§ Effect of Intense Nuclear Star Formation Three members of our sample, J1148+5251, J1340+2813, and J2239+0207, have very large far-infrared luminosities indicative of star formation rates >> 1000 M_⊙ yr^-1 <cit.>. Such star forming regions are expected to be centrally concentrated within the host galaxy <cit.> and their frequent occurrence around high-redshift quasars may be directly related to the growth of the SMBHs and the energy output from the quasar <cit.>. The regions are sufficiently luminous and compact to approach the Eddington limit, where the radiation pressure from the young, hot stars blows the dust and with it the interstellar gas out of the region and quenches the star formation <cit.>. As a result, these extremely high levels of star formation observed in J1148+5251, J134+2813, and J2239+0207 are likely very short-lived; nonetheless, they can contribute significantly to the mass of the host galaxy (e.g., 5 Myr of creation of 2000 M_⊙/yr^-1 of stars yields 10^10 M_⊙ of stars for a conventional, i.e. Chabrier or Kroupa, initial mass function, although the actual mass function in these regions is probably more top-heavy and the mass yield therefore lower). 
Repetitive short bursts of this nature quenched by the Eddington limit process are proposed to be a mechanism for feeding black holes and activating AGN <cit.>.Although important to understand the co-evolution of the quasars and their host galaxies, the gas, dust, and star formation described above is centrally concentrated within a radius ≲ 1 kpc <cit.>, in the region made invisible in our images by the quasar and the PSF subtraction artifacts. Our galaxy masses and upper limits are derived for the extended, older, more quiescent part of the galaxy outside this central region. §.§.§ Host Galaxy Extinction Our failure to detect host galaxies might be attributed to galaxy-scale obscuration by interstellar dust. Our quasars fall into two broad categories. In J0731+4459 and J1120+0641 there is no evidence for ongoing strong nuclear star formation and the accompanying gas and dust and the extinction should be typical of z∼6 field galaxies. J1340+2813, J2239+0207, and J1148+5251, however, have very high far infrared luminosities <cit.> and should have significant obscuration in their central regions. We consider the first category first.Extinction in typical z∼6 galaxies: The galaxies in Figure <ref>, used to derive Equation <ref> and determine host galaxy masses, in general show very low levels of extinction. Is this a reasonable assumption to make for typical z∼6 galaxies in general, and the subset of our quasar sample without high levels of nuclear star formation? The galaxies from <cit.>, <cit.>, and <cit.>, predating JWST, were by necessity selected on the basis of the visible and near infrared measurements and therefore may be biased toward low extinctions. To first order, this selection does not undermine their use as mass indicators, but it could affect the application of Equation <ref> to other galaxies at z∼6, since they might have systematically higher extinction.<cit.>, using a mix of direct-temperature and strong-line metallicity calibrations, find typical metallicities of 12 + log(O/H) ∼ 8.2 for galaxies at z∼6 with masses of ∼ 10^10 M_⊙ (i.e., similar to the quasar hosts); this is a metallicity four times lower than the solar value <cit.>. The ratio of dust to gas mass is then expected to be an order of magnitude lower than in galaxies of approximately solar metallicity <cit.>. It is therefore not surprising that the extinctions are low in galaxies at z∼6, and the sources in Figure <ref> are likely not significantly biased towards uncommonly low extinction. The levels of extinction for infrared-selectedgalaxies (i.e., observed with JWST NIRCam) are discussed in <cit.> for a sample at 3 < z < 7.5. They find that about 76% of their sample has A_V < 0.5 (probed by the F150W-F444W color) and another 17% has 0.5 < A_V < 1. These extinction levels are only slightly higher than those for the sample shown in Figure <ref>. Assuming that the two quasars in our sample without large far-infrared excesses lie in relatively typical z∼6 galaxies, these low extinction levels support our assumption that the galaxy-scale interstellar extinction is low in J0731+4459 and J1120+0641. Extinction in highly star-forming z∼6 galaxies: However, the three other quasars in our sample (J1340+2813, J2239+0207, and J1148+5251) with huge far-infrared luminosities may be highly-dusty exceptions to the rule of generally low interstellar dust and extinction at z∼6. 
We examine this possibility using a sample of local LIRGs and ULIRGs, which, like our three quasars, exhibit large far-infrared excesses compared to typical galaxies at their redshift <cit.>. We use the J-K color difference to judge the extinction: these longer-wavelength bands trace old stars and this color is therefore relatively immune to differences in star formation history. We draw examples from the multi-aperture JHK photometry in <cit.>, selecting galaxies with infrared luminosity > 3 × 10^11 L_⊙ <cit.>. We calculate the color difference in all cases for annuli ranging from 5″ to 20″ away from the galaxy centers; these annuli contain half or slightly more of the total K-band flux within 20″, so 5″ is analogous to r_e in our fitting of the quasars. For comparison with typical galaxies (i.e. without significant far-infrared excesses), we have taken typical J-K colors in local bulges from <cit.>, and disk colors from <cit.>. To guard against cases with exceptional reddening or measurement errors, we quote the median values from these two references. Table <ref> summarizes the results; the extranuclear colors of these LIRGs and ULIRGs are indistinguishable from those of normal, non-infrared-excess galaxies. That is, despite the extreme levels of star formation in their cores, there is no evidence for significant extinction in the outer parts of the surrounding galaxies. This behavior is consistent with the theoretical predictions at the redshifts of our quasars by <cit.>, which predict that the gas, dust, and star-forming regions are all centrally concentrated in high-redshift extreme star-forming galaxies. In summary, our view of the central regions of these galaxies is obliterated by the quasars, so by necessity we are determining their masses (or upper limits to the masses) from the outskirts of the galaxy (regions outside ∼ r_e). We therefore expect little extinction in the regions we are using to constrain the PSF normalization and therefore the galaxy fluxes and masses, and conclude that the galaxies from <cit.> and <cit.> used to construct Equation <ref> are a reasonable comparison to our quasar hosts even in these three cases.

Summary of Observations of Outer Zones of Local ULIRGs

galaxy                  J-K     error   ref
bulge                   0.94    –       <cit.>
disk                    0.90    –       <cit.>
UGC 1315                0.98    0.09    <cit.>
UGC 4509                0.75    0.09    <cit.>
UGC 8387                0.79    0.09    <cit.>
UGC 8696                0.85    0.09    <cit.>
UGC 9913 (Arp 220)      0.90    0.09    <cit.>
UGC 12332 (NGC 7469)    1.01    0.09    <cit.>

§.§ Potential bias in quasar selections

Even with our conservative upper mass limits, all five of our quasars lie significantly above the local relation: the central SMBHs are overmassive compared to their host galaxies. Similar behavior is found for the EIGER quasar sample, whose members also lie above the local relation <cit.>. In contrast, the two quasars whose host galaxy emission was detected via subtraction of an empirical PSF by <cit.> (purple diamonds in Figure <ref>) have host galaxies that are roughly as massive as predicted from their black hole mass by the local relation of <cit.>. The spread of values, from black holes consistent with the local relation to cases where they are an order of magnitude more massive than it predicts, aligns with existing measurements of [C II] dynamical masses using ALMA and other sub-mm telescopes (see the small grey diamonds in Figure <ref>). Indeed, these ALMA results contradict the possibility that the local relation holds for gas masses rather than stellar masses at z∼6: ALMA dynamical masses at this redshift tend to fall above the local relation.
However, because the gas dynamics can be influenced by non-gravitational forces and the mass estimates are dependent on assumptions about the gas distribution around the SMBH, caution has been urged in the interpretation of these results until stellar masses could be determined <cit.>.Now that stellar masses are becoming available, their potential biases must also be considered.We used photometry-based estimates of host galaxy stellar masses for the low-redshift PG quasars in S23 to evaluate the effect of such biases in measuring M_BH/M_*. This approach should reproduce the biases proposed formeasurements of host galaxy masses for high redshift quasars <cit.>.This PG quasar sample is shown in Figure <ref> as open triangles. Although they show a large scatter, none of these quasars lie in the region of overly massive SMBHs occupied by the majority of the high-redshift quasars with measured host galaxy stellar masses, i.e., they do not show the expected bias to an extent that could explain the behavior of the high redshift quasars. ALMA and other sub-mm arrays have been used to determine dynamical masses in z>4 quasars from the width of the [C II] line. Potential biases in this approach were also studied in S23 using the PG quasars; the PG quasars used CO measurements, while high-z dynamical masses are typically measured with [C II], but the two lines are expected to have very similar behavior. Again, as shown in Figure <ref>, the results show a large scatter, but none of the PG quasars lie in the zone of overmassive SMBHs occupied by many high redshift systems. §.§ Implications for Co-Evolution of SMBHs and Host Galaxies It is therefore unlikely that the position of our quasars above the local relation is only a consequence of extinction effects on our estimates of the host masses, or of selection biases. Instead, we find that even the most massive SMBHs at z ∼ 6 share the behavior found for lower mass ones <cit.>, namely to show a large scatter in M_BH/M_* with a strong tendency toward over-massive SMBHs compared with the local Magorrian Relation. The small number of SMBHs discovered so far at even higher redshifts <cit.> share this behavior.It is often assumed that the dynamical mass is a good proxy for the stellar mass in these objects. This may not be the case: the host galaxy of J2239+0207, for example, has a stellar mass more than an order of magnitude less than its dynamical mass (S23). Such galaxies are not unheard of in the high-redshift Universe <cit.>. That is, results from both the far-IR (with sub-millimeter telescopes) and the rest-frame optical (with JWST) so far imply that the relationship between galaxy mass and black hole mass is not nearly as tight at high-z as it is in the local Universe.Specifically, the results appear to contradict predictions that the growth of galaxy mass will precede that of SMBH mass such as in <cit.>. They are also problematic for the some of the simulations summarized by <cit.>, which predict reasonably coordinated co-evolution. These results suggest that the growth of the SMBHs is not purely due to galaxy mergers, where the overall host galaxy mass grows and much of the interstellar gas falls to the center where it can be accreted by the SMBH. Nonetheless, the presence of metals and dust around these quasars <cit.> shows that they must lie in evolved populations of stars.The mystery of the rapid SMBH growth therefore persists <cit.>. 
Solutions may well require ways that the growth of SMBHs could be turbocharged to proceed much more rapidly than in the conventional major merger scenario, as discussed for example by <cit.>. § SUMMARY AND CONCLUSIONSWe obtained NIRCam images and NIRSpec IFU spectra of a sample of high-luminosity z∼5-7 quasars, and perform careful point-spread function subtraction to detect emission from their host galaxies. * Utilizing NIRCam medium bands and reference PSFs from dedicated reference star observations provides excellent quasar-star PSF agreement, allowing us to probe to within less than a kiloparsec of the galaxies' centers before PSF artifacts dominate.* We place lower and upper limits on the rest-frame optical fluxes of the host galaxies by iterating the normalization of the reference PSF, allowing us to precisely determine the maximum flux attributable to the host rather than the quasar. By relating the F360M flux to galaxy mass via Spitzer/IRAC Band 1 observations of galaxies from 5<z<8 in the literature, we find that our entire sample lies significantly above the local relation, with mass upper limits <10^11 M_⊙.* We calculate black hole masses using Hα and Hβ for the three quasars for which we obtained spectra, and find masses consistent with those from the literature.* We explore the possibility that our host galaxies are highly extincted and therefore appear to be less luminous than expected: based on the metallicities and extinctions of typical z∼6 galaxies and the dust morphology of local extremely star-forming galaxies analogous to the highly star-forming galaxies in our sample, we conclude that extinction at the radii we are able to probe with PSF subtraction is likely not significant. We also examine the possibility of selection biases driving the apparent deviation from the local relation observed at z∼6, and conclude that even bias towards finding overmassive black holes associated with luminous quasars cannot explain the abundance of overmassive SMBHs observed at high redshift.* Analysis of mm-wave gas dynamics measured for low-redshift PG quasars indicates that selection biases also do not influence these results sufficiently to account for the behavior of the high-z quasar hosts.* Our results appear to contradict evolutionary models where the SMBH growth at very high redshift proceeds much as it does locally, through the mergers of galaxies and injection of gas into the nucleus where it is accreted by the SMBH, with co-evolution of the SMBH and its host galaxy linked from early times. JWST's NIRCam and NIRSpec instruments are rapidly expanding the number of quasar host galaxies detected at z≳4, demonstrating that the relationship between SMBH and galaxy masses displays much more scatter at high-redshift than locally, and may in fact evolve with time.As the theory of SMBH-galaxy coevolution evolves, such high-redshift luminous quasars will continue to provide a challenge for models to explain the apparent disconnect between SMBH and galaxy masses at z∼6 now being uncovered with JWST. We thank Junyao Li for bringing a calculation error to our attention. MS, SA, JL, GR, and KH acknowledge support from the JWST Mid-Infrared Instrument (MIRI) grant 80NSSC18K0555, and the NIRCam science support contract NAS5-02105, both from NASA Goddard Space Flight Center to the University of Arizona. The JWST data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. 
The observations can be accessed via DOI: http://dx.doi.org/10.17909/4j9v-9q21

Facilities: JWST (NIRCam, NIRSpec)

Software: Astropy <cit.>, Matplotlib <cit.>, NumPy <cit.>, photutils <cit.>
Isotropic 3D topological phases with broken time reversal symmetry Hélène Spring1*, Anton R. Akhmerov1, Dániel Varjas2,3,41 Kavli Institute of Nanoscience, Delft University of Technology, P.O. Box 4056, 2600 GA Delft, The Netherlands 2 Department of Physics, Stockholm University, AlbaNova University Center, 106 91 Stockholm, Sweden 3 Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Strasse 38, 01187 Dresden, Germany 4 IFW Dresden and Würzburg-Dresden Cluster of Excellence ct.qmat, Helmholtzstr. 20, 01069 Dresden, Germany*[email protected]§ ABSTRACT Axial vectors, such as current or magnetization, are commonly used order parameters in time-reversal symmetry breaking systems. These vectors also break isotropy in three dimensional systems, lowering the spatial symmetry. We demonstrate that it is possible to construct a fully isotropic and inversion-symmetric three-dimensional medium where time-reversal symmetry is systematically broken. We devise a cubic crystal with scalar time-reversal symmetry breaking, implemented by hopping through chiral magnetic clusters along the crystal bonds. The presence of only the spatial symmetries of the crystal—finite rotation and inversion symmetry—is sufficient to protect a topological phase. The realization of this phase in amorphous systems with average continuous rotation symmetry yields a statistical topological insulator phase. We demonstrate the topological nature of our model by constructing a bulk integer topological invariant, which guarantees gapless surface spectrum on any surface with several overlapping Dirac nodes, analogous to crystalline mirror Chern insulators. We also show the expected transport properties of a three-dimensional statistical topological insulator, which remains critical on the surface for odd values of the invariant. § INTRODUCTION In 3D, breaking time-reversal symmetry without breaking isotropy is difficult A three-dimensional (3D) isotropic medium has the highest degree of spatial symmetry. Unless they are explicitly broken, non-spatial symmetries like time-reversal symmetry (TRS) are also present in isotropic systems. Removing TRS typically also breaks isotropy, for example ferromagnets break TRS but also break rotation symmetry along the axes which are not parallel to the magnetization. Antiferromagnets restore some spatial symmetries such as the product of inversion and TRS, but also break rotation symmetry <cit.>. The spatial symmetries are partially restored in altermagnets <cit.>—a recently proposed class of materials combining lack of net magnetization with a spin splitting away from away from high-symmetry momenta, however even in these materials the magnetic order is incompatible with full isotropy.Isotropic systems without TRS are topological The spatial symmetries of a system are relevant both for defining and protecting topological phases <cit.>. While initially considered to be susceptible to disorder, topological systems relying on spatial symmetries were later shown to be protected from localization as long as the disordered ensemble respects the spatial symmetries <cit.>. This protection by an `average' symmetry, a hallmark of statistical topological insulators, is especially powerful in isotropic amorphous media. 
In an earlier work we demonstrated that unlike their crystalline counterparts—where the spatial symmetry is only preserved by certain crystal terminations—it is possible to utilize the isotropy of a 2D amorphous medium to extend the topological protection to any edge of the system <cit.>.we devise isotropic models that break TRS and are topological Motivated by the two above considerations, we ask whether it is possible to find a model hosting a topological phase protected only by spatial symmetries. Because both TRS and average TRS protect topological phases, we additionally require that the desired model also breaks TRS on average. By designing a scalar, rather than a vector TRS breaking order, we answer positively to the above question. Specifically we demonstrate that the spatial symmetries present in 3D isotropic media protect topological phases, and that the amorphous realization of such a system is a statistical topological insulator phase.The organization of the manuscript is as follows. In Sec. <ref> we formulate an isotropic continuum model where TRS is systematically broken. We present a microscopic Hamiltonian that replicates this model when assembled into a crystal structure, and we present results for the amorphous realization of this model. In Sec. <ref> we demonstrate the topological nature of our models by formulating bulk invariants, examining surface dispersions, and analyzing transport of the topologically protected surface modes. As established in the study of statistical topological insulator phases, we show that the model localizes when its degrees of freedom are doubled. We conclude in Sec. <ref>.§ SYMMETRY ANALYSIS §.§ Continuum modelwe construct an isotropic continuum model using Qsymm In order to guide the construction of a microscopic model, we begin from developing a minimal continuum model with the desired symmetries using the software package Qsymm <cit.>. We follow the procedure outlined in Ref. <cit.>. We start by generating a minimal 2D Dirac Hamiltonian. The mass terms present in this minimal Hamiltonian are capable of gapping out the spectrum. We then search for all of the symmetry representations of inversion and continuous rotation symmetry that remove the mass terms of the minimal Hamiltonian, thereby ensuring that the spatial symmetries prevent a gap from opening. These 2D Hamiltonians correspond to the surfaces of 3D topological bulk models in the same symmetry class. By utilizing the isotropy, we extend the symmetry representations from 2D to 3D to obtain the 3D bulk phases. The symmetry representations of the spatial symmetries are listed in App. <ref>, Eq. (<ref>) and (<ref>). The resulting k-space model is of the form:H_4×4(*k)=(μ_1+t_2k^2)σ_0(τ_0+τ_z)/2 +(μ_2+t_3k^2)σ_0(τ_0-τ_z)/2 +(-t_1+t_4k^2)*σ*kτ_y + (-t_0+t_5k^2)*σ*kτ_x,where μ_i are chemical potential terms, t_i are the hopping terms, σ and τ are the Pauli matrices, with τ representing the orbital space and σ representing spin space, *k=(k_x,k_y,k_z), and k^2=*k*k.Limiting the model to terms quadratic in k means a k-dependent transformation of the form exp(iσ_zϕ) is capable of removing the relative hopping phases and restoring a TRS-like symmetry. Therefore, the model includes terms up to k^3 in order to remove this residual symmetry.Despite lacking TRS, the high degree of spatial symmetry of this model protects the twofold spin degeneracy of all bands. For a fixed *k, the eigenstates of (<ref>) are eigenstates of the angular momentum operator in the direction parallel to *k. 
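Both statements, the inversion symmetry with U_ℐ = σ_0τ_z and the twofold degeneracy of the bands at fixed *k, can be checked directly. The following minimal numpy sketch uses arbitrarily chosen parameter values (an illustration only, not parameters used elsewhere in this work) and keeps the terms of the model up to quadratic order in k.

```python
import numpy as np

s0, sx = np.eye(2), np.array([[0, 1], [1, 0]])
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])
Pp, Pm = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])       # (tau0 +/- tauz)/2

def H(k, mu1=0.1, mu2=0.2, t0=1.0, t1=0.8, t2=0.3, t3=-0.4, t4=0.2, t5=0.1):
    """Continuum Hamiltonian of the model above; parameter values are arbitrary."""
    k = np.asarray(k, dtype=float)
    k2 = k @ k
    sk = k[0] * sx + k[1] * sy + k[2] * sz              # sigma . k
    return (np.kron(s0, (mu1 + t2 * k2) * Pp + (mu2 + t3 * k2) * Pm)
            + (-t1 + t4 * k2) * np.kron(sk, sy)          # sigma.k tau_y term
            + (-t0 + t5 * k2) * np.kron(sk, sx))         # sigma.k tau_x term

U_I = np.kron(s0, sz)                                    # inversion: sigma_0 tau_z
k = np.array([0.3, -0.7, 0.5])                           # a generic momentum

# Inversion symmetry: U_I H(-k) U_I^dag = H(k)
assert np.allclose(U_I @ H(-k) @ U_I.conj().T, H(k))

# Twofold degeneracy of every band at generic k, despite the broken TRS
e = np.linalg.eigvalsh(H(k))
assert np.allclose(e[::2], e[1::2])
```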
Mirror symmetry exchanges states with opposite angular momentum, thereby ensuring the degeneracy of the spin bands. §.§ Microscopic implementationWe devise a bond that breaks TRS but preserves inversion Based on the symmetry-allowed terms of the continuum model (<ref>), we now construct a microscopic model that preserves isotropy while breaking TRS. The minimal model contains two orbitals that have opposite inversion eigenvalues, which we choose as an s and a p orbital. We choose the σ degree of freedom to correspond to the electron spin, which makes the last four terms of Eq. (<ref>) spin-orbit-like, although with an additional k-dependent phase shift necessary to break TRS. In order to realize these spin-orbit-like hoppings in a microscopic model, we therefore consider two separate atoms that host spinful s and p_x,y,z orbitals respectively, as illustrated in Fig. <ref>(a). For the purpose of obtaining a minimal model, we separate the p orbitals into p_3/2 and p_1/2 orbitals with an atomic spin-orbit coupling, and consider only the lower-energy p_1/2,↑↓ subspace.In order to break TRS, we introduce magnetic atoms between the s and p orbitals. Hopping between the two atoms occurs through a virtual process via four s orbitals on a plane perpendicular to the s–p bond axis, located on the middle of the bond [Fig. <ref>(a)]. These intermediate s orbitals each host a magnetic moment, such that together they form a chiral magnetic texture in the plane that contains them. The curl of the magnetic texture defines a TRS-odd vector, that combined with the hopping vector *r, defines a scalar quantity (*M)*r. This is the desired source of scalar TRS breaking. Tiling the space with such s–p bonds restores spatial symmetries, while keeping TRS broken.The Hamiltonian of an x-aligned s–p bond is:H_m= E_s ∑_σ|s_σ⟩⟨s_σ| + E_p ∑_i,σ|p_iσ⟩⟨p_iσ| + ∑_n,σ(Δ|s_nσ⟩⟨s_nσ| + t_s|s_σ⟩⟨s_nσ| + h.c.) + ∑_i,n,σ(t_in|p_iσ⟩⟨s_nσ| + h.c.) + α*̂L̂_p*̂σ̂_p + ∑_n*B_n*̂σ̂_n,where σ∈{↑,↓}, i ∈{x,y,z}, n ∈{1,2,3,4}, |s_σ⟩ are the spinful s orbital states, |s_nσ⟩ are the mid-bond magnetic s_n orbitals, |p_iσ⟩ are the p_x,y,z orbitals, E_s/p are the onsite energies of the s and p orbitals, Δ is the onsite energy of the mid-bond s_n orbitals, α is the magnitude of the atomic spin-orbit coupling splitting on the p orbitals, *̂σ̂_p/n are the spin operators on the p and s_n orbitals, *̂L̂_p are the orbital angular momentum operators on the p-orbitals, *B_n are the magnetic moments of the s_n orbitals. Finally, t_in are the amplitudes of the s_n– p_i hopping, determined by whether the hopping between the p_x,y,z orbitals and the s_n orbitals takes place via the positive or negative lobes of the p orbitals:t_in=t_xδ_ix+t_yzδ_iysgn(y_n)+t_yzδ_izsgn(z_n)where y_n and z_n are the y and z coordinates of the s_n orbitals and sgn(0)=0.We consider certain limits and obtain an expression for the effective hopping We use the Python software package Pymablock <cit.> to obtain the effective hopping t_sp between the s and p_1/2 orbitals as a second-order perturbation. We find that the resulting terms have the desired symmetries by substituting in arbitrary parameters. We demonstrate this result in a limiting case defined by the set of inequalities α≫Δ+B ≫Δ-B ≫ E_s,E_p-α,t_s,t_x/y/z, which holds when spin-orbit coupling is large, and hopping only occurs via the lower-energy virtual level Δ-B. The resulting expression for the effective hopping amplitude is:t_sp=t_s(2t_x-it_yz)/√(3)(Δ-B)iσ_x.This hopping has a complex hopping phase, which breaks TRS. 
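A small numerical illustration of the effective hopping above, assuming (for illustration only) different exponential decays of t_x and t_yz with bond length and suppressing the overall iσ_x spin structure, shows that the hopping phase then varies with the bond length.

```python
import numpy as np

def t_sp(d, t_s=1.0, delta_minus_b=2.0, tx0=1.0, tyz0=0.8, lx=1.0, lyz=0.6):
    """Effective s-p hopping amplitude at bond length d.
    The exponential decays of t_x and t_yz (lengths lx, lyz) are illustrative assumptions."""
    t_x = tx0 * np.exp(-d / lx)
    t_yz = tyz0 * np.exp(-d / lyz)
    return t_s * (2 * t_x - 1j * t_yz) / (np.sqrt(3) * delta_minus_b)

for d in (0.8, 1.0, 1.2):
    print(d, abs(t_sp(d)), np.angle(t_sp(d)))   # the hopping phase changes with bond length
```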
In order to ensure that the hopping phase cannot be removed by a global basis-transformation introducing a relative phase between the s and p wavefunctions, the hopping phase must be distance dependent. This arises naturally due to the different distance dependence of the microscopic hopping amplitudes from the p_x and p_y,z orbitals. Hopping terms along directions other than x follow from applying rotation operators, resulting in hopping terms proportional to *d·*σ where *d is the hopping vector. §.§ Spin splitting in a crystalwe construct a crystal from the bond, and it breaks TRS Because the scalar TRS breaking is insufficient to cause a spin splitting in an isotropic medium, we demonstrate the spin splitting in a crystal structure. We use the s and p atoms as the basis of the rock salt crystal structure [Fig. <ref>(b)] with full cubic (O_h) symmetry. In this model, orbitals of the same type are connected by normal hopping, and orbitals of different types are connected by the complex spin-orbit hopping of (<ref>), resulting in terms off-diagonal in the orbital (τ) space. Because the symmetry-breaking mechanism relies on the nontrivial distance-dependence of the hopping phase, we include both nearest-neighbor as well as third-nearest-neighbor s–p hopping [Fig. <ref>(b)]. The tight-binding Hamiltonian thus takes the form:H_salt =(μ_1+t_1∑_*d_2e^i *k*d_2)σ_0(τ_0+τ_z)/2 + (μ_2+t_2∑_*d_2e^i *k*d_2)σ_0(τ_0-τ_z)/2 +i/a(∑_*d_1e^i *k*d_1*d_1 *σ)( t_3 τ_+ + t_3^* τ_-) +i/a(∑_*d_3e^i *k*d_3*d_3 *σ)( t_4 τ_+ + t_4^* τ_-),where a is the cubic cell lattice constant, σ_±=1/2(σ_x± iσ_y), and similarly for τ_±. *d_1 runs over the six nearest-neighbor bonds symmetry-equivalent to a/2(1,0,0), *d_2 over the twelve next-nearest neighbor bonds symmetry-equivalent to a/2(1,1,0), and *d_3 over the eight next-next-nearest neighbor bonds symmetry-equivalent to a/2(1, 1, 1). The terms of Eq. (<ref>) proportional to t_1 and t_2 are the next-nearest neighbor s-s and p-p normal hoppings respectively [dashed lines of Fig. <ref>(b)], where t_1 and t_2 are both real. The terms proportional to t_3 and t_4 are the nearest and next-next-nearest neighbor s–p hoppings respectively [solid lines of Fig. <ref>(b)], with t_3 and t_4 complex. This Bloch Hamiltonian reproduces the symmetry-allowed terms of the continuum model (<ref>) in the long-wavelength limit, aside from an additional cubic anisotropy term and a slight change of parametrization.The lattice has the spatial symmetries we expect and breaks TRS The tight-binding model (<ref>) preserves the space group of the rock salt crystal structure [see App. <ref>]. The spin-orbit-like s–p hopping terms alternate in sign along the hopping axes in order to preserve inversion symmetry. We select the parameters μ_1=0.1,μ_2=0.2,t_1=0.3,t_2=-0.4,t_3 = exp(0.3i),t_4=0.2iexp(0.3i). The dispersion relation shows that the spin bands are split away from high-symmetry points and lines that have at least a rotation and a mirror symmetry, demonstrating that TRS is broken [Fig. <ref>(c)]. The TRS-breaking terms of our model are next-next-nearest neighbor terms, which leads to linear TRS-breaking terms intrinsically cancelling out and only cubic terms remaining. The surface dispersion shows gapless, propagating surface modes within the bulk gap [Fig. <ref>(d)]. §.§ Amorphous realizationthe amorphous model is also topological, like the continuum model Amorphous systems possess average continuous rotation symmetry, average reflection and average inversion <cit.>. 
Since the scalar TRS-breaking mechanism is independent of bond orientation, an amorphous realization of the crystal model (<ref>) possesses ensemble isotropy while also systematically breaking time-reversal.We construct amorphous systems using the same procedure as in Ref. <cit.>, treating system sites as hard spheres. Rather than simulating an amorphous version of the crystal defined in Sec. <ref>, with two families of atoms and two degrees of freedom per atom, for simplicity and without loss of generality we simulate one type of atom with four degrees of freedom. We define a minimal real-space model using Qsymm. To further examine the extent of topological protection, we also define a model with twice the degrees of freedom and two protected Dirac cones on the surface in the continuum limit (see App. <ref> for the full definition of both models). We examine the spectral functions of the minimal model, and confirm the joint presence of a spectral gap and the lack of spin splitting [Fig. <ref>(a)], as expected from the symmetry analysis of the continuum model. The surface spectral function confirms the presence of gapless surface modes within the bulk gap [Fig. <ref>(b)].§ TOPOLOGICAL PROPERTIES §.§ Bulk invariantsThe class A model has 3 equivalent invariants based on inversion sectors, the berry curvature and mirror and rotation sectors To define the topological invariants, we observe that the high spatial symmetry guarantees that the protected band gap closings only occur at high symmetry momenta: *k=*0 and *k=*∞ for the amorphous system. To compute the k-space topological invariant we use an effective k-space Hamiltonian H_eff that we obtain by inverting the single-particle Green's function that we project onto the plane wave basis, as described in Refs. <cit.>.The invariants of 3D statistical topological insulators are constructed from the invariants of 2D strong topological phases <cit.>. The invariant of 2D class A systems is the Chern number, given by the integral of the Berry curvature over the 2D Brillouin zone at the Fermi energy. Our 3D class A model relies on mirror symmetry to protect its surface modes. Therefore a possible bulk invariant of this model is a mirror Chern number, given by the difference in Chern numbers of opposite mirror sectors:C_M = 1/2(C_+-C_-), C_±=ℱ_±(*k)d^2*k,where the integral runs over a compactified mirror-invariant plane ℝ^2∪{*∞} <cit.> (e.g. k_z=0, invariant under the mirror operator k_z→-k_z with U_M_z = ℐexp(iπ S_z)), and ℱ_± is the Berry curvature of the even/odd (± i eigenvalue) mirror sub-blocks of the Hamiltonian. The invariant for crystal systems has the same form for a mirror-invariant plane in the crystal Brillouin zone<cit.>. However, because both the systems have inversion and rotation symmetries, the mirror Chern number can also be expressed in terms of rotation and inversion eigenvalues at high-symmetry momenta. Numerical results and a further discussion of invariants of the amorphous system are found in App. <ref>. §.§ Surface spectrum although the invariants are Z2, the doubled model is still topologically protected As demonstrated in Fig. <ref>(d) for the crystalline system, the high-symmetry surface of the C_M = 1 model hosts a single Dirac cone, and multiple Dirac cones remain protected for C_M > 1. We expect that the high degree of ensemble averaged spatial symmetry of the amorphous Hamiltonian prevents surface states from being gapped out on any surface both for the single and doubled model (C_M = 1 and 2 respectively). 
We confirm this by numerically computing the surface spectral functionA(E, *k) = ∑_l ⟨*k, l|δ(H - E) |*k, l⟩,using the Kernel polynomial method<cit.>. Here H is the real-space Hamiltonian of a finite slab, l runs over the internal degrees of freedom, and |*k, l⟩ is a plane-wave state localized on one surface.Both the original and doubled amorphous models have a nonzero surface density of states in the bulk gap, with one or two Dirac nodes located at zero momentum. [Fig. <ref>(b,c)]. This is a consequence of the nontrivial topology of the effective Hamiltonian, or equivalently, of the disorder-avaraged Green's function. The surface spectral function in the k_x direction probes the topology of the k_y = 0 cut of the bulk effective Hamiltonian, which is invariant under M_y in the thermodynamic limit. This allows decomposition into two mirror sectors, each of which is a Chern insulator, resulting in an edge spectrum with C_M pairs of counter-propagating chiral edge states crossing the bulk gap. The modes with different chirality correspond to different mirror sectors, hence they are protected from gapping out by disorder that respects the mirror symmetry on average. The surface states are insensitive to the details of the boundary, and only gap out when the symmetries protecting the topological phase (rotations and mirrors normal to the surface) are broken on average [Fig. <ref>(d,e)]. §.§ Surface transport Reference <cit.> conjectures that only the ℤ_2 part of the invariant provides topological protection, or in other words, that only the surface states of systems with odd C_M are protected from localization. In a crystalline system, the surface has an ensemble point group symmetry, and its localization properties are therefore equivalent to a doubled Chalker-Coddington network model, which has a localized phase with an anomalously large localization length <cit.>. The conjecture, however, was not confirmed for 3D phases with continuous rotation symmetries, such as our amorphous model. To confirm the conjecture, we simulate the surface transport properties using amorphous network models.we confirm the conjecture for the regular system We first simulate the transport properties of the regular network model as a baseline for the comparison. In the presence of disorder that preserves the spatial symmetries on average, the surface of the crystalline phase is equivalent to a critical Chern insulator. We simulate its transport properties with the Chalker-Coddington network model on the square lattice <cit.>. We fix the aspect ratio of the network to 1 and impose periodic boundary conditions along the y direction [Fig. <ref>(a)]. The scattering matrices at each node of the network are random 2×2 matrices sampled from a Haar-distributed U(2) ensemble. The conductance through the system is:G = e^2/h∑_i T_i,where T_i are the transmission probabilities from the modes entering one side of the network to the modes exiting on the other side. Since the aspect ratio equals to 1, the system conductivity g = G. We calculate the average conductivity ⟨ g ⟩ as a function of system size L and reproduce the known result ⟨ g ⟩≈ 0.50.6e^2/ħ <cit.> [Fig. <ref>(d)], with the slow increase as a function of L due to finite-size effects. We investigate the localization properties of the double Dirac cone model by doubling the number of modes on each link, as shown schematically in Fig. <ref>(c). This system is expected to localize, based on both numerical <cit.> and analytical <cit.> studies. 
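Two ingredients of this calculation, the Haar-distributed node scattering matrices and the Landauer sum giving the conductance, can be sketched as follows. Assembling the full network scattering matrix from the node matrices is not shown; `S_total` below stands for that assembled matrix and is a hypothetical input.

```python
import numpy as np
from scipy.stats import unitary_group

# Haar-distributed U(2) node scattering matrices
# (4x4 matrices from the circular unitary ensemble for the doubled network).
node_matrices = [unitary_group.rvs(2) for _ in range(1000)]

def conductance(S_total, n_in):
    """Landauer conductance in units of e^2/h from the full-network scattering matrix.

    S_total relates the n_in modes entering one edge and the modes entering the other
    edge to the outgoing modes; t is the transmission block between the two edges."""
    t = S_total[n_in:, :n_in]
    return np.real(np.trace(t.conj().T @ t))   # sum of transmission probabilities T_i
```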
We draw the 4×4 scattering matrices of the doubled networks from the circular unitary ensemble and confirm localization at system sizes of several thousand sites [Fig. <ref>(d)].we check the conjecture for an amorphous network model We now simulate the conductance of our amorphous model, in order to determine whether the average continuous rotation symmetry has an effect on the conductance properties of the system. We define an amorphous 2D network model in order to simulate the average rotation symmetry using a fourfold coordinated random graph <cit.>, for details of the construction of the amorphous network see App. <ref>. We use an annulus geometry in order to avoid issues constructing the network with periodic boundary conditions, and numerically calculate the conductance through the bulk from the modes entering the outer edge to the modes exiting the inner edge of the annulus [Fig. <ref>(b)]. The conductance G is calculated using (<ref>), and the conductivity of the annulus equals:g = 1/2πGlog(R/r),where R and r are the outer and inner radii of the annulus respectively. The results for the amorphous network closely follow the results for the regular network: the single Dirac cone conductivity falls within the 0.5-0.6e^2/ħ range for small L and increases due to finite-size effects, and the double Dirac cone network localizes [Fig. <ref>(d)]. These observations confirm that a doubled phase transition is not protected from localization, even in the presence of average isotropy.§ CONCLUSION AND DISCUSSIONwe found that isotropic media with only spatial symmetries are topological In this work, we found that 3D isotropic systems breaking all non-spatial symmetries host topologically protected phases of matter. We devised a rotation- and inversion-symmetric continuum model with broken time-reversal symmetry, and presented a microscopic realization of this model in amorphous matter with average isotropy. We constructed a bulk ℤ invariant—expressible both in terms of symmetry eigenvalues and mirror Chern numbers—corresponding to the number of protected ungappable surface Dirac cones, which we numerically demonstrated.transport results We simulated the transport of our models using both regular and amorphous network models with random scattering at each node. We found results consistent with critical scaling, deviations from which are likely due to finite-size effects. Upon doubling the degrees of freedom in both the regular and amorphous networks, the modes localize as conjectured in Refs. <cit.>. Even though any number of surface Dirac cones are protected from gapping out, only an odd number are protected from localization.the symmetries of the isotropic/amorphous model can be tested by applying a low frequency field to the system Due to the combination of average continuous rotation symmetry and inversion symmetry, the spin bands in the bulk of the amorphous system are doubly degenerate. This raises the question whether the systematic breaking of TRS leads to a macroscopic change in the material properties. Enumerating the possible non-dissipative electromagnetic responses compatible with isotropy and inversion-symmetry, but forbidden by TRS, we find *P∝*E*B, electrical polarization parallel to the Poynting vector. This second-order response is distinct from the circular photogalvanic effect <cit.>, which only manifests in systems with broken inversion symmetry, and should therefore be absent in our system. 
The combination of these two responses therefore serves as a probe of the scalar TRS breaking. A natural further question is: what is the classification of isotropic three-dimensional media with or without inversion symmetry in the other Altland-Zirnbauer symmetry classes <cit.>? The topological invariants outlined in this work remain valid if we also include TRS besides isotropy and inversion symmetry. Our models are compatible with prescribing TRS with the usual representation 𝒯 = exp(iπ S_y) 𝒦, which fixes some parameters, but does not forbid any topological phases. In this case odd values of C_M correspond to an amorphous strong topological insulator <cit.>; however, the gapless surface Dirac cones remain protected by mirror symmetry for even values as well. To our knowledge, TRS does not enrich the classification in the presence of isotropy and inversion symmetry, and the classification with isotropy, broken inversion and unbroken TRS is the same as the strong ℤ_2 classification with TRS only. There is, however, an interesting possibility that isotropy and the protection of the surface density of states in a doubled phase prevent the surface conductivity from going below the metal-insulator critical point, thereby guaranteeing that the surface stays metallic. We leave an investigation of these properties to future work. Our microscopic model, relying on orbital-selective hoppings through chiral magnetic clusters, demonstrates the difficulty of constructing a time-reversal odd, inversion even, scalar order parameter. In our case the order parameter is *P·(∇×*M), electric polarization times bound current. Analyzing an effective field theory displaying such an order parameter without other symmetry breaking would shed further light on the properties of this class of isotropic magnetic materials.

§ DATA AVAILABILITY

The data shown in the figures, as well as the code generating all of the data, are available at <cit.>.

§ AUTHOR CONTRIBUTIONS

D. V. proposed the initial project idea; all authors contributed to creating the research plan and later refining it. D. V. formulated the bulk invariants. A. A. and D. V. devised the microscopic system and the scalar time-reversal breaking mechanism. D. V. wrote the code generating amorphous structures and computing the spectral functions. A. A. wrote the code for constructing and solving the network models. H. S. performed the numerical simulations and wrote the manuscript with input from all authors. A. A. managed the project with input from all authors.

§ ACKNOWLEDGMENTS

D. V. thanks Roderich Moessner for useful discussions. The authors thank Elizabeth Dresselhaus and Bjorn Sbierski for sharing their network model code. The authors thank Isidora Araya Day for helping to set up and perform Pymablock calculations. A. A. and H. S. were supported by NWO VIDI grant 016.Vidi.189.180 and by the Netherlands Organization for Scientific Research (NWO/OCW) as part of the Frontiers of Nanoscience program. D.V.
was supported by the Swedish Research Council (VR), the Knut and Alice Wallenberg Foundation, and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy through the Würzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter – ct.qmat (EXC 2147, project-id 57002544).equationsection figuresection§ MODEL HAMILTONIANSWe use Qsymm to generate 3D class A models that respect inversion symmetry and isotropic continuous rotation symmetry, whose symmetry representations are:U_ℐ=σ_0τ_z,S_x=1/2σ_xτ_0,S_y=1/2σ_yτ_0,S_z=1/2σ_zτ_0,where U_ℐ is the unitary part of the inversion operator, S_x,y,z are the generators of continuous spin rotations around the x, y, and z axes, and the unitary part of the corresponding rotation operator is given by U = exp(i *n*S) with *n the axis and angle of rotation, and τ,σ are the Pauli matrices. τ represents the orbital component, and σ the spin component of the Hilbert space. The resulting model also has reflection symmetry on any 2D plane,U_ℳ_x= iσ_xτ_z,U_ℳ_y= iσ_yτ_z,U_ℳ_z= iσ_zτ_z,where U_ℳ_x,y,z is the unitary part of the reflection operators on the planes perpendicular to the x, y and z axes, or in general,U_ℳ_*̂n̂= exp(i π*̂n̂*S)τ_z,where *̂n̂ is a unit vector defining the mirror normal. Because of the full rotation invariance, prescribing one mirror symmetry results in mirror symmetry with respect to any plane.The generated k-space model is listed in the main text in Eq. (<ref>). In real-space, the model is of the form: H^onsite_4×4 =μ_1σ_0(τ_0+τ_z)/2 +μ_2σ_0(τ_0-τ_z)/2, H^hopping_4×4(*d) =(tn_1+t_2d^2)σ_0(τ_0+τ_z)/2 +(tn_2+t_3d^2)σ_0(τ_0-τ_z)/2 +(t_0-t_5d^2)*σ*dτ_y + (t_1+t_4d^2)*σ*dτ_x,where tn_i are normal hopping terms, *d=(d_x,d_y,d_z), with d_i the bond lengths along axis i∈{x,y,z} that connect neighboring sites, and d^2=*d*d.When demonstrating that symmetry-breaking gaps out the surface Dirac-nodes, we introduce a mass term that breaks all symmetries except for continuous rotation around the x axis:λ=(σ_0+σ_x)τ_y.We also construct a doubled model. In k-space, this model takes the form:H_8×8(*k) =1/2(ρ_0+ρ_z)σ_0(μ_1(τ_0+τ_z)/2+μ_2(τ_0-τ_z)/2) + 1/2(ρ_0-ρ_z)σ_0(μ_3(tτ_0+τ_z)/2+μ_4(τ_0-τ_z)/2)+(t_0(ρ_0+ρ_z)/2+t_3(ρ_0-ρ_z)/2)*σ*kτ_x -(t_4(ρ_0+ρ_z)/2+t_7(ρ_0-ρ_z)/2)*σ*kτ_y +(t_1+it_5)ρ_-*σ*kτ_- +(t_1-it_5)ρ_+(*σ*kτ_-)^†+(t_2+it_6)ρ_-*σ*kτ_+ +(t_2-it_6)ρ_+(*σ*kτ_+)^†,where μ_i are chemical potential terms, t_i are the hopping terms, ρ, σ and τ are the Pauli matrices, *k=(k_x,k_y,k_z), and k^2=*k*k. In real space, the model takes the form:H^onsite_8×8 =1/2(ρ_0+ρ_z)σ_0(μ_1(τ_0+τ_z)/2+μ_2(τ_0-τ_z)/2), + 1/2(ρ_0-ρ_z)σ_0(μ_3(τ_0+τ_z)/2+μ_4 (τ_0-τ_z)/2)H^hopping_8×8(*d) =1/2(ρ_0+ρ_z)σ_0(tn_1(τ_0+τ_z)/2+tn_2(τ_0-τ_z)/2) + 1/2(ρ_0-ρ_z)σ_0(tn_3(τ_0+τ_z)/2+tn_4(τ_0-τ_z)/2)+(it_0(ρ_0+ρ_z)/2+it_3(ρ_0-ρ_z)/2)*σ*dτ_x -(it_4(ρ_0+ρ_z)/2+it_7(ρ_0-ρ_z)/2)*σ*dτ_y +(-t_5+it_2)ρ_-*σ*dτ_- +(t_5+it_2)ρ_+(*σ*dτ_-)^†+(-t_6+it_1)ρ_-*σ*dτ_+ +(t_2+it_6)ρ_+(*σ*dτ_+)^†,where tn_i are normal hopping terms, *d=(d_x,d_y,d_z), with d_i the bond lengths along axis i∈{x,y,z} that connect neighboring sites, and d^2=*d*d. The symmetry-breaking term for the doubled model isλ^'= [ 1 1; 1 1 ]⊗[ 1 1; 1 1 ]⊗τ_y. § MODEL AND PLOTTING PARAMETERSIn this section additional details of the plots are listed in order of appearance.For panel (c) of Fig. <ref> the Hamiltonian (<ref>) was simulated using kwant <cit.> on a translationally invariant 3D face-centered cubic (FCC) lattice. 
Its eigenvalues were obtained along the high-symmetry points of the FCC lattice, using the parameters μ_1=0.1, μ_2=0.2, t_1=0.3, t_2=-0.4, t_3=exp(0.3i), t_4=0.2iexp(0.3i). For the dispersion shown in panel (d), a slab was simulated, periodic along the vectors [1,0,0] and [0,1,0], and with a width of 20 sites in the [0,0,1] direction. The parameters used are the same as for panel (c).

For panel (a) of Fig. <ref>, the Chalker-Coddington network is composed of four unit cells in both x and y. For panel (b), the amorphous network was created with an outer radius of R=20, an inner radius of r=4, and a density of 1. The positions of the nodes of the network underwent a relaxation step where the position of each node is sequentially averaged over the position of all neighboring nodes. For panel (d), the results for the single-mode Chalker-Coddington network were obtained for 249 different random scattering matrix configurations, for network sizes of 36, 72, 144, 288, 576, 1152, 2304 and 4608 unit cells, with an aspect ratio of 1. The results for the two-mode Chalker-Coddington network were obtained for the same network sizes and aspect ratio, and for 269 different scattering matrix configurations. For the amorphous network, the results were obtained for 50 outer radii between 10^1.5 and 10^2.5, with a fixed ratio of outer to inner radius of 1.5, and a density of 0.7. Results for the single-mode network were obtained for 500 different amorphous network and scattering matrix configurations, and 300 different configurations for the two-mode amorphous network. Additional results for the single-mode network were obtained for 5 outer radii between 10^2.5 and 10^3, for 100 different network configurations and scattering matrices.

For Fig. <ref>(a), the single Dirac cone model as defined in Eq. (<ref>) was used. Its parameters were set to μ_1=-1, μ_2=1, tn_1=0, tn_2=0, t_0=0.5, t_1=0.4, t_2=1, t_3=-1, t_4=0.3, t_5=0.8, and the additional symmetry-breaking term λ from Eq. (<ref>) is set to 0. For panels (b) and (d) the same model as for panel (a) was used. Its parameters were set to μ_1=1, μ_2=-1, tn_1=-2, tn_2=2, t_0=1, t_1=1, t_2=1.1, t_3=1.2, t_4=1.3, t_5=1.25, and the additional symmetry-breaking term λ from Eq. (<ref>) is set to 0. The results were obtained for k-points between -π and π. For panels (d) and (e), λ is set to 0.3. For the doubled model as defined in Eq. (<ref>), the parameters were set to μ_1=1, μ_2=-1, μ_3=1, μ_4=-1, tn_1=-2, tn_2=2, tn_3=-2, tn_4=2, λ_1=0.1, λ_2=0.11, λ_3=0.12, λ_4=0.123. The amorphous slab was generated in a box of dimensions 200×50×50 and density 0.4.

For panel (a) of Fig. <ref>, the model (<ref>) was used. For all results, the hopping parameters were set to t_0=1, t_1=1.2, t_2=0, t_3=0, t_4=0, t_5=0, tn_1=-2, tn_2=2 (terms proportional to k to the power of 2 and higher are set to 0). Since the only hopping terms are linear in d, in order to ensure that TRS is broken for this model, a different distance dependence is given for t_1 and t_2: t_1exp(-0.3d) and t_2exp(-d), where d=√(d^2) is the bond length. The amorphous samples are all contained within a cube of 30×30×30 sites, with a density of 0.7, and the crystal samples are all 10×10×10 sites. For the invariant ν_M (<ref>), the numerical integration over the Brillouin zone of the effective Hamiltonian was done over a grid of 15×15 points.

For panel (b) of Fig. <ref>, the model (<ref>) was used. The parameters were set to t_1=0.3, t_2=-0.4, t_3=exp(0.3i), t_4=iexp(0.3i).
The Γ and X points of the model are (0,0,0) and (0,2π,0).§ ALTERNATIVE BULK INVARIANTS In addition to the bulk invariant given in Sec. <ref>, we identify two alternative expressions. §.§ Inversion eigenvalues The inversion operator commutes with the spins at the rotation-invariant points *k=*0 and *k=*∞. Since the SU(2) rotation symmetry commutes with the inversion operator, the inversion eigenvalues come in degenerate pairs in the case of a spin-1/2 representation, and in degenerate groups of 2s+1 for spin-s representations. The difference in parity of the inversion eigenvalue pairs at these rotation-invariant points characterizes the topological phase:ν_I = 1/2[ ι_-(*∞)-ι_-(*0)], ι_-(*k) = μ_-1(⟨n(*k)|ℐ|m(*k)⟩),where |n(*k)⟩ are the occupied states of the effective Hamiltonian H_eff, and μ_λ(A) indicates the multiplicity of the eigenvalue λ in the spectrum of A. We note that in the case of an operator that only has ± 1 eigenvalues, the multiplicity can be expressed through the trace as Tr A = N - 2 μ_-1(A), allowing to rewrite the invariant asν_I = -1/4∑_n∈occ(⟨n(*∞)|ℐ|n(*∞)⟩ - ⟨n(*0)|ℐ|n(*0)⟩),where we used that the total number of occupied bands is the same at *k=*0 and *∞.While we only consider spin-1/2 representations in the main text, in the general case it is possible to resolve the eigenstates at *k = *0 and *∞ based on the spin-representation *S. All states along a line *̂n̂ k connecting *0 and *∞ have continuous rotation symmetry along the *̂n̂ axis, hence the eigenvalues of *̂n̂*S in the occupied subspace are well-defined throughout, and the total number of various spin representations cannot change. The inversion eigenvalues, however, can change in the process, so we can define the set of invariantsν_I^s = 1/2s + 1[ ι_-^s(*∞)-ι_-^s(*0)], ι_-^s(*k) = μ_-1(⟨n_s(*k)|ℐ|m_s(*k)⟩),where we restrict the inversion operator to the subspace corresponding to the spin-s representation spanned by the states |n_s(*k)⟩. This results in a ℤ^ℕ classification, of which the invariant (<ref>) only probes a ℤ subset,ν_I = ∑_s (s+1/2) ν_I^s.This relation also shows that, depending on the spin representation content of the model, not all values of ν_I may be realizable. A remaining question is, whether for general s, ν_I or the set of ν_I^s has a bulk-boundary correspondence in amorphous systems. As we show in the next section (see (<ref>)), it is a different combination of ν_I^s that the mirror Chern invariant probes, nontrivial values of which we expect to protect robust surface states. The simplest continuum model with trivial ν_I (or C_M) and nontrivial ν_I^s has 16 on-site degrees of freedom (4 spin-1/2 and 2 spin-3/2 representations, half of which is inversion-odd), we leave analysis of the surface physics to future work.For the crystalline system described in Sec. <ref> we calculate the analogous eigenvalue parity invariant given by:ν̃_I = 1/2[ ι_-(Γ)+ι_-(X)]mod 4,where ι is the same as in (<ref>). The 4 results from factoring out atomic insulators located at other Wyckoff positions. We note that (<ref>) does not give the full symmetry indicator classification in space group 225 <cit.>, and the ℤ invariant given by the mirror Chern number also remains well defined and contains additional information. 
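As an aside, the trace form of the eigenvalue-parity invariant above lends itself to a direct numerical evaluation. The following is a minimal sketch (ours, not the authors' published code): it assumes the (spin)⊗(orbital) ordering in which the inversion representation U_ℐ=σ_0τ_z given above becomes a Kronecker product, a default of two occupied bands, and that the user supplies the regularised effective Hamiltonians at the two rotation-invariant momenta *k=*0 and *k=*∞.

```python
import numpy as np

# Inversion representation U_I = sigma_0 tau_z from the model appendix above,
# written as a Kronecker product; the (spin) x (orbital) ordering is our assumption.
sigma_0 = np.eye(2)
tau_z = np.diag([1.0, -1.0])
U_I = np.kron(sigma_0, tau_z)

def occupied_states(H, n_occ):
    """Return the n_occ lowest-energy eigenvectors of a Hermitian matrix H as columns."""
    _, vecs = np.linalg.eigh(H)          # eigenvalues come out in ascending order
    return vecs[:, :n_occ]

def nu_I(H_at_zero, H_at_infinity, n_occ=2):
    """Evaluate nu_I = -1/4 sum_occ ( <n(inf)|I|n(inf)> - <n(0)|I|n(0)> ) from the
    effective Hamiltonians at the rotation-invariant momenta k = 0 and k = infinity
    (the latter must be supplied in a regularised form)."""
    parity = lambda V: np.real(np.trace(V.conj().T @ U_I @ V))
    V_zero = occupied_states(H_at_zero, n_occ)
    V_inf = occupied_states(H_at_infinity, n_occ)
    return -0.25 * (parity(V_inf) - parity(V_zero))
```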
§.§ Rotation eigenvalues Another way to formulate the bulk invariant relies on the Chern-number being expressible through the difference in the occupied rotation eigenvalues at the rotation-invariant points *k=*0 and *k=*∞ <cit.>:C = ∑_n∈occ(⟨n(*∞)|S_z|n(*∞)⟩ - ⟨n(*0)|S_z|n(*0)⟩),where S_z is the generator of rotations around the z axis and the Chern-number is calculated in the k_z = 0 plane (other orientations give equivalent results). To formulate the mirror Chern number, we insert -iM_z, which adds a ± 1 prefactor to the mirror-even/odd states:C_M = -1/2∑_n∈occ(⟨n(*∞)|iM_z S_z|n(*∞)⟩ - ⟨n(*0)|iM_z S_z|n(*0)⟩).In general M_z = ℐexp(iπ S_z), in the spin-1/2 case this simplifies to M_z = i ℐσ_z, hence -iM_z S_z = 1/2ℐ. Substituting this, we findC_M = 1/4∑_n∈occ(⟨n(*∞)|ℐ|n(*∞)⟩ - ⟨n(*0)|ℐ|n(*0)⟩) = -ν_I.For general spin, using that ℐ commutes with the spin operators, after some algebra we findC_M = 1/4∑_s (-1)^s-1/2∑_n_s∈occ_s(⟨n_s(*∞)|ℐ|n_s(*∞)⟩ - ⟨n_s(*0)|ℐ|n_s(*0)⟩)= ∑_s (-1)^s+1/2(s+1/2)ν_I^s. As we saw, in the spin-1/2 case studied in detail, Eqs. (<ref>, (<ref>), and (<ref>)) are all equivalent formulations of the same invariant, as demonstrated by their equivalence for different values of the chemical potential [Fig. <ref>(a)].§ AMORPHOUS NETWORK MODELIn order to ensure four-fold coordination of each node of the amorphous network, we generate the network following the method described in Refs. <cit.>, which creates a graph by generating N random lines on a plane, with N chosen from a Poisson distribution whose mean is set to 2 R √(πρ), with ρ the chosen density of the graph and R the outer radius of the network. The angle and offset of the lines is uniformly distributed in [0, 2π) and [0, R] respectively. We define the intersections of each pair of lines as a network node. We ensure the two-in-two-out pattern of propagating modes at each node by orienting the links in an alternating fashion along each of the straight lines. There is no dependence of the scattering matrices on the length of the network links.The graph is cut into an annulus shape by removing all of the nodes beyond the outer radius R and within the inner radius r. This ensures periodic boundary conditions along the polar angle coordinate. In order to maintain four-fold connectivity in the bulk of the graph, the nodes outside of the network that are connected to nodes inside of the network are changed into sinks or sources, that either absorb modes from the network or emit modes to the network. The conductivity of the amorphous network is calculated by g=Gln(R/r)/2π, with G=(e^2/h)∑_i,j| S_ij|^2, S_ij being the matrix element of the scattering matrix that connects the incoming modes originating from external sources beyond the network's outer edge to the outgoing modes exiting the network from its inner edge. A relaxation of the graph for visual clarity is optionally performed by averaging each node position to the center of its neighbors' positions.
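To make the preceding construction concrete, the following is a minimal sketch of its geometric part together with the conductivity formula; it is our illustration rather than the authors' code (function names and the numerical tolerance are ours), and the assembly of the node scattering matrices and the sink/source bookkeeping at the boundary are omitted.

```python
import numpy as np

def amorphous_network_nodes(R=20.0, r=4.0, density=1.0, rng=None):
    """Line-intersection construction: draw N ~ Poisson(2 R sqrt(pi rho)) straight
    lines with uniform angle in [0, 2pi) and offset in [0, R], take the pairwise
    intersections as fourfold-coordinated nodes, and keep those with r < |x| < R."""
    rng = np.random.default_rng() if rng is None else rng
    N = rng.poisson(2 * R * np.sqrt(np.pi * density))
    theta = rng.uniform(0.0, 2 * np.pi, N)       # line orientations
    offset = rng.uniform(0.0, R, N)              # distances from the origin
    nodes = []
    for i in range(N):
        for j in range(i + 1, N):
            # line k is the set {x : (cos theta_k, sin theta_k) . x = offset_k}
            A = np.array([[np.cos(theta[i]), np.sin(theta[i])],
                          [np.cos(theta[j]), np.sin(theta[j])]])
            if abs(np.linalg.det(A)) < 1e-12:    # skip (nearly) parallel lines
                continue
            x = np.linalg.solve(A, np.array([offset[i], offset[j]]))
            if r < np.linalg.norm(x) < R:
                nodes.append(x)
    return np.asarray(nodes)

def annulus_conductivity(S, inner_out_modes, outer_in_modes, R, r):
    """g = G ln(R/r) / (2 pi), with G = sum |S_ij|^2 (in units of e^2/h) taken over
    the block of S connecting incoming modes from sources beyond the outer edge to
    the outgoing modes exiting the network at its inner edge."""
    T = S[np.ix_(inner_out_modes, outer_in_modes)]
    G = np.sum(np.abs(T) ** 2)
    return G * np.log(R / r) / (2 * np.pi)
```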
http://arxiv.org/abs/2310.18400v1
{ "authors": [ "Helene Spring", "Anton R. Akhmerov", "Daniel Varjas" ], "categories": [ "cond-mat.mes-hall", "cond-mat.dis-nn" ], "primary_category": "cond-mat.mes-hall", "published": "20231027180004", "title": "Isotropic 3D topological phases with broken time reversal symmetry" }
Gate-tunable topological superconductivity in a supramolecular electron spin lattice

Rémy Pawlak,^1∗† Jung-Ching Liu,^1† Chao Li,^1† Richard Hess,^1† Hongyan Chen,^2 Carl Drechsel,^1 Ping Zhou,^3 Robert Häner,^3 Ulrich Aschauer,^3,4 Thilo Glatzel,^1 Silvio Decurtins,^3 Daniel Loss,^1 Jelena Klinovaja,^1 Shi-Xia Liu,^3∗ Wulf Wulfhekel,^2 & Ernst Meyer^1

^1Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
^2Physikalisches Institut, Karlsruhe Institute of Technology, Wolfgang-Gaede-Str. 1, 76131 Karlsruhe, Germany
^3Department of Chemistry, Biochemistry and Pharmaceutical Sciences, University of Bern, Freiestrasse 3, 3012 Bern, Switzerland
^4Department of Chemistry and Physics of Materials, University of Salzburg, Jakob-Haringer-Strasse 2A, 5020 Salzburg, Austria
^†These authors equally contributed; ^∗To whom correspondence should be addressed; E-mails: [email protected], [email protected]

We introduce a categorical approach to classifying actions of C^*-tensor categories on -algebras up to cocycle conjugacy. We show that, in this category, inductive limits exist and there is a natural notion of approximate unitary equivalence. Then, we generalise classical Elliott intertwining results to the -equivariant case, in the same fashion as done by Szabó for the group equivariant case in <cit.>.

§ INTRODUCTION

The study of existence and classification of symmetries on operator algebras has been a ubiquitous theme in the field. In the case of von Neumann algebras, this can be traced back to the work of Connes (<cit.>), where he classified automorphisms of the hyperfinite II_1 factor ℛ up to outer conjugacy. Subsequently, Jones classified actions of finite groups on ℛ (<cit.>), while Ocneanu generalised these results to the case of amenable groups acting on ℛ (<cit.>). Later, Popa classified amenable subfactors N⊂ℛ by their standard invariant (<cit.>). Following subsequent reformulations (<cit.>), the standard invariant of a finite index subfactor N⊂ℛ can be understood as a pair (F,Q) with F an action of a rigid C^*-tensor category 𝒞 on N and Q a special object in 𝒞 that allows one to recover the inclusion N⊂ℛ as a generalised crossed product.[Precisely, Q is a Q-system as defined by Longo in <cit.>.] The general setting of -tensor categories represents a unifying framework for studying group actions and more general quantum symmetries arising from subfactor theory.
Following the recent spectacular classification results for group actions on Kirchberg algebras by group equivariant KK-theory (<cit.>), and the development of a -equivariant KK-theory (<cit.>), it is natural to consider to what extent one can classify actions of more general C^*-tensor categories on simple, amenable C^*-algebras. Prior and closely linked to the classification of symmetries of operator algebras was the classification of the operator algebras themselves. First, Connes classified amenable factors in <cit.> with the exception of one type which was later completed by Haagerup (<cit.>). In the case of C^*-algebras, Elliott classified C^*-inductive limits of finite dimensional C^*-algebras by their ordered K-theory in <cit.>. Elliott's methods laid the foundations for a roadmap to a classification of more general simple, amenable C^*-algebras. Following Elliott's strategy and building on decades of work by many mathematicians, the classification programme of simple, amenable -algebras successfully culminated in <cit.>. The general strategy for a classification of operator algebras is to first achieve existence and uniqueness results for morphisms with respect to the proposed classification invariant. Loosely speaking, to construct an isomorphism between two objects A and B in the category of -algebras, by virtue of an existence type statement, one obtains morphisms ϕ:A→ B and ψ:B→ A which induce mutually inverse elements at the level of the chosen abstract invariant. Then, a suitable uniqueness result will give that ψ∘ϕ is equivalent to 𝕀_A and ϕ∘ψ is equivalent to 𝕀_B. In the category of separable -algebras, the required notion of equivalence is approximate unitary equivalence. The final step is toperform a so-called approximate intertwining argument. Essentially, one can tweak the morphisms ϕ and ψ by unitaries until they become mutually inverse isomorphisms (see <cit.> and the introduction of <cit.> for a more detailed breakdown of this roadmap to classification). Furthermore, in <cit.>, Elliott proposes a similar classification strategy for arbitrary objects in more general categories that have a suitable notion of inner automorphisms.Instances of the implementation of Elliott's strategy for the classification of compact (quantum) group actions on C^*-algebras appear in <cit.>, where the actions are assumed to have the Rokhlin property. In <cit.>, Szabó articulates Elliott's proposed strategy in the generality of actions of locally compact groups as an alternative to the Evans–Kishimoto intertwining type arguments (see <cit.>). In his work, Szabó defines an appropriate category of Γ-C^*-algebras, that is C^*-algebras carrying an action of Γ, and produces a vast collection of intertwining results for this category. Szabó's construction is a key ingredient facilitating the recent groundbreaking classificationof amenable actions on Kirchberg algebras in <cit.>.In this paper, we will develop the necessary techniques to perform approximate intertwining arguments for C^*-algebras carrying an action of a tensor category(--algebras). Glimpses of these techniques can be found in <cit.>, where exact intertwining arguments are performed to classify inductive limit actions of fusion categories on AF-algebras. However, for classification, it is often necessary to develop a more general framework which allows to perform approximate intertwining arguments. 
In the setting of -algebras, Elliott used exact intertwining arguments to classify AF-algebras (<cit.>), but more refined approximate versions were needed to classify simple A𝕋-algebras in <cit.>. Approximate intertwining arguments in the setting of actions of tensor category have already appeared in the literature; see <cit.> for an adaptation of the Evans-Kishimoto intertwining argument introduced in <cit.>.Primarily, the categorical framework developed by Szabó in <cit.> provides the conceptual skeleton for our construction. However, unlike in the group action case, a tensor category might not act by automorphisms, so our techniques differ. In <cit.>, Szabó defines a cocycle morphism as a pair consisting of an extendible ^*-homomorphism and a unitary cocycle. In the setting of tensor categories, the action on a -algebra is given by a family of bimodules acting compatibly with respect to the tensor product. Therefore, the cocycles will be given by certain bimodule maps satisfying a family of commuting diagrams. Hence, it is not apparent how to adapt Szabó's arguments. To perform intertwining techniques, we first need to introduce a category whose objects are -C^*-algebras. Multiple notions of morphisms between -C^*-algebras have appeared in the literature (see <cit.> or <cit.> for example). Any such notion has a common flavour: Given a C^*-tensor categoryacting on -algebras A and B via tensor functors (F,J):→_0(A) and (G,I):B→_0(B) respectively, a morphism between (A,F,J) and (B,G,I) is given by an A-B correspondence E and a family of bimodule maps v_X: F(X)⊠ E→ E⊠ G(X) satisfying coherence diagrams.[_0(A) is the category of non-degenerate A-A correspondences.] The intricacies between the different definitions lie in how general correspondences we allow and what structure we expect the maps v_X to have. We choose to work with correspondences which arise from (possibly degenerate) ^*-homomorphisms φ:A→ B and with (possibly non-adjointable) isometries v_X. We will say this data yields a cocycle morphism and we denote the category consisting of --algebras and cocycle morphisms by _. When the acting category is (Γ), and the action factors through automorphisms, we give an explicit formula for the family of bimodule maps {v_X}_X∈ in Example <ref>. In _, we are able to construct inductive limits and define a suitable topology on the space of morphisms. An important challenge in constructing a category of cocycle morphisms is the composition. Composing two cocycle morphisms might not give a cocycle morphism. Therefore, we need to introduce a slightly different composition which will allow us to obtain a category. If the morphisms are non-degenerate, this new composition agrees with the canonical one, and with the one introduced by Szabó in the group action case (<cit.>). Moreover, to avoid the need of restricting to non-degenerate morphisms, the maps v_X are assumed to be isometries. This assumption allows us, unlike in <cit.>, to work in the possibly non-extendible setting, generalising the notion of a cocycle morphism from <cit.> when restricting to twisted group actions. An essential observation for our construction is that we can encode the information of a cocycle morphism into a family of linear maps satisfying certain conditions (see <cit.> where this is done for unital injective cocycle morphisms). Therefore, a cocycle morphism can be equivalently defined as follows. Letbe a -tensor category acting on -algebras A and B via (A,F,J) and (B,G,I) respectively. 
Then a cocycle morphism (ϕ,h): (A,F,J)→ (B,G,I)is given by a ^*-homomorphism ϕ:A→ B and a family of linear maps: h= {h^X: F(X)→ G(X)}_X∈such that for any X,Y∈ * h^X(a xa')=ϕ(a) h^X(x)ϕ(a') for any a,a'∈ A and any x∈ F(X); * for any morphism f∈(X→ Y), G(f)∘ h^X=h^Y∘ F(f);* ϕ(⟨ x , y⟩_A)=⟨ h^X(x) , h^X(y)⟩_B for any x,y∈ F(X); * the following diagram commutes:F(X)⊠ F(Y) [swap]dh^X⊠ h^YrJ_X,YF(X⊗ Y)dh^X⊗ YG(X)⊠ G(Y) rI_X,YG(X⊗ Y);* h^1_:A→ B is given by h^1_(a)=ϕ(a) for any a∈ A. [Note thatdenotes an action on the left, whilean action on the right.] In the case of a group action, we give an explicit formula for the family of linear maps {h^X}_X∈ in Example <ref>. Compared with the usual definition of a cocycle morphism given by pairs (ϕ,v) consisting of a ^*-homomorphism and a family of bimodule maps {v_X}_X∈ (see Definition <ref>), Definition <ref> provides a more direct framework for setting up the intertwining arguments. In particular, to compose two cocycle morphisms, one composes the ^*-homomorphisms, as well as the corresponding linear maps. Furthermore, the natural topology on cocycle morphisms is obtained by considering pointwise differences of the linear maps. When the acting category is semisimple and has countably many isomorphism classes of simple objects (which is assumed for the rest of the introduction), we can phrase convergence in this topology purely in terms of the linear maps. Let (ϕ_λ,h_λ), (ϕ,h):(A,F,J)→ (B,G,I) be cocycle morphisms. Then (ϕ_λ,h_λ) converges to (ϕ,h) if and only if h_λ^X→ h^X pointwise in the norm induced by the right inner product and uniformly over finite sets of () containing 1_.[The collection (𝒞) is a choice of representatives for isomorphism classes of simple objects in .] In general, this topology is coarser than the one used by Szabó in the group action case (<cit.>) (see Example <ref>). However, it is the same when restricted to non-degenerate cocycle morphisms. With this topology in hand, we can then define approximate unitary conjugation for cocycle morphisms. Moreover, unlike Szabó's definition in <cit.>, this is a symmetric relation (see Lemma <ref>), which simplifies the intertwining arguments. This can be formulated as follows.If (ϕ,h),(ψ,l):(A,F,J)→ (B,G,I) are cocycle morphisms, then (ψ,l) is approximately unitarily equivalent to (ϕ,h) if and only if there exists a net of unitaries u_λ∈𝒰(ℳ(B)) such thatmax_X∈ Kl^X(x)-u_λ h^X(x) u_λ^*λ⟶ 0for any finite K⊆() containing 1_, and every x∈ F(X). With the introduced topology at the level of morphism spaces and notion of unitary equivalence, we can place the subcategory of _ consisting of separable --algebras and extendible cocycle morphisms in Elliott's abstract classification framework from <cit.>. Precisely, the quotient category of _ by approximate unitary equivalence is a classification category in the sense of <cit.>. Hence, we get the following theorem.Let (F,J): ↷ A and (G,I): ↷ B be actions on separable -algebras. Let(ϕ, h): (A,F,J) → (B,G,I)and(ψ, l): (B,G,I) → (A,F,J)be two extendible cocycle morphisms such that the compositions (ψ,l)∘ (ϕ, h) and (ϕ,h)∘ (ψ, l) are approximately inner. Then (ϕ, h) and (ψ, l) are approximately unitarily equivalent to mutually inverse cocycle conjugacies. Along with Theorem <ref>, in Section <ref> we use our construction to obtain various -equivariant versions of classical intertwining arguments. 
For instance, we establish a general Elliott two-sided intertwining argument (see <cit.> for a non-equivariant version) as well as an intertwining through reparametrisation in Theorem <ref>.In Section <ref>, the techniques we develop allow us to perfom a -equivariant one-sided intertwining argument (see Theorem <ref>), generalising the classical one-sided intertwining arguments of <cit.> and <cit.>. This result is a key ingredient in forthcoming work of Evington, C. Jones, and the first named author in obtaining a McDuff-type characterisation of equivariant 𝒟-stability of an action of a rigid C^*-tensor category, for strongly self-absorbing 𝒟 (<cit.>). Precisely, -stability of an action is equivalent to the existence of a unital embedding ofinto an appropriate subalgebra of the Kirchberg central sequence algebra of A (in the case of a group action, this subalgebra coincides with the fixed point subalgebra of Kirchberg's central sequence algebra). This generalises the classical result of <cit.> (see also <cit.>) and its group equivariant counterpart (see <cit.>). The remaining part of Section <ref> is concerned with asymptotic versions of the results in Section <ref>.§.§ AcknowledgementsWe would like to thank Stuart White and Samuel Evington for their supervision on this project. We would also like to thank George Elliott and Gabór Szabó for useful discussions on the topic of this paper. Part of this work was completed during the authors' stay at the Fields Institute for Research in Mathematical Sciences for the 'Thematic Program on Operator Algebras and Applications' in Autumn 2023. We thank the Fields Institute and the organisers for the hospitality.The first named author was supported by the Ioan and Rosemary James Scholarship awarded by St John's College and the Mathematical Institute, University of Oxford, as well as by project G085020N funded by the Research Foundation Flanders (FWO). The second named author was supported by the EPSRC grant EP/R513295/1. The authors' stay at the Fields Institute was partially funded by a Special Grant awarded by St John's College, Oxford. The first named author's stay was also partially supported by the Fields Institute while the travel costs of the second named author were supported by the EPSRC grant EP/X026647/1.§ PRELIMINARIEStheoremsection §.§ Hilbert bimodules In this subsection, we collect a few basics on the theory of Hilbert bimodules. We refer the reader to <cit.> for a more detailed exposition. First, we start by recalling the definition of a Hilbert module as introduced by Paschke in <cit.>.Let X be a vector space over ℂ and B be a -algebra. We say that X is a (right-)Hilbert B-module if X is a right B-module equipped with a function ⟨·, ·⟩_B : X × X → B satisfying the following properties: * ⟨·, ·⟩_B is left conjugate linear and right linear.* For any x,y ∈ X and b ∈ B, one has that ⟨ x,yb⟩_B = ⟨ x,y⟩_Bb.* For any x∈ X, ⟨ x,x⟩_B ≥ 0 and ⟨ x,x⟩_B =0 if and only if x=0.* For any x,y∈ X, ⟨ x,y⟩_B = ⟨ y,x⟩_B^*. * X is complete with respect to the norm induced by ⟨ x,x⟩_B^1/2.If X only satisfies the properties <ref>-<ref> above, then we say X is a pre-Hilbert B-module. For x∈ X, we denote by |x|^2 the positive element of B given by ⟨ x,x⟩_B. To obtain a right-Hilbert A-B-bimodule, we need a compatible left action of a -algebra A. (<cit.>) Let A and B be -algebras and X=_AX_B a bimodule over the complex algebras A and B. 
We say that X is a (right-)Hilbert A-B-bimodule if X is a right-Hilbert B-module, and for all a∈ A, the map ϕ(a): x∈ X ↦ ax ∈ X is adjointable, with adjoint ϕ(a)^*=ϕ(a^*). In particular, ϕ : a∈ A→ϕ(a)∈ℒ(X) is a ^*-homomorphism from A to the C^*-algebra ℒ(X) of adjointable maps on X. The map ϕ will be referred to as the left action of A on X. For a Hilbert A-B-bimodule X, we often write the left action of an element a∈ A on a vector x∈ X by a x and the right action of an element b∈ B on x by x b. For a C^*-algebra B, the vector space X=B is a Hilbert B-module with the right B-module structure given by multiplication and the inner product ⟨ a,b⟩_B=a^*b for a,b∈ B. In this case, any ^*-homomorphism ϕ:A→ M(B) induces a right-Hilbert A-B-bimodule which we denote by _ϕB.Let A,B,C be -algebras. If X is a Hilbert A-B-bimodule and E is a Hilbert B-C-bimodule we may form their internal tensor product X⊗ Y that is an Hilbert A-C-bimodule. We sketch this construction and refer to <cit.> for details. To perform the internal tensor product one starts by considering the algebraic tensor product of vector spaces X⊙ E. We identify the elements of the form x b⊙ y with x⊙ b y to form the quotientV=X⊙ Y/span{x b⊙ y-x⊙ b y: x∈ X, y∈ E, b∈ B}.We denote the image of the elementary tensor x⊙ y of X⊙ E in V under the canonical quotient map by x⊠ y. One may define a right C-action and a right C-inner product on V by(x⊠ y) c=x⊠ (yc), ⟨ x⊠ y, z⊠ w⟩_C=⟨ x,⟨ y,z⟩_Bw⟩_C,for any x,z∈ X and z,w∈ E. It follows that V equipped with this C-action and C-valued inner product satisfies <ref>-<ref> of Definition <ref>. We produce a Hilbert C-module X⊠ E by completing V under the norm defined by the inner product. Moreover, one can induce a left A action on X⊠ E througha (x⊠ y)=(a x)⊠ yfor all a∈ A, x∈ X and y∈ E. This equips X⊠ E with the structure of a Hilbert A-C-bimodule. If the bimodules are given by ^*-homomorphisms into the multiplier algebra as in Example <ref>, one may get greater insight into the structure of their tensor product. First we recall the definition of non-degenerate and extendible ^*-homomorphisms. Let A and B be -algebras and ϕ:A→ℳ(B) a ^*-homomorphism. Then ϕ is said to be non-degenerate if ϕ(A)B is dense in B.If ϕ:A→ℳ(B) is non-degenerate, then there exists a unital ^*-homomorphism ϕ̃:ℳ(A)→ℳ(B) extending ϕ (<cit.>). Let A and B be -algebras. A ^*-homomorphism ϕ:A→ℳ(B) is called extendible if for any increasing approximate unit e_λ∈ A, the net ϕ(e_λ)∈ℳ(B) converges strictly to a projection p∈ℳ(B).[Any non-degenerate ^*-homomorphism ϕ: A→ M(B) is extendible as for an approximate unit e_λ∈ A the net ϕ(e_λ) converges strictly to 1_ℳ(B) (see e.g. <cit.>).] As discussed in <cit.>, ϕ then factorises through ℳ(pBp)≅ pℳ(B)p⊆ℳ(B), with the ^*-homomorphism ϕ_p:A→ℳ(pBp) now non-degenerate. In this case, we let ϕ^†:ℳ(A)→ℳ(B) be the unital ^*-homomorphism defined by ϕ^†(a)=ϕ̃_̃p̃(a)+(1_M(B)-p) for all a∈ℳ(A).The proposition below yields a more explicit form for the tensor products of bimodules arising from ^*-homomorphisms into the multiplier algebra as in Example <ref>. If the homomorphisms are non-degenerate, then the result is folklore (see <cit.>).Suppose that A,B, and C are -algebras and ϕ:A→ℳ(B), ψ:B→ℳ(C) are ^*-homomorphisms with ψ extendible, but possibly degenerate. If we denote the unital extension of ψ by ψ^†:ℳ(B)→ℳ(C), then _ϕB⊠_ψC≅_ψ^†∘ϕψ(B)C=_ψ^†∘ϕψ(B)C.First, note that ψ(B)C=ψ(B)C by Cohen's factorisation (see <cit.>). 
Let T:_ϕB⊠_ψC→_ψ^†∘ϕψ(B)C be the continuous linear map given by T(b⊠ c)=ψ(b)c for any b∈ B and c∈ C. We claim that T is a bimodule isomorphism. Using the definition of the inner product, a standard check shows that T is a well-defined bimodule map. Taking (η_λ)_λ∈Λ to be an approximate unit for B let S:_ψ^†∘ϕψ(B)C→_ϕB⊠_ψC be given by S(c)=lim_λη_λ⊠ c for any c∈ C. The map S is well-defined precisely because ψ is extendible. Indeed, for λ,μ∈Λ|(η_λ-η_μ)⊠ c|^2 =⟨ c, ⟨η_λ-η_μ,η_λ-η_μ⟩_Bc⟩_C=⟨ c, (η_λ-η_μ)^*(η_λ-η_μ) c⟩= ⟨ψ(η_λ-η_μ)c,ψ(η_λ-η_μ)c⟩.Therefore, S is well defined if and only if ψ(η_λ-η_μ)c converges to 0 for any c∈ C, which is equivalent to saying that ψ(η_λ) converges strictly in ℳ(C). Hence S is well-defined. Moreover, it can be seen that S is the adjoint of T and hence continuous. Now for any b∈ B and c∈ C,S(T(b⊠ c))=lim_n→∞η_n⊠ψ(b)c=lim_n→∞η_n⊠ b c=lim_n→∞η_nb⊠ c=b⊠ c.Then, by continuity, S∘ T is the identity on the domain of T. Therefore, T is injective. As T is surjective by construction, it follows that T is a bimodule isomorphism.By uniqueness of the adjoint, the map S:_ψ^†∘ϕψ(B)C→_ϕB⊠_ψC of the proof of Proposition <ref> given by S(c)=lim_λη_λ⊠ c for any c∈ C and a choice of approximate unit (η_λ)_λ∈Λ for B, does not depend on the choice of the approximate unit η_λ. Let A be a C^*-algebra. A Hilbert A-B-bimodule X is called non-degenerate if XB is dense in X and AX is dense in X. Note that if X is a right-Hilbert B-module then XB is always dense in X by the argument in <cit.>. Therefore, a Hilbert A-B-bimodule E may only fail to be non-degenerate if AX is not dense in X.We end this subsection by showing that if A is a -algebra and E is a non-degenerate Hilbert A-bimodule, then we can obtain a well-defined action of ℳ(A) on E. Let (η_λ)_λ∈Λ be an approximate unit of A and let L_E:E→ A⊠ E and R_E:E→ E⊠ A be the A-bimodule maps given by L_E(x)=lim_λη_λ⊠ x and R_E(x)=lim_λx⊠η_λ for all x∈ E. Note that, since E is non-degenerate, L_E and R_E are unitary bimodule isomorphisms. This follows similarly to the proof of Proposition <ref>. The inverses of L_E and R_E are given by their adjoints L_E^-1(a⊠ x)=a x and R_E^-1(x⊠ a)=x a, for all a∈ A and x∈ E respectively. Let A be a -algebra and E be a non-degenerate Hilbert A-bimodule. Then one may extend the left and right actions of A on E to left and right actions of ℳ(A) on E. These extended actions equip E with the structure of a Hilbert ℳ(A)-bimodule.For any v∈ℳ(A) and any x∈ E, let us define v x=L_E^-1(v L_E(x))x v = R_E^-1(R_E(x) v).We claim that these formulae define left and right actions of ℳ(A) on E. Moreover, it is clear that (<ref>) restricted to A coincides with the A-bimodule structure of E.First, note that the left action of ℳ(A) on A⊠ E is given by left multiplication on A. Similarly, the right action of ℳ(A) on E⊠ A is given by right multiplication on A. Since L_E, R_E, and their inverses are bimodule maps, it is straightforward to see that the formulae in (<ref>) define left and right actions of ℳ(A) on E. To see that E with its right A-valued inner product is a Hilbert ℳ(A)-bimodule it suffices to check that the right ℳ(A)-action commutes with the inner product and the left ℳ(A)-action consists of adjointable operators. First, for x,y∈ E and v∈ℳ(A) we have that⟨ y,x v⟩_A=⟨ y, R_E^-1(R_E(x) v)⟩_A=lim_λlim_μ⟨ y⊠η_λ,x⊠η_μv⟩_A=lim_λ⟨η_λ,⟨ y,x⟩_Av⟩_A=⟨ y,x⟩_A v, so the right ℳ(A)-action commutes with the right inner product. 
Moreover, the operator of left multiplication by v∈ℳ(A) has as an adjoint the operator of left multiplication by v^*. Indeed left multiplication by v^* is the adjoint of the operator of left multiplication by v on the Hilbert A-module A so for x,y∈ E⟨ v x,y⟩_A=⟨ L_E^-1(v L_E(x)),y⟩_A=⟨ L_E(x),v^* L_E(y)⟩_A =⟨ x,v^* y⟩_Aas required. §.§ CorrespondencesA reformulation of the theory of Hilbert bimodules is the language of C^*-correspondences.Let A,B be -algebras. An A-B correspondence is a ^*-homomorphism ϕ:A→ℒ(X_B)into the adjointable operators of a Hilbert B-module X_B. We denote the collection of A-B correspondences by (A,B). Note that any A-B correspondence induces a right-Hilbert A-B-bimodule and vice versa. If ϕ:A→ℒ(X_B) is an A-B correspondence, then X_B becomes a right-Hilbert A-B-bimodule, with the left action given by ϕ. We will often denote this bimodule by _ϕX, forgetting the right B-action. Conversely, given a right-Hilbert A-B-bimodule X, the left action by A induces an A-B correspondence. Therefore, we will freely flip between the two pictures. One may compose correspondences through the tensor product of bimodules. Precisely, let X be aHilbert A-B bimodule inducing ϕ∈(A,B), and let E be a Hilbert B-C bimodule inducing ψ∈(B,C). Then X⊠ E gives a Hilbert A-C bimodule which induces an element in (A,C) denoted by ψ∘ϕ. Although the theories of bimodules and correspondences are equivalent, we sometimes choose to work with correspondences as the composition resembles composition of ^*-homomorphisms between -algebras in a covariant manner.§.§ -tensor categoriesThroughout this section we will assume that the reader is familiar with the standard language of category theory. For a categorywe will use capital letters e.g. X,Y, and Z to denote objects of the category. The space of morphisms between two objects X,Y∈ will be denoted by (X,Y). All categories in this section will be ℂ-linear, that is that the space of morphisms (X,Y) between any two objects X,Y is a ℂ-vector space and that the composition of morphisms yields a bilinear map (X,Y)×(Y,Z)→(X,Z) for any X,Y,Z∈. We start by defining -categories. (see <cit.>) A ℂ-linear category 𝒞 is said to be a C^*-category if it is equipped with conjugate linear mappings ^*:(X,Y) →(Y,X) for every X,Y ∈𝒞 such that * ϕ^** = ϕ for all ϕ∈(X,Y); * (ϕ∘ψ)^* = ψ^* ∘ϕ^* for all ψ∈(X,Y), ϕ∈(Y,Z);* The function ·:(X,Y) → [0,∞] given byϕ^2 = sup{λ > 0: ϕ^* ∘ϕ - λ𝕀_X }is a complete norm on (X,Y); * ϕ∘ψ≤ϕψ for all ψ∈(X,Y), ϕ∈(Y,Z);* ϕ^* ∘ϕ = ϕ^2 for all ϕ∈(X,Y);* For all ϕ∈(X,Y), ϕ^* ∘ϕ is a positive element of the C^*-algebra (X,X). Letand 𝒟 be -categories. A functor F:→ D is called a -functor if the induced mappings(X,Y)↦(F(X),F(Y)) are ℂ-linear and ^*-preserving. A natural transformation ν:F→ G between -functors is called an isometry if ν_X^*ν_X=𝕀_F(X) for all X∈. Moreover, ν is called a unitary if it is a surjective isometry in which case ν_Xν_X^*=𝕀_G(x) for all X∈. We are interested in C^*-categories that admit tensor product structures. 
(see for example <cit.>) A C^*-tensor category is a C^*-category 𝒞 together with a -linear bifunctor -⊗-:𝒞×𝒞→𝒞, a distinguished object 1_𝒞∈ and unitary natural isomorphisms α_X,Y,Z : (X ⊗ Y) ⊗ Z → X ⊗ (Y ⊗ Z), λ_X : (1_𝒞⊗ X) → X, ρ_X : ( X ⊗ 1_𝒞) → X, such that (ϕ⊗ψ)^* = (ϕ^* ⊗ψ^*) and the following diagrams commute for any X,Y,Z,W∈ [column sep=0.3em] ((W⊗ X)⊗ Y)⊗ Z dlα_W,X,Y⊗id_Zdrrα_W⊗ X,Y,Z (W⊗(X⊗ Y))⊗ Z dα_W,X⊗ Y,Z(W⊗ X)⊗ (Y⊗ Z) dα_W,X,Y⊗ ZW⊗ ((X⊗ Y)⊗ Z) rrrid_W⊗α_X,Y,ZW⊗ (X⊗(Y⊗ Z)),(X⊗ 1_𝒞)⊗rrα_X,1,Ydr[swap]ρ_X⊗𝕀_Y Y X⊗ (1_𝒞⊗ Y)dl𝕀_X⊗λ_YX⊗ Y.Moreover,is said to be semisimple if any object X∈ can be decomposed uniquely as a finite direct sum X≅⊕_i X_i with X_i∈ satisfying (X_i,X_i)≅ℂ. We call the structure morphism α as in Definition <ref> associated to a C^*-tensor category its associator and the maps λ and ρ the unitors. We call an object X∈ such that (X,X)≅ℂ irreducible or simple. For a C^*-tensor category , we will denote by () a collection of isomorphism class representatives for simple objects in . Let Γ be a countable, discrete group. We denote by (Γ) the semisimple C^*-tensor category whose objects are finite-dimensional Γ-graded Hilbert spaces, i.e. finite-dimensional Hilbert spaces ℋ endowed with a decomposition ℋ=⊕_g∈Γℋ_g. The morphisms are linear maps that preserve the Γ-grading. The tensor product is the usual Hilbert space tensor product with the grading defined by(ℋ⊗𝒦)_g=⊕_h∈Γℋ_g⊗𝒦_g^-1h. The isomorphism classes of simple objects in this category are indexed by group elements in Γ. We denote these graded Hilbert spaces by ℂ_g where(ℂ_g)_h=ℂ if g=h,0 otherwise.For ω∈ Z^3(Γ,) a normalised 3-cocycle, the category (Γ,ω) is defined exactly as is (Γ) but with associators now given byα_ℋ,𝒦,ℒ:(ξ⊠η)⊠μ↦ω(g,h,k) ξ⊠(η⊠μ)for ξ∈ℋ_g,η∈𝒦_h and μ∈ℋ_k.We now introduce the most important example for our purposes.Let A be a C^*-algebra. We denote by _0(A) the C^*-tensor category whose objects are non-degenerate A-A correspondences and whose morphisms are adjointable intertwiners of the underlying Hilbert bimodules. The tensor product of two A-A correspondences φ with ψ is given by their composition φ∘ψ. The tensor identity of _0(A) is given by the identity homomorphism 𝕀_A, the associator is given by the rebracketing morphism associated to the underlying tensor product of Hilbert bimodules. In general, (A) is not a C^*-tensor category as there is no tensor unit on non-degenerate correspondences. It is a non-unital C^*-tensor category, this is a weakening of Definition <ref> which omits the necessity of a tensor unit.§.§ Szabó's cocycle categoryBefore we begin our discussion on actions of -tensor categories, we recall the case of a twisted action by a second-countable locally compact group Γ. We shall later identify this, in the case when Γ is countable discrete, with actions of the C^*- tensor category (Γ).Let Γ be a locally compact group and A be a -algebra. A twisted action of Γ on A is a pair (α,𝔲), where α:Γ→(A) is a point-norm continuous map, and 𝔲:Γ×Γ→𝒰(ℳ(A)) is a strictly continuous map satisfying α_1=𝕀_A, (𝔲_s,t)∘α_s∘α_t= α_stand𝔲_s,1=𝔲_1,s= 1, 𝔲_r,stα_r(𝔲_s,t)=𝔲_rs,t𝔲_r,sfor all r,s,t∈Γ.Note that the formulae above differ slightly from the definition of a twisted action in <cit.>. In fact, the sole difference is that our unitary cocycles 𝔲_s,t for s,t∈Γ are the adjoints of the cocycles in <cit.>. The reason for this change of conventions will be discussed in Section <ref>.A triple (A,α,𝔲) as above is called a twisted Γ--algebra. If the cocycle is trivial i.e. 
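As a sanity check of these conventions (this example is ours and is not taken from the text), consider an inner twisted action: take a strictly continuous family of unitaries u:Γ→𝒰(ℳ(A)) with u_1=1, set α_g=Ad(u_g) and 𝔲_s,t=u_st(u_su_t)^*, and read the first displayed condition as Ad(𝔲_s,t)∘α_s∘α_t=α_st, where Ad(u)a=uau^*. Then:

```latex
\begin{align*}
  \mathrm{Ad}(\mathfrak{u}_{s,t})\circ\alpha_s\circ\alpha_t
     &= \mathrm{Ad}\bigl(u_{st}(u_s u_t)^*\, u_s u_t\bigr)
      = \mathrm{Ad}(u_{st}) = \alpha_{st},\\
  \mathfrak{u}_{r,st}\,\alpha_r(\mathfrak{u}_{s,t})
     &= u_{rst}(u_r u_{st})^*\, u_r u_{st}(u_s u_t)^* u_r^*
      = u_{rst}(u_r u_s u_t)^*
      = u_{rst}(u_{rs}u_t)^*\, u_{rs}(u_r u_s)^*
      = \mathfrak{u}_{rs,t}\,\mathfrak{u}_{r,s},
\end{align*}
```

and 𝔲_s,1=𝔲_1,s=1 follows from u_1=1, so all the conditions of the definition are satisfied.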
𝔲_s,t=1 for all s,t∈Γ, then (A,α) is said to be a Γ--algebra.Let (α,𝔲): Γ↷ A and (β,𝔳): Γ↷ B be two twisted actions on -algebras A and B respectively.* A cocycle representation from (A,α,𝔲) to (B,β,𝔳) is a pair (ψ,v), where ψ: A→ℳ(B) is an extendible ^*-homomorphism and v: Γ→𝒰(ℳ(B)) is a strictly continuous map such that β_g∘ψ = (v_g)∘ψ∘α_gandψ^†(𝔲_g,h) = v_gh^*𝔳_g,hβ_g(v_h)v_gfor all g,h∈Γ. * A cocycle morphism from (A,α,𝔲) to (B,β,𝔳) is a cocycle representation (ψ,v) as above, with the additional requirement that ψ(A)⊆ B. Due to our change of conventions when defining twisted actions we need to change the definition of a cocycle representation. The formula in (<ref>) differs from <cit.> precisely by taking adjoints. As shown in <cit.>, there exists a category with objects being twisted Γ--algebras and morphisms being cocycle morphisms with composition defined in <cit.>. This category is denoted by _Γ,t (<cit.>).Our attention will now focus on generalising this construction to actions of semisimple -tensor categories.§ ACTIONS OF -TENSOR CATEGORIES For -tensor categories 𝒞 and 𝒟, F:𝒞→𝒟 is said to be a -tensor functor if it is a ^*-functor such that F(1_)=1_𝒟 there exists a unitary natural isomorphism J_X,Y:F(X)⊠ F(Y)→ F(X⊗ Y) such that[column sep=5em] (F(X)⊠ F(Y))⊠ F(Z)[r,"α_F(X),F(Y),F(Z)"][d,"J_X,Y⊠𝕀_F(Z)"]F(X)⊠ (F(Y) ⊠ F(Z)) [d,"𝕀_F(X)⊠ J_Y,Z"] F(X ⊗ Y)⊠ F(Z) [d,"J_X ⊗ Y,Z"] F(X)⊠ F(Y ⊗ Z) [d,"J_X,Y ⊗ Z"]F((X ⊗ Y) ⊗ Z )[r,"F(α_X,Y,Z)"] F(X ⊗ (Y ⊗ Z))commutes for all X,Y,Z∈𝒞. In the following definition we denote by _0^(A) the full subcategory of _0(A) consisting of bimodules with countable dense subsets.A -tensor category 𝒞 is said to act on a -algebra A if there exists a -tensor functor F:𝒞→_0(A). If A is separable we further impose that F is valued in _0^(A). We will often denote this by 𝒞F↷ A or by the triple (A,F,J), where J={J_X,Y}_X,Y∈ is the natural isomorphism associated with the functor F. In this case, we say that the triple (A,F,J) is a --algebra.In the literature, the main interest is actions of rigid C^*-tensor categories on C^*-algebras (see e.g. <cit.>). This is because rigid C^*-tensor categories axiomatise the standard invariant in subfactor theory and can be thought of as the mathematical objects encoding the symmetry in finite index inclusions of C^*-algebras. Ifis a rigid C^*-tensor category, A is separable, and (F,J):→_0(A) is a C^*-tensor functor, then the bimodule associated to the correspondence F(X) for any X∈ has a countable dense subset by <cit.> and hence (F,J) automatically falls into _0^(A).Let Γ be a a countable discrete group and let (α,𝔲):Γ→(A) be a twisted action. The pair (α,𝔲) will induce a C^*-tensor functor (α,𝔲):(Γ)→_0(A) by setting α(ℂ_g)=_α_gA,𝔲_ℂ_g,ℂ_h(a⊠ b)=𝔲_g,h^*α_g(a)b.The functor may then be extended by linearity to all of (Γ) in a similar manner to <cit.>. In general, actions of (Γ) on A correspond to cocycle actions of Γ on A⊗𝕂. If A is a -algebra, then its sequence algebra A_∞ is defined byA_∞=ℓ^∞(ℕ,A)/{(a_n)_n≥ 1 : lim_n→∞a_n=0}. We will end this section by showing that if 𝒞F↷ A is an action on a separable -algebra A, then we can induce an action ofon its sequence algebra A_∞. Suppose that F:→_0^(A) is a -tensor functor. Our goal is to build a -tensor functor F_∞:→_0(A_∞).For any X∈, we can view F(X) as a non-degenerate Hilbert A-bimodule. Define F_∞^(0)(X)= ℓ^∞(ℕ,F(X))/ {(ξ_n)_n≥ 1 : lim_n→∞ξ_n=0},with the left and right actions of A_∞ given pointwise. 
Precisely, for any a∈ A_∞ and any ξ∈ F_∞^(0)(X) represented by (a_n)_n≥ 1 and (ξ_n)_n≥ 1, we letaξ = (a_nξ_n)_n≥ 1ξ a=(ξ_n a_n)_n≥ 1.Similarly, for any ξ,η∈ F_∞^(0)(X) we define ⟨ξ,η⟩_A_∞ = (⟨ξ_n,η_n⟩_A)_n≥ 1.F_∞^(0)(X) is a right pre-Hilbert A_∞-module.First, we need to check that the formulae in (<ref>) are well-defined. For this, suppose the sequences (a_n)_n≥ 1 and (a_n')_n≥ 1 induce the same element in A_∞, and let ξ∈ F_∞^(0)(X) be represented by the sequence (ξ_n)_n≥ 1. Then, a direct calculation shows that⟨ξ_n (a_n-a_n'), ξ_n (a_n-a_n')⟩_A≤⟨ξ_n,ξ_n⟩_Aa_n-a_n'^2_A,which converges to 0 as n→∞. Exactly the same calculation shows that if (ξ_n)_n≥ 1 and (ξ_n')_n≥ 1 induce the same element in F_∞^(0)(X) and (a_n)_n≥ 1∈ A_∞, then (ξ_n a_n)_n≥ 1=(ξ_n' a_n)_n≥ 1 as elements in F_∞^(0)(X). Thus, (<ref>) gives a well-defined right action of A_∞. We now check that (<ref>) gives a well-defined right inner product. First, if (ξ_n)_n≥ 1 and (ξ_n')_n≥ 1 induce ξ, and (η_n)_n≥ 1 and (η_n)_n≥ 1' induce η, then⟨ξ_n,η_n⟩_A- ⟨ξ_n',η_n'⟩_A= ⟨ξ_n-ξ_n',η_n⟩_A+⟨ξ_n',η_n-η_n'⟩_A,which converges to 0 by Cauchy-Schwarz. Moreover, the sequences (⟨ξ_n,η_n⟩_A)_n≥ 1 and (⟨ξ_n',η_n'⟩_A)_n≥ 1 are bounded, so induce elements in A_∞.Therefore, the function ⟨·, ·⟩_A_∞: F_∞^(0)(X)× F_∞^(0)(X)→ A_∞ is well-defined. It is now straightforward to check that this function is right linear, left conjugate linear, and antisymmetric since all this properties are satisfied pointwise for each n∈ℕ. Finally, it is clear that ⟨ξ,ξ⟩_A_∞≥ 0 and that ⟨ξ,ξ⟩_A_∞ = 0 if and only if ⟨ξ_n,ξ_n⟩_A converges to 0 i.e. ξ=0 in F_∞^(0)(X). Thus, F_∞^(0)(X) is a right pre-Hilbert A_∞-bimodule.We define F_∞(X) to be the completion of F_∞^(0)(X) with respect to the norm induced by the right inner product.F_∞(X) is a non-degenerate Hilbert A_∞-bimodule.As in the proof of Lemma <ref> we have that (<ref>) gives a well-defined left action by A_∞. Since the left action of A on F(X) is adjointable, the left action of (a_n)_n≥ 1∈ A_∞ is an adjointable operator with adjoint given by the action of (a_n^*)_n≥ 1. Moreover, F_∞(X) is non-degenerate for any X∈ as F(X) is.Consider the functor F_∞:→_0(A_∞) defined by sending any X∈ to the correspondence induced by the Hilbert A_∞-bimodule F_∞(X), and any f∈(X,Y) for X,Y∈ to the intertwiner defined on F_∞^(0)(X) by F_∞(f)((ξ_n)_n≥ 1)=(F(f)(ξ_n))_n≥ 1 (this has a unique extension to F_∞(X) as it is bounded on F_∞^(0)(X)). Also, one can define J_X,Y^∞:F_∞(X)⊠ F_∞(Y)→ F_∞(X⊗ Y) as the unique continuous extension of the mapJ_X,Y^∞(ξ⊗η)=J_X,Y(ξ_n⊗η_n),for any ξ∈ F_∞^(0)(X), any η∈ F_∞^(0)(Y) (that this is well defined follows in a similar fashion as the arguments in the proof of Lemma <ref>). The pair (F_∞, J_∞) yields the desired action ofon A_∞. The map F_∞:→_0(A_∞) defined above is a C^*-tensor functor and soacts on A_∞ via the triple (A_∞, F_∞, J^∞).It follows from construction that F_∞ is a C^*-functor. As J_X,Y are unitary isomorphisms for X,Y∈, it follows that J_X,Y^∞ are unitary isomorphisms with adjoint the continuous extension of the map(ξ_n⊗η_n)_n≥ 1↦ (J_X,Y^*(ξ_n⊗η_n))_n≥ 1for (ξ_n)_n≥ 1∈ F_∞^(0)(X) and (η_n)_n≥ 1∈ F_∞^(0)(Y). The naturality of J^∞ and that J^∞ satisfies commuting diagrams as in (<ref>) follows from direct computations when restricting to the dense pre-Hilbert bimodules F_∞^(0)(X). A density argument shows that these are satisfied on F_∞.§ THE GENERALISED COCYCLE CATEGORY In this section we introduce the category of --algebras for which we will later perform intertwining arguments. 
Throughout the rest of this paperis always assumed to be a -tensor category. Recall the definition of an action ofon a -algebra A from Definition <ref>.Let 𝒞F↷ A and 𝒞G↷ B be actions of 𝒞 on -algebras A and B.* A correspondence morphism from (A, F, J) to (B, G, I) is a pair (ϕ,{v_X}_X∈𝒞), where ϕ:A→ℒ(X_B) is an A-B correspondence and {v_X:ϕ∘ F(X)→ G(X)∘ϕ}_X∈ is a natural family of A-B-bimodule maps such that v_X is an isometry (not necessarily adjointable) for any X∈.[By an isometry, we mean a map which preserves the norm. By the polarisation identity, it is equivalent to assume that it preserves the inner product.] Moreover, for all X,Y∈𝒞 the following pentagon diagram commutes max width= ϕ∘ F(Y)∘ F(X)drJ_X,Y⊠𝕀_ϕ[swap]dl𝕀_F(X)⊠v_YG(Y)∘ϕ∘ F(X) [swap]ddv_X⊠𝕀_G(Y)ϕ∘ F(X⊗ Y)ddv_X⊗ Y G(Y)∘ G(X)∘ϕrr𝕀_ϕ⊠ I_X,Y G(X⊗ Y)∘ϕ,and v_1_:A⊠_ϕX_B→_ϕX_B⊠ B≅_ϕX_B is given by v_1_(a⊠ x)=ϕ(a)x for any a∈ A and x∈ X_B. For convenience, we write (ϕ,v), where v denotes the collection of maps {v_X}_X∈. [The notation of the maps in (<ref>) denotes the tensor product of bimodules, which is equivalent to composition of correspondences.] * A cocycle representation (ϕ,{v_X}_X∈𝒞):(A,F,J)→ (B,G,I) is a correspondence morphism for which we further require that ϕ:A→ℳ(B) is a ^*-homomorphism. * A cocycle morphism (ϕ,{v_X}_X∈𝒞):(A,F,J)→ (B,G,I) is a cocycle representation for which we further require that ϕ:A→ B is a ^*-homomorphism.By naturality, if 𝒞 is semisimple, any -equivariant structure is uniquely determined by its values on (). In particular, for any cocycle morphism, the family of maps {v_X}_X∈ is uniquely determined by the family of maps {v_X}_X∈().In the case of group actions, Definition <ref> recovers Szabó's notion of a cocycle morphism (see Definition <ref>). Suppose (A,α) and (B,β) are actions of a group Γ on -algebras A and B. Consider them as actions of (Γ) as in Example <ref>. Let (ϕ,v):(A,α)→ (B,β) be an extendible cocycle morphism as in Definition <ref> with v_g being adjointable for all g∈Γ. Fix g∈Γ and let f_g:=T_g'∘v_ℂ_g∘ S_g: _ϕ∘α_gB→_β_g∘ϕB, where S_g:_ϕ∘α_gB→_α_gA⊠_ϕB is given by S_g(b)=lim_λξ_λ⊠ b for any b∈ B, where ξ_λ is an approximate unit for A and T_g':_ϕB⊠_β_gB→_β_g∘ϕB is given by T_g'(b⊠ c)=β_g(b)c for b,c∈ B.[Note that S_g is well-defined by Proposition <ref>.] Moreover, let T_g:_α_gA⊠_ϕB→_ϕ∘α_gB be given by T_g(a⊠ b)=ϕ(a)b for any a∈ A and any b∈ B.Since f_g is an adjointable bimodule map, it follows that f_g(b)=u_gb for any b∈ B, for some u_g∈ℳ(B).[Note that T_g is adjointable as ϕ is extendible (see Proposition <ref>). Thus, f_g is a composition of adjointable maps.] In particular, the equality f_g(a b)=a f_g(b) implies thatβ_g(ϕ(a))u_g=u_gϕ(α_g(a))for all a∈ A. This gives (<ref>).As S_g∘ T_g is the identity map, it follows that the diagram_α_gA⊠_ϕB [swap]dT_grv_ℂ_g _ϕB⊠_β_gB _ϕ∘α_gB rf_g _β_g∘ϕBu(T_g')^-1 commutes. Hence,v_ℂ_g(a⊠ b)=lim_λη_λ⊠u_gϕ(a)b,for any a∈ A, b∈ B and η_λ an approximate unit for B. Following the pentagon diagram for v, one gets (<ref>). Conversely, if (ϕ,u):(A,α)→ (B,β) is a cocycle morphism as in Definition <ref>, then define f_g:_ϕ∘α_gB→_β_g∘ϕB by f_g(b)=u_gb for all b∈ B and v_ℂ_g be given by (<ref>). Note that (<ref>) implies that v_ℂ_g is a bimodule map, while (<ref>) gives the pentagon diagram for v_ℂ_g. Hence (ϕ, v) yields a cocycle morphism in the sense of Definition <ref>.Note that, unlike in <cit.>, we do not require ϕ∈(A,B) to be extendible. 
In fact, this is precisely the reason why we consider the maps v_X to be isometries (possibly non-adjointable) instead of unitaries. For example, the map v_1_:A⊠_ϕB→_ϕB⊠ B≅_ϕB is given by v_1_(a⊠ b)=ϕ(a)b for all a∈ A and b∈ B. Therefore, it is not surjective unless ϕ is non-degenerate. Moreover, it might not be adjointable if ϕ is not extendible as seen in the proof of Proposition <ref>.Furthermore, the morphisms of Definition <ref> fit into the -equivariant -theory developed in <cit.> (see <cit.>). In <cit.> a correspondence morphism is instead called a -Hilbert A-B-bimodule and a cocycle morphism is called a cocycle--*-homomorphism. We now define composition formulae for the various notions of morphisms in Definition <ref>. Using the standard composition between correspondences, we can define composition of correspondence morphisms in the obvious way. Let 𝒞F↷ A, 𝒞G↷ B, and 𝒞H↷ Cbe actions of 𝒞 on -algebras A, B, and C respectively. If (ϕ,v):(A,F,J)→ (B,G,I) and (ψ,w):(B,G,I)→ (C,H,K) are correspondence morphisms, their composition is denoted by (ψ∘ϕ, w∘v), where (w∘v)_X=(𝕀_ϕ⊠w_X)∘(v_X⊠𝕀_ψ).By combining the pentagon diagrams for v and w, one obtains the following lemma. With the notation above, (ψ∘ϕ,w∘v):(A,F,J)→ (C,H,K) is a correspondence morphism which denotes the composition of (ϕ,v) and (ψ,w). It is immediate to see that (w∘v)_X is a natural A-C bimodule map, that it is an isometry, and that (w∘v)_1_(a⊠ x)=ψ(ϕ(a))x. According to Definition <ref>, it remains to check that (ψ∘ϕ,w∘v) satisfies the diagram max width= ψ∘ϕ∘ F(Y)∘ F(X)drJ_X,Y⊠𝕀_ψ∘ϕ[swap]dl𝕀_F(X)⊠(w∘v)_YH(Y)∘ψ∘ϕ∘ F(X) [swap]dd(w∘v)_X⊠𝕀_H(Y)ψ∘ϕ∘ F(X⊗ Y)dd(w∘v)_X⊗ Y H(Y)∘ H(X)∘ψ∘ϕrr𝕀_ψ∘ϕ⊠ K_X,Y H(X⊗ Y)∘ψ∘ϕ. Since (ϕ,v) and (ψ,w) are correspondence morphisms, we can combine their respective diagrams. Precisely, we will tensor on the right with 𝕀_ψ the diagram corresponding to (ϕ,v)and we will tensor on the left with 𝕀_ϕ the diagram of (ψ,w). Putting them together we get the following commuting diagrammax width= ψ∘ϕ∘ F(Y)∘ F(X)drJ_X,Y⊠𝕀_ϕ⊠𝕀_ψ[swap]dl𝕀_F(X)⊠v_Y⊠𝕀_ψψ∘ G(Y)∘ϕ∘ F(X) ddv_X⊠𝕀_G(Y)⊠𝕀_ψψ∘ϕ∘ F(X⊗ Y)ddv_X⊗ Y⊠𝕀_ψψ∘ G(Y)∘ G(X)∘ϕ[swap]dd𝕀_ϕ⊠𝕀_G(X)⊠w_Yrr𝕀_ϕ⊠ I_X,Y⊠𝕀_ψψ∘ G(X⊗ Y)∘ϕdd𝕀_ϕ⊠w_X⊗ Y H(Y)∘ψ∘ G(X)∘ϕ[swap]dr𝕀_ϕ⊠w_X⊠𝕀_H(Y) H(X⊗ Y)∘ψ∘ϕ H(Y)∘ H(X)∘ψ∘ϕ[swap]ur𝕀_ψ∘ϕ⊠ K_X,Y.It now becomes apparent that the right half of (<ref>) is the same as the composition of the two most right arrows in (<ref>). Moreover, the map from ψ∘ϕ∘ F(Y)∘ F(X) to H(Y)∘ H(X)∘ψ∘ϕ obtained by following the downward maps in the left half of (<ref>) is precisely the map obtained by doing 𝕀_F(X)⊠(w∘v)_Y followed by (w∘v)_X⊠𝕀_H(Y). To see this, using the definitions of the maps (w∘v)_X and (w∘v)_Y, it suffices to check that the following diagram commutesψ∘ G(Y)∘ϕ∘ F(X) [swap]dd𝕀_F(X)⊠𝕀_ϕ⊠w_Yrrv_X⊠𝕀_G(Y)⊠𝕀_ψψ∘ G(Y)∘ G(X)∘ϕdd𝕀_ϕ⊠𝕀_G(X)⊠w_Y H(Y)∘ψ∘ϕ∘ F(X) rrv_X⊠𝕀_ψ⊠𝕀_H(Y) H(Y)∘ψ∘ G(X)∘ϕ.This follows since the tensor product is a bifunctor. Hence we reach the conclusion. However, if ϕ and ψ are possibly degenerate cocycle morphisms, the composition formula above will not give a cocycle morphism. Therefore, to form a category, we introduce a slightly different composition on cocycle morphisms. Let (ϕ,v):(A,F,J)→ (B,G,I) and (ψ,w):(B,G,I)→ (C,H,K) be cocycle morphisms. 
Let w*v be the collection of isometries {(w*v)_X}_X∈ given byF(X)⊠_ψ∘ϕC [swap]dS_Xr(w*v)_X _ψ∘ϕC⊠ H(X)F(X)⊠_ϕB⊠_ψC r(w∘v)_X _ϕB⊠_ψC⊠ H(X)uT⊠𝕀_H(X).Here S_X(x⊠ c)=lim_λx⊠η_λ⊠ c and T(b⊠ c)=ψ(b)c for any X∈, x∈ F(X), b∈ B, and c∈ C, with η_λ being an approximate unit for B.[We have not shown that S_X is well-defined at this point.]The continuous linear map S_X: F(X)⊠_ψ∘ϕC→ F(X)⊠_ϕB⊠_ψC given by S_X(x⊠ c)=lim_λx⊠η_λ⊠ c for any x∈ F(X), any c∈ C, and some approximate unit η_λ of B is a well-defined isometric bimodule isomorphism for any X∈.We will show that for any x∈ F(X) and any c∈ C, the net x⊠η_λ⊠ c is Cauchy with respect to the norm induced by the right inner product. By definition, we have that ⟨ x⊠(η_λ-η_μ)⊠ c, x⊠(η_λ-η_μ)⊠ c⟩_C= ⟨ c, ⟨ x⊠(η_λ-η_μ), x⊠(η_λ-η_μ)⟩_B c⟩_C. Then, a direct computation shows that ⟨ x⊠(η_λ-η_μ), x⊠(η_λ-η_μ)⟩_B= ⟨η_λ-η_μ, ⟨ x,x⟩_A (η_λ-η_μ)⟩_B = (η_λ-η_μ)ϕ(⟨ x,x⟩_A)(η_λ-η_μ).Therefore, ⟨ x⊠(η_λ-η_μ)⊠ c, x⊠(η_λ-η_μ)⊠ c⟩_C=⟨ c, ψ((η_λ-η_μ)ϕ(⟨ x,x⟩_A)(η_λ-η_μ))c⟩_C. By the Cauchy-Schwarz inequality and since (η_λ-η_μ)ϕ(⟨ x,x⟩_A)^1/2 converges to 0, it is readily seen that ⟨ x⊠(η_λ-η_μ)⊠ c, x⊠(η_λ-η_μ)⊠ c⟩_C converges to 0, so S_X is well-defined. Moreover, commutation with the left and right actions are immediate.Let T_X:F(X)⊠_ϕB⊠_ψC→ F(X)⊠_ψ∘ϕC be the continuous linear map given by T_X(x⊠ b⊠ c)=x⊠ψ(b)c. Then, S_X(T_X(x⊠ b⊠ c))= S_X(x⊠ψ(b)c) = lim_λx⊠η_λ⊠ψ(b)c = lim_λx⊠η_λ⊠ b c = lim_λx⊠η_λ b⊠ c = x⊠ b⊠ c. Therefore, by linearity and continuity of both S_X and T_X, it follows that S_X is surjective and T_X is injective. It now suffices to show that T_X is surjective. This will imply that S_X is invertible with T_X being the inverse. Let x∈ F(X) and c∈ C. Since F(X) is a non-degenerate A-bimodule, recall from Lemma <ref> that the map R_X^-1:F(X)⊠ A→ F(X) given by R_X^-1(x⊠ a)=x a is a bimodule isomorphism. Then, let y∈ F(X) and a∈ A such that y a=x. A straightforward calculation shows that T_X(y⊠ϕ(a)⊠ c)=x⊠ c, so T_X is surjective. Hence, S_X=T_X^-1 is an isomorphism for any X∈. Finally, for any X∈, T_X=𝕀_F(X)⊠ T, where T:_ϕB⊠_ψC→_ψ∘ϕC is given by T(b⊠ c)=ψ(b)c for any b∈ B and c∈ C. Since T is an isometry, we conclude that T_X, and hence S_X are isometric maps. Note that the proof of Lemma <ref> also shows that the map S_X does not depend on the choice of approximate unit. Let (ϕ,v):(A,F,J)→ (B,G,I) and (ψ,w):(B,G,I)→ (C,H,K) be cocycle morphisms. Then (ψ∘ϕ, w*v) is a cocycle morphism.Note that ψ∘ϕ:A→ C is a ^*-homomorphism. Then, (w*v)_X is an isometry as it is a composition of isometries, and it is also an A-C-bimodule map as it is a composition of bimodule maps. Moreover, naturality of w*v follows from naturality of w∘v. Let us now check that the family of maps {(w*v)_X} satisfies the diagrammax width=F(X)⊠ F(Y)⊠_ψ∘ϕCdrJ_X,Y⊠𝕀_C[swap]dl𝕀_F(X)⊠(w*v)_YF(X)⊠_ψ∘ϕC⊠ H(Y) [swap]dd(w*v)_X⊠𝕀_H(Y)F(X⊗ Y)⊠_ψ∘ϕCdd(w*v)_X⊗ Y_ψ∘ϕC⊠ H(Y)⊠ H(X) rr𝕀_C⊠ K_X,Y_ψ∘ϕC⊠ H(X⊗ Y),for all X,Y∈.Fix X,Y∈. By Definition <ref>, composing the two rightmost arrows in (<ref>) yields that(w*v)_X⊗ Y∘ (J_X,Y⊠𝕀_C)=(T⊠𝕀_H(X⊗ Y))∘(w∘v)_X⊗ Y∘ S_X⊗ Y∘(J_X,Y⊠𝕀_C)=(T⊠𝕀_H(X⊗ Y))∘(w∘v)_X⊗ Y∘ (J_X,Y⊠𝕀_B⊠𝕀_C)∘ (𝕀_F(X)⊗ S_Y). Then, by Lemma <ref>, (w∘v)_X⊗ Y∘ (J_X,Y⊠𝕀_B⊠𝕀_C)= (𝕀_B⊠𝕀_C⊠ K_X,Y)∘((w∘v)_X⊠𝕀_H(Y))∘(𝕀_F(X)⊠ (w∘v)_Y).Substituting (<ref>) into (<ref>), and using Definition <ref>, it follows that the diagram in (<ref>) commutes.It remains to check that (w*v)_1_:A⊠_ψ∘ϕC→_ψ∘ϕC⊠ C is given by (w*v)_1_(a⊠ c)=ψ(ϕ(a))⊠ c. 
By definition,(w*v)_1_=(T⊠𝕀_C)∘ (𝕀_B⊠w_1_)∘ (v_1_⊠𝕀_C)∘ S_1_.Let η_λ be an approximate unit for B. Then, a direct calculation shows that (w*v)_1_(a⊠ c) =lim_λ(T⊠𝕀_C)∘ (𝕀_B⊠w_1_)∘ (v_1_⊠𝕀_C)(a⊠η_λ⊠ c)= lim_λ(T⊠𝕀_C)∘ (𝕀_B⊠w_1_)(ϕ(a)⊠η_λ⊠ c) = lim_λ(T⊠𝕀_C)(ϕ(a)⊠ψ(η_λ)⊠ c) = lim_λψ(ϕ(a))ψ(η_λ)⊠ c = ψ(ϕ(a))⊠ c,for any a∈ A and any c∈ C. Hence, (ψ∘ϕ, w*v) is a cocycle morphism.With the same notation as in Definition <ref>, if (ϕ,v) and (ψ,w) are extendible cocycle representations, then it also follows that (ψ^†∘ϕ, w*v) is a well-defined cocycle representation.If the cocycle morphisms in Definition <ref> are assumed to be non-degenerate, then the maps S_X and T are bimodule isomorphisms for any X∈. Therefore, the composition in Definition <ref> corresponds canonically to the composition of correspondence morphisms. The composition considered in Definition <ref> defines a category. To show this we first reformulate the notion of cocycle morphism. Roughly speaking, all the information carried by the collection of isometries {v_X}_X∈ can be encoded into a collection of linear maps {h^X:F(X)→ G(X)}_X∈𝒞 satisfying some conditions. This viewpoint will facilitate our constructions and proofs in later sections. Our approach is motivated by <cit.> which introduces this alternative viewpoint in the unital setting. First, we recall a way of extending an action (F,J) of a C^*-tensor categoryon A to its matrix amplification M_n(A). Consider the functor F^(n):→(M_n(A)) that maps objects X∈ to the correspondence with underlying bimodule F(X)⊗ M_n(ℂ) with the right inner product defined by ⟨ (x_ij),(y_ij)⟩_M_n(A)=(∑_l ⟨ x_li,y_lj⟩_A), right M_n(A) action given by (x_ij) (a_ij)=(∑_l x_il a_lj) and with the left action given by (a_ij)(x_ij)=(∑_l a_il x_lj) for a_ij∈ A and x_ij, y_ij∈ F(X) for 1≤ i,j≤ n.[This bimodule can be identified with the external tensor product of F(X) and M_n(ℂ). Hence the inner product defines a Hilbert M_n(A)-bimodule (see <cit.>).] For a morphism T∈(X,Y) we let F^(n)(T)=T⊗𝕀_M_n(ℂ). Moreover, letting J_X,Y^(n)((x_ij)⊠ y_i,j))=(∑_l J_X,Y(x_il⊠ y_lj)) it is a straightforward calculation that (F^(n),J^(n)) is an action ofon M_n(A).Letbe a -tensor category acting on -algebras A and B via (A,F,J) and (B,G,I) respectively, and let ϕ: A→ B be a ^*-homomorphism. Then there is a bijection between the families {v_X}_X∈ corresponding to a cocycle morphism (ϕ,v):(A,F,J)→ (B,G,I) and families of linear maps: {h^X: F(X)→ G(X)}_X∈such that for any X,Y∈ * h^X(a xa')=ϕ(a) h^X(x)ϕ(a') for any a,a'∈ A;* for any morphism f∈(X→ Y), G(f)∘ h^X=h^Y∘ F(f);* ϕ(⟨ x , y⟩_A)=⟨ h^X(x) , h^X(y)⟩_B for any x,y∈ F(X); * the diagram:F(X)⊠ F(Y) [swap]dh^X⊠ h^YrJ_X,YF(X⊗ Y)dh^X⊗ YG(X)⊠ G(Y) rI_X,YG(X⊗ Y)commutes;* h^1_:A→ B is given by h^1_(a)=ϕ(a) for any a∈ A. Suppose we are given a collection of linear maps {h^X} satisfying the conditions listed above. Fix ζ_λ an approximate unit for A. For all X∈ let v_X:F(X)⊠_ϕB→_ϕB⊠ G(X) be given byv_X(x⊠ b)=lim_λϕ(ζ_λ)⊠ h^X(x) b,x∈ F(X), b∈ B. First, we need to show that v_X is well-defined. It suffices to show that the net ϕ(ζ_λ)⊠ h^X(x) b is Cauchy. A straightforward computation using the definition of the inner product and <ref> gives that ϕ(ζ_λ-ζ_μ)⊠ h^X(x) b^2= ϕ(ζ_λ-ζ_μ) h^X(x) b^2= h^X(ζ_λ x-ζ_μ x) b^2.Since the bimodule F(X) is non-degenerate, denoting the left action by σ:A→ℒ(F(X)), we have that σ(ζ_λ) converges strictly to 1_ℒ(F(X)). Then, for any x∈ F(X), ζ_λ x=σ(ζ_λ)x converges to x. 
Moreover, condition <ref> implies that h^X is continuous, so the right hand side of(<ref>) converges to zero and hence ϕ(ζ_λ-ζ_μ)⊠ h^X(x) b^2 converges to 0. Therefore the formula in (<ref>) gives a well-defined map v_X for all X∈. The maps v_X are linear by linearity of h^X and naturality follows by using naturality of the family {h^X} given by condition <ref>. It is straightforward to see that v_X commutes with the right B-action, and for any a∈ A, b∈ B, X∈ and x∈ F(X)v_X(a (x⊠ b)) =lim_λϕ(ζ_λ)⊠ h^X(a x) b = lim_λϕ(ζ_λ)⊠ϕ(a) h^X(x) b = lim_λϕ(ζ_λ)ϕ(a)⊠ h^X(x) b = ϕ(a)⊠ h^X(x) b = av_X(x⊠ b).Therefore, v_X is an A-B-bimodule map. Moreover, using <ref>,v_1_(a⊠ b)=lim_λϕ(ζ_λ)⊠ϕ(a) b = lim_λϕ(ζ_λ)ϕ(a)⊠ b= ϕ(a)⊠ b, ∀ a∈ A, b∈ B. To prove that (ϕ,v) defines a cocycle morphism, it remains to show that each map v_X is an isometry and the family {v_X} is such that diagram (<ref>) commutes. Let us first show that each map v_X is an isometry. For any x∈ F(X) and any b∈ B, using <ref>, we have|v_X(x⊠ b)|^2= lim_λ⟨ϕ(ζ_λ) h^X(x) b, ϕ(ζ_λ) h^X(x) b⟩= lim_λ⟨ h^X(ζ_λ x) b, h^X(ζ_λ x) b⟩= lim_λ b^* ⟨ h^X(ζ_λ x), h^X(ζ_λ x)⟩ b.On the other hand, <ref> yields that|x⊠ b|^2= ⟨ b, ⟨ x,x⟩_A b⟩= ⟨ϕ(|x|)b, ϕ(|x|)b⟩= b^*ϕ(⟨ x,x⟩_A)b= b^*⟨ h^X(x),h^X(x)⟩_Bb.Similarly, as h^X is continuous by <ref> and F(X) is non-degenerate, it follows that h^X(ζ_λ x) converges to h^X(x), which shows that v_X is an isometry when restricted to elementary tensors. To show that v_X acts as an isometry on sums of the form ∑_i=1^n x_i⊠ b_i for x_i∈ F(X), b_i∈ B and X∈ we consider the amplified actions (F^(n),J^(n)) and (G^(n),I^(n)) on M_n(A) and M_n(B) respectively. It follows from a direct computation that the family of linear maps h^X,(n):F^(n)(X)→ G^(n)(X) defined by (x_ij)↦ (h^X(x_ij)) for X∈ and x_ij∈ F(X) satisfies conditions <ref>-<ref> with the amplified homomorphism ϕ:M_n(A)→ M_n(B). Therefore v_X^(n) defined as in (<ref>) but instead with the pair (ϕ,h^X,(n)) is an isometry when restricted to elementary tensors. Choose 𝐗 in F^(n)(X) with first row given by the vector (x_1,x_2,… ,x_n) and zero elsewhere and 𝐁 in M_n(B) have first column (b_1,b_2,…, b_n) and zero elsewhere. Now, by definition𝐗⊠𝐁^2 =⟨𝐁,⟨𝐗,𝐗⟩_M_n(A)𝐁⟩=∑_i=1^n x_i⊠ b_i^2and similarly through a direct computationv_X^(n)(𝐗⊠𝐁)^2=v_X(∑_i=1^n x_i⊠ b_i)^2.As v_X^(n) is an isometry when restricted to elementary tensors, it follows that v_X is an isometry. It remains to check that the diagram max width=F(X)⊠ F(Y)⊠_ϕBdrJ_X,Y⊠𝕀_B[swap]dl𝕀_F(X)⊠v_YF(X)⊠_ϕB⊠ G(Y) [swap]ddv_X⊠𝕀_G(Y)F(X⊗ Y)⊠_ϕBddv_X⊗ Y_ϕB⊠ G(X)⊠ G(Y) rr𝕀_B⊠ I_X,Y_ϕB⊠ G(X⊗ Y).commutes for all X,Y∈.Starting with an elementary tensor x⊠ y⊠ b with X,Y∈,x∈ F(X), y∈ F(Y) and b∈ B and following the two rightmost maps of the diagram, we get thatx⊠ y⊠ b↦ J_X,Y(x⊠ y)⊠ b↦lim_λϕ(ζ_λ)⊠ h^X⊗ Y(J_X,Y(x⊠ y)) b.Moreover, using that the family of linear maps satisfies condition <ref>, this composition coincides with the mappingx⊠ y⊠ b↦lim_λϕ(ζ_λ)⊠ I_X,Y(h^X(x)⊠ h^Y(y)) b.Again, starting with x⊠ y⊠ b but now following the three leftmost arrows in diagram (<ref>) we get x⊠ y⊠ b↦lim_λx⊠ϕ(ζ_λ)⊠ h^Y(y) b ↦lim_λlim_μϕ(ζ_μ)⊠ h^X(x)ϕ(ζ_λ)⊠ h^Y(y) b = lim_μϕ(ζ_μ)⊠ h^X(x)⊠ h^Y(y) b ↦lim_μϕ(ζ_μ)⊠ I_X,Y(h^X(x)⊠ h^Y(y) b)= lim_μϕ(ζ_μ)⊠ I_X,Y(h^X(x)⊠ h^Y(y)) b,where the first equality holds since h^X(x)ϕ(ζ_λ)=h^X(xζ_λ) converges to h^X(x). So (<ref>) commutes and (ϕ,v) is a cocycle morphism. Now, consider the map Ψ:{h^X}→{v_X} given by the formula in (<ref>). We claim that Ψ is independent of the choice of approximate unit. 
Indeed let ζ_λ and ξ_λ be two approximate units for A. Similarly as in (<ref>) we have that, ϕ(ζ_λ-ξ_λ)⊠ h^X(x) b=h^X(ζ_λ x-ξ_λ x) b, which converges to 0. Hence, the map v_X is independent of the choice of approximate unit, and so Ψ is well-defined. Conversely, suppose we have a cocycle morphism (ϕ,v):(A,F,J)→ (B,G,I) and for each X∈ let h^X:F(X)→ G(X) be given by F(X)rιF(X)⊠_ϕBrv_X _ϕB⊠ G(X)rfG(X),where ι(x)=lim_λx⊠η_λ for some approximate unit η_λ of B and all x∈ F(X), and f is the map given by f(b⊠ y)=b y for all b∈ B and y∈ G(X). Note that f is an A-B-bimodule isomorphism if we see G(X) as a left A-module through ϕ (i.e. ϕ(a)_B f(y)=a_A f(y) for all y∈_ϕ B⊠ G(X) and a∈ A).To check that ι is well-defined, we show that the net x⊠η_λ is Cauchy for all X∈ and x∈ F(X). Precisely, one has that ⟨ x⊠(η_λ-η_μ),x⊠(η_λ-η_μ)⟩_B=⟨η_λ-η_μ, ⟨ x,x⟩_A(η_λ-η_μ)⟩_B= |ϕ(⟨ x,x⟩_A)^1/2(η_λ-η_μ)|^2. This converges to 0 since the image of ϕ is contained in B and η_λ is an approximate unit for B.We now check that the family {h^X} defined above satisfies the required compatibility conditions. Since each of the maps in (<ref>) are linear, we get that h^X is linear. To see <ref>, note that f, v_X and ι are left module maps so ϕ(a) h^X(x)=h^X(a x) for all a∈ A and x∈ F(X). Moreover, as f, v_X and ι are right B-module mapsh^X(x)ϕ(a) =lim_λ f(v_X(x⊠η_λϕ(a)))=h^X(x a).Hence, h^X satisfies <ref>. It is straightforward to see that h^X satisfies <ref> by naturality of v. Note that v_X and f are isometries. So one has that for any x∈ F(X), ⟨ h^X(x) , h^X(x)⟩_B=⟨ι(x),ι(x)⟩_B=lim_λ⟨η_λ,ϕ(⟨ x,x⟩_A)η_λ⟩_B= ϕ(⟨ x,x⟩_A) and <ref> follows from the polarisation identity. Condition <ref> follows from the fact that the maps v_X satisfy the diagram in (<ref>). Finally,h^1_(a) = lim_λ f(v_1_(a⊠η_λ)) = lim_λ f(ϕ(a)⊠η_λ) =ϕ(a).Now, let Φ:{v_X}→{h^X} be the map induced by the formula in (<ref>). Note that ι and hence Φ is independent of the choice of approximate unit. Indeed, let η_λ and ξ_λ be two approximate units for B. We show that the net x⊠(η_λ-ξ_λ) converges to 0 for any x∈ F(X). Note that |x⊠(η_λ-ξ_λ)|^2 =⟨ (η_λ-ξ_λ), ⟨ x, x⟩_A(η_λ-ξ_λ)⟩= |ϕ(⟨ x,x⟩_A)^1/2(η_λ-ξ_λ)|^2,which converges to 0. We claim that Φ and Ψ are inverses to each other. First we show that Φ∘Ψ is the identity map. For any X∈ and x∈ F(X), it follows thatmax width= Φ(Ψ(h^X))(x)=f(Ψ(h^X)(lim_λx⊠η_λ))=f(lim_μlim_λϕ(ζ_μ)⊠ h^X(x)η_λ).Since η_λ is an approximate unit for B and G(X) is non-degenerate, it follows thatΦ(Ψ(h^X))(x)=f(lim_μϕ(ζ_μ)⊠ h^X(x))=lim_μϕ(ζ_μ) h^X(x).As h^X satisfies condition <ref>, Φ(Ψ(h^X))(x)=lim_μh^X(ζ_μ x). Thus, it suffices to show that h^X(ζ_μ x-x)→ 0. This follows by continuity of h^X and that F(X) is non-degenerate. To prove that Ψ∘Φ is the identity map, note thatΨ(Φ(v_X))(x⊠ b) =lim_μϕ(ζ_μ)⊠Φ(v_X)(x) b=lim_μϕ(ζ_μ)⊠ f(v_X(lim_λx⊠η_λ)) b= lim_μϕ(ζ_μ)⊠ f(v_X(lim_λx⊠η_λ b))=lim_μϕ(ζ_μ)⊠ f(v_X(x⊠ b)).Applying f to both sidesf(Ψ(Φ(v_X))(x⊠ b)) =lim_μϕ(ζ_μ)_B f(v_X(x⊠ b))= lim_μζ_μ_A f(v_X(x⊠ b))=lim_μ f(v_X((ζ_μ x)⊠ b))=f(v_X(x⊠ b))as ζ_μ x converges to x. Hence, we reach the conclusion by composing with f^-1: G(X)→_ϕB⊠ G(X) given by f^-1(x)=lim_λη_λ⊠ x.Note that this alternative picture only holds for cocycle morphisms. In the generality of cocycle representations, the maps h^X may not be well-defined. That is because if (ϕ,v) is a cocycle representation, ϕ can land in M(B)∖ B. 
As η_λ is an approximate unit for B ϕ(⟨ x,x⟩_A)^1/2(η_λ-η_μ)^2 need not converge to 0.As in Remark <ref>, if the acting category 𝒞 is semisimple, then the family of linear maps {h^X}_X∈ is uniquely determined by the family {h^X}_X∈(). Precisely, if X≅⊕_iX_i∈ is the decomposition as a direct sum of elements in (), then F(X) is naturally isomorphic to ⊕_iF(X_i) and G(X) is naturally isomorphic to ⊕_iG(X_i). Then, the map h^X is ⊕_ih^X_i. In particular, it suffices to check that a family of linear maps {h^X: F(X)→ G(X)}_X∈() satisfy the conditions of Lemma <ref> to yield a cocycle morphism. When ϕ:A→ B is extendible and (ϕ,{h^X}_X∈) is a cocycle morphism, condition <ref> also follows for a,a'∈ℳ(A). Precisely, if a,a'∈ℳ(A) then h^X(a x a')=ϕ^†(a) h^X(x)ϕ^†(a') for any X∈ and x∈ F(X). To see this, notice that for an approximate unit e_λ of A, ϕ(e_λ) converges to a projection p. Then,p h^X(x)=lim_λϕ(e_λ) h^X(x)=lim_λh^X(e_λ x)=h^X(x).Suppose =(Γ) and (ϕ,{h^g}_g∈Γ):(A,α,J)→ (B,β,I) is an extendible cocycle morphism. Recall from Example <ref> that v_g(a⊠ b)=lim_λη_λ⊠u_gϕ(a)b, where η_λ is an approximate unit for B and (u_g)_g∈Γ⊆ℳ(B). Then for any g∈Γ and a∈ A,h^g(a)=f(v_g(lim_λa⊠η_λ))=f(lim_λη_λ⊠u_gϕ(a))=u_gϕ(a).For convenience, we will denote a cocycle morphism by (ϕ, h), where h denotes the collection of linear maps {h^X}_X∈. We now discuss their composition. If (ϕ,h):(A,F,J)→ (B,G,I) and (ψ,l):(B,G,I)→ (C,H,K) are cocycle morphisms, then (ψ∘ϕ,l∘ h):(A,F,J)→ (C,H,K) is a cocycle morphism and coincides with the composition of (ϕ,h) and (ψ,l). Clearly ψ∘ϕ:A→ C is a ^*-homomorphism and {l^X∘ h^X: F(X)→ H(X)}_X∈ is a family of linear maps. Conditions <ref>, <ref>, <ref>, and <ref> are immediate. The compatibility with the tensor product follows by stacking the diagrams in <ref> of Lemma <ref> for h and l. Thus, (ψ∘ϕ,l∘ h):(A,F,J)→ (C,H,K) induces a cocycle morphism by Lemma <ref>.Take now a,a'∈ A and x,y∈ F(X). Then, by using the equivariance conditions for h^X and l^X, we see that l^X∘ h^X(a xa')=l^X(ϕ(a) h^X(x) ϕ(a'))=ψ(ϕ(a)) l^X(h^X(x)) ψ(ϕ(a')). Similarly, we get thatψ(ϕ(⟨ x , y⟩_A))=ψ(⟨ h^X(x) , h^X(y)⟩_B)=⟨ l^X(h^X(x)) , l^X(h^X(y)) ⟩_C. Then, for any f∈(X→ Y), one can note thatH(f)∘ l^X∘ h^X=l^Y∘ G(f)∘ h^X=l^Y∘ h^Y∘ F(f). F(X)⊠ F(Y) [swap]dh^X⊠ h^YrJ_X,YF(X⊠ Y)dh^X⊠ YG(X)⊠ G(Y) [swap]dl^X⊠ l^YrI_X,YG(X⊠ Y)dl^X⊠ YH(X)⊠ H(Y) rK_X,YH(X⊠ Y).Therefore, it is not hard to see that F(X)⊠ F(Y) [swap]d(l^X∘ h^X)⊠ (l^Y∘ h^Y)rJ_X,YF(X⊠ Y)dl^X⊠ Y∘ h^X⊠ YH(X)⊠ H(Y) rK_X,YH(X⊠ Y). Suppose that (ϕ,v) and (ψ,w) are cocycle morphisms associated to the families of linear maps {h^X}_X∈ and {l^X}_X∈ respectively. We claim that the cocycle morphism (ψ∘ϕ,w*v) has the associated family of linear maps {l^X∘ h^X}_X∈. Recall from Definition <ref> that(w*v)_X=(T⊠𝕀_H(X))∘(w∘v)_X∘ S_X,where (w∘v)_X=(𝕀_ϕ⊠w_X)∘(v_X⊠𝕀_ψ) (see (<ref>)). Start with an elementary tensor x⊠ c and let ζ_μ be an approximate unit of A and η_λ be an approximate unit of B. Following the composition of maps defining (w*v)_X, we get thatx⊠ c↦lim_λx⊠η_λ⊠ c ↦lim_λlim_μϕ(ζ_μ)⊠ h^X(x)η_λ⊠ c= lim_μϕ(ζ_μ)⊠ h^X(x)⊠ c ↦lim_μlim_λϕ(ζ_μ)⊠ψ(η_λ)⊠ l^X(h^X(x)) c↦lim_μlim_λψ(ϕ(ζ_μ))ψ(η_λ)⊠ l^X(h^X(x)) c= lim_μψ(ϕ(ζ_μ))⊠ l^X(h^X(x)) c,where the last equality follows since η_λ is an approximate unit, and in particular fixes ϕ(ζ_μ) in the limit. But this is precisely the formula in (<ref>) corresponding to the family of linear maps l∘ h, so the two compositions agree.Lemma <ref> shows that any cocycle morphism can be equivalently represented by a pair (ϕ,h). 
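We note in passing one consequence of this picture, which is used repeatedly below without further comment: every map h^X is automatically a linear contraction. Indeed, by the compatibility of h^X with the inner products, ‖h^X(x)‖^2=‖ϕ(⟨ x,x⟩_A)‖≤‖⟨ x,x⟩_A‖=‖x‖^2 for all X∈𝒞 and all x∈ F(X), since ^*-homomorphisms are contractive.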
Moreover Lemma <ref> shows that in the latter picture the composition of cocycle morphisms translates to the composition of the underlying ^*-homomorphisms and linear maps. From now on we will freely identify these two pictures.The class of --algebras (A,F,J), together with cocycle morphisms (ϕ,h):(A,F,J)→ (B,G,I) defines a category with respect to the composition in Lemma <ref>. The composition in Lemma <ref> is easily seen to be associative. Moreover, for any cocycle morphism (ϕ,h), the cocycle morphisms (𝕀_A,{𝕀_F(X)}) and (𝕀_B,{𝕀_G(X)}) are left and right identities respectively.In the spirit of <cit.>, we have constructed a category of --algebras. The generalised cocycle category _ is defined as the category whose objects are --algebras and whose morphisms are cocycle morphisms. Composition of morphisms in _ is defined in Lemma <ref>. On any object (A,F,J), the identity morphism in this category is given by (𝕀_A,{𝕀_F(X)}_X∈). A cocycle morphism (ϕ,h):(A,F,J)→ (B,G,I) is invertible in this category if and only if ϕ:A→ B is an isomorphism and h^X is bijective for any X∈, in which case the inverse is given by (ϕ^-1,h^-1). Following the terminology in <cit.>, we say that an invertible morphism in this category is a cocycle conjugacy.§ INDUCTIVE LIMITS In this section, we construct inductive limits in _. This is done in <cit.>, when restricted to unital -algebras and unital, injective connecting maps. Our approach is slightly different and does not need these assumptions. Before starting our construction, let us set up some notation.Letbe a -tensor category, and A_n be a sequence of separable -algebras on whichacts via the pair (F_n,J^(n)). Then, let(ϕ_n,h_n):(A_n,F_n,J^(n))→ (A_n+1,F_n+1,J^(n+1))be a sequence of cocycle morphisms. Recall that the -inductive limit A=lim_⟶{A_n,ϕ_n} is defined as the completion of A^(0)={(a_n)_n≥ 1∈⊕_ℓ^∞A_n/⊕_c_0A_n: ∃ N∈ℕ s.t. ∀ n≥ N ,ϕ_n(a_n)=a_n+1}with respect to the topology induced by the norm (a_n)_n≥ 1=lim_n→∞a_n_A_n. Recall that the connecting maps ϕ_n,∞:A_n→ A are given by (ϕ_n,∞(a_n))_k= ϕ_n,k(a_n),k≥ n0,k<nfor all n≥ 1, where we adopt the standard notation ϕ_n,m:=ϕ_m-1∘…∘ϕ_n for any m>n≥ 1.Similarly, for any X∈ and any m>n≥ 1, we consider the natural family h_n,m^X:F_n(X)→ F_m(X) obtained by composition, with the convention that h_n,n+1^X=h_n^X.To build an action on A, we start by constructing bimodules that will form the image of the functor. Essentially, for any X∈, we can build a Hilbert A-A-bimodule as an inductive limit of F_n(X). The construction is very similar to the one in (<ref>).DefineF^(0)(X)={(x_n)_n≥ 1∈⊕_ℓ^∞F_n(X)/⊕_c_0F_n(X): ∃ N∈ℕ s.t. ∀ n≥ N , h_n^X(x_n)=x_n+1},where the norm on F_n(X) is induced by the right inner product. For any x=(x_n)_n≥ 1,y=(y_n)_n≥ 1∈ F^(0)(X) and any a=(a_n)_n≥ 1∈ A^(0), we can define x a= (x_n a_n)_n≥ 1and ⟨ x,y⟩_A^(0)=(⟨ x_n,y_n⟩_A_n)_n≥ 1.For any X∈, F^(0)(X) equipped with the structure in (<ref>) and (<ref>) is a right pre-Hilbert-A^(0)-module. We start by checking that the right action is well-defined. Firstly, if x=(x_n)_n≥ 1∈ F^(0)(X) and a=(a_n)_n≥ 1∈ A^(0), then (x_n a_n)_n≥ 1 induces an element in F^(0)(X). By <ref> of Lemma <ref>, h_n^X(x_n a_n)=h_n^X(x_n)ϕ_n(a_n) for any n∈ℕ, a_n∈ A_n, and x_n∈ F_n(X). Moreover, since x∈ F^(0)(X) and a∈ A^(0), there exists N∈ℕ such that h_n^X(x_n)= x_n+1 and ϕ_n(a_n)=a_n+1 for all n≥ N. Hence,h_n^X(x_n a_n)=x_n+1 a_n+1for all n≥ N, as required.That (<ref>) is independent of the choice of representative sequences, follows exactly as in the proof of Lemma <ref>. 
Thus, (<ref>) gives a well-defined right action of A^(0). We now check that (<ref>) gives a well-defined right pre-inner product. Firstly, if x=(x_n)_n≥ 1,y=(y_n)_n≥ 1∈ F^(0)(X), then ⟨ x,y⟩ is an element of A^(0). By <ref> of Lemma <ref> applied to each map h_n^X, we have thatϕ_n(⟨ x_n,y_n⟩_A_n)=⟨ h_n^X(x_n),h_n^X(y_n)⟩_A_n+1.Moreover, since x,y∈ F^(0)(X), there exists N∈ℕ such that for any n≥ N, ⟨ h_n^X(x_n),h_n^X(y_n)⟩_A_n+1=⟨ x_n+1,y_n+1⟩_A_n+1. Hence,ϕ_n(⟨ x_n,y_n⟩_A_n)=⟨ x_n+1,y_n+1⟩_A_n+1for all n≥ N, as required.That (<ref>) is independent of the choice of representative sequences, follows as in the proof of Lemma <ref>. Thus, (<ref>) gives a well-defined A^(0)-valued map. It is now straightforward to check that this function is right linear, left conjugate linear, and antisymmetric. Finally, it is clear that ⟨ x,x⟩_A^(0)≥ 0 and ⟨ x,x⟩_A^(0) = 0 if and only if ⟨ x_n,x_n⟩_A_n converges to 0 i.e. x=0 in F^(0)(X). Thus, the conclusion follows. For any X∈, since A^(0) is a dense ^*-subalgebra of A, combining Lemma <ref> and <cit.>, we form the completion of F^(0)(X), denoted by F(X). This is a right-Hilbert A-module.For any X∈, with the notation above, F(X) is a right Hilbert A-A-bimodule. We start by defining a left A-action on F(X). For any x=(x_n)_n≥ 1∈ F^(0)(X) and any a=(a_n)_n≥ 1∈ A^(0), we can define a x= (a_n x_n)_n≥ 1.The fact that (<ref>) is a well-defined left A^(0)-action on F^(0)(X) follows as in the proof of Lemma <ref>. Moreover, the left A^(0)-action is adjointable, since the left action of A_n on F_n(X) is adjointable for any n≥ 1. Thus, using <cit.>, it extends to a left action of A on F(X) by density.We now show that the assignment X→ F(X) extends to an action ofon A. With the notation above, F:→_0^(A) is a -functor. Moreover, there exists a unitary natural isomorphismJ:={J_X,Y: F(X)⊠ F(Y)→ F(X⊗ Y): X,Y∈}such that the pair (F,J) is an action ofon A.Since F_n(X)∈_0^(A_n) for any X∈ and any n∈ℕ, it follows that F(X)∈_0^(A) for any X∈. Suppose X,Y∈ and f:X→ Y a morphism in . Then, we define F(f)(x)=(F_n(f)(x_n))_n≥ 1 for any x∈ F^(0)(X). Notice that F_n(f) is adjointable for any n∈ℕ, hence bounded. Therefore, F(f) is bounded on F^(0)(X) and may be extended to an adjointable A-A-bimodule map on F(X). It is straightforward to check that F defines -functor. Furthermore, for any X,Y∈, x=(x_n)_n≥ 1∈ F^(0)(X), and y=(y_n)_n≥ 1∈ F^(0)(Y), letJ_X,Y(x⊠ y)=(J_X,Y^(n)(x_n⊠ y_n))_n≥ 1. We claim that J_X,Y is well-defined. Firstly, note that J_X,Y is independent of the choice of representative sequences by continuity of J_X,Y^(n) for any n∈ℕ.Secondly, there exists N∈ℕ such that for any n≥ N,h_n^X(x_n)=x_n+1 and h_n^Y(y_n)=y_n+1.Then, Lemma <ref> gives thath_n^X⊗ Y(J_X,Y^(n)(x_n⊠ y_n))=J_X,Y^(n+1)(h_n^X(x_n)⊠ h_n^Y(y_n))=J_X,Y^(n+1)(x_n+1⊠ y_n+1),so the image of J_X,Y is indeed contained in F^(0)(X⊗ Y). By density, we can now extend J_X,Y to F(X)⊠ F(Y). Moreover, since each J^(n) is a unitary natural isomorphism, so is J. Finally, since each family {J_X,Y^(n):X,Y∈} satisfies a commuting diagram as in (<ref>), so does the collection of maps {J_X,Y:X,Y∈}. Hence, the pair (F,J) gives an action ofon A. We now show the existence of sequential inductive limits in _. 
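For orientation (this remark is not used in what follows): when 𝒞 is the category (Γ) considered in Example <ref>, each (A_n,F_n,J^(n)) arises from a genuine action α^(n) of a countable discrete group Γ, and the connecting maps are Γ-equivariant ^*-homomorphisms with trivial cocycle part, the construction above should simply recover the triple associated with the induced inductive limit action of Γ on A=lim_⟶{A_n,ϕ_n}.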
The triple (A,F,J) constructed above defines the inductive limit of the inductive system in (<ref>).For any n≥ 1, we need to define cocycle morphisms(ϕ,h)_n,∞:(A_n,F_n,J^(n))→ (A,F,J)such that for any n≥ 1, the diagram [row sep=4em, column sep=2em] (A_n,F_n,J^(n)) rr(ϕ,h)_n,∞[swap]dr(ϕ_n,h_n) (A,F,J)(A_n+1,F_n+1,J^(n+1)) [swap]ur(ϕ,h)_n+1,∞, commutes.For any n≥ 1, we have a ^*-homomorphism ϕ_n,∞:A_n→ A. To define a cocycle morphism from (A_n,F_n,J^(n)) to (A,F,J), it suffices to find a collection of linear maps {h_n,∞^X:F_n(X)→ F(X)}_X∈ satisfying the conditions in Lemma <ref>. If X∈ and x_n∈ F_n(X) for n∈ℕ, we define (h_n,∞^X(x_n))_k= h_n,k^X(x_n) k≥ n 0 k<n.By convention, h_n,n^X(x_n)=x_n for any X∈ and any x_n∈ F_n(X). Then, h_n,∞^X is a linear map into F(X). Moreover, the conditions of Lemma <ref> are checked pointwise since (ϕ_n,k,h_n,k) is a cocycle morphism for any k>n. Thus, for any n≥ 1, the pair (ϕ_n,∞,h_n,∞) defines a cocycle morphism from (A_n,F_n,J^(n)) to (A,F,J). Moreover, as each h_n,k^X satisfies <ref> of Lemma <ref>, then for any a_n,a_n'∈ A_n,h_n,∞^X(a_n x_n a_n')=ϕ_n,∞(a_n) h_n,∞^X(x_n)ϕ_n,∞(a_n'). Hence, the family of maps h_n∞ satisfies <ref> of Lemma <ref>.Then, if f∈(X→ Y), for any k>n, by <ref> of Lemma <ref>, we have F_k(f)∘ h_n,k^X=h_n,k^Y∘ F_n(f). Therefore, using the definition of the functor F, it follows that F(f)∘ h_n,∞^X=h_n,∞^Y∘ F_n(f). Thus, the family of maps h_n,∞ satisfies <ref> of Lemma <ref>. To check condition <ref> of Lemma <ref>, let X∈ and x_n,y_n∈ F_n(X). Then, by definition, ⟨ h_n,∞^X(x_n),h_n,∞^X(y_n)⟩_A= ⟨ h_n,k^X(x_n),h_n,k^X(y_n)⟩_A_kk≥ n 0 k<n.Applying <ref> of Lemma <ref> to the maps h_n,k^X, it follows that for any k≥ n,⟨ h_n,k^X(x_n),h_n,k^X(y_n)⟩_A_k=ϕ_n,k(⟨ x_n,y_n⟩_A_n).Therefore, we conclude that ⟨ h_n,∞^X(x_n),h_n,∞^X(y_n)⟩_A=ϕ_n,∞(⟨ x_n,y_n⟩_A_n). Hence, the family of maps h_n,∞ satisfies <ref> of Lemma <ref>.By <ref> of Lemma <ref>, for any X,Y∈ and any k≥ n, the maps h_n,k^X and h_n,k^Y satisfy the commuting diagramF_n(X)⊠ F_n(Y) [swap]dh_n,k^X⊠ h_n,k^YrJ_X,Y^(n)F_n(X⊗ Y)dh_n,k^X⊗ YF_k(X)⊠ F_k(Y) rJ_X,Y^(k)F_k(X⊗ Y).Therefore, by definition, the diagramF_n(X)⊠ F_n(Y) [swap]dh_n,∞^X⊠ h_n,∞^YrJ_X,Y^(n)F_n(X⊗ Y)dh_n,∞^X⊗ YF(X)⊠ F(Y) rJ_X,YF(X⊗ Y)commutes. Thus, the family of maps h_n,∞ satisfies <ref> of Lemma <ref>. By <ref> of Lemma <ref>, h_n,k^1_(a_n)=ϕ_n,k(a_n) for any k≥ n≥ 1 and any a_n∈ A_n. Then, by definition, we get that h_n,∞^1_(a_n)=ϕ_n,∞(a_n) for any n≥ 1 and any a_n∈ A_n. This shows that the family of maps h_n,∞ satisfies <ref> of Lemma <ref>.We now check that the triple (A,F,J) is the inductive limit of the sequence of triples (A_n,F_n,J^(n)), together with connecting morphisms (ϕ_n,h_n). Firstly, we claim that (<ref>) commutes. Since A is an inductive limit for the system (A_n,ϕ_n)_n≥ 1, it follows that for any n≥ 1, ϕ_n,∞=ϕ_n+1,∞∘ϕ_n. Therefore, it suffices to check that for any X∈ and any n≥ 1, we have that h_n,∞^X=h_n+1,∞^X∘ h_n^X. This follows by (<ref>) since for any k>n+1, h_n,k^X=h_n+1,k^X∘ h_n^X. It remains to check that (A,F,J) satisfies the universal property. Precisely, let B be a -algebra and (B,G,I) defining an action ofon B. Suppose there exists a sequence of cocycle morphisms (ψ,l)_n,∞:(A_n,F_n,J^(n))→ (B,G,I) such that the diagram[row sep=4em, column sep=2em] (A_n,F_n,J^(n)) rr(ψ,l)_n,∞[swap]dr(ϕ_n,m,h_n,m) (B,G,I)(A_m,F_m,J^(m)) [swap]ur(ψ,l)_m,∞commutes for any m>n≥ 1. We claim that there exists a cocycle morphism (Φ,r):(A,F,J)→ (B,G,I) such that for any n≥ 1,(Φ,r)∘ (ϕ,h)_n,∞= (ψ,l)_n,∞. 
By the universal property of the inductive limit A, there exists a ^*-homomorphism Φ:A→ B such that Φ∘ϕ_n,∞=ψ_n,∞ for all n≥ 1. Then, for any X∈, the union ⋃_k≥ 1h_k,∞^X(F_k(X)) is dense in F(X). For any x_k∈ F_k(X), define r^X(h_k,∞^X(x_k))=l_k,∞^X(x_k). As l_k,∞^X is continuous for any k≥ 1 and any X∈, we may extend r^X to a well-defined linear map r^X:F(X)→ G(X). The fact that r^X satisfies the conditions appearing in Lemma <ref> is routine and essentially follows from the fact that for each k≥ 1, (ψ_k,∞,l_k,∞) and (ϕ_k,∞,h_k,∞) are cocycle morphisms. For the convenience of the reader, we check condition <ref> of Lemma <ref>. Let X∈, x_k, y_k∈ F_k(X). Then, ⟨ r^X(h_k,∞^X(x_k)), r^X(h_k,∞^X(y_k))⟩ = ⟨ l_k,∞^X(x_k), l_k,∞^X(y_k)⟩= ψ_k,∞(⟨ x_k,y_k⟩)= Φ∘ϕ_k,∞(⟨ x_k,y_k⟩)= Φ(⟨ h_k,∞^X(x_k), h_k,∞^X(y_k)⟩).The remaining conditions follow similarly. Therefore, (Φ,r) is a cocycle morphism and (Φ,r)∘ (ϕ,h)_n,∞= (ψ,l)_n,∞. Hence, the triple (A,F,J) is the inductive limit of the system in (<ref>). §.§ Topology induced on cocycle representationsLet 𝒞 be a semisimple -tensor category with countably many isomorphism classes of simple objects. Let 𝒞F↷ A and 𝒞G↷ B be actions of 𝒞 on -algebras A and B. If (ϕ,v),(ψ,w):(A,F,J)→ (B,G,I) are cocycle representations, we introduce pseudometrics that will measure the distance between them. Roughly speaking, the family of pseudometrics measures the pointwise distance between ϕ and ψ in the strict topology and the pointwise distance between v_X and w_X for all X∈. Recall that for any fixed X∈, v_X:F(X)⊠_ϕB→_ϕB⊠ G(X) and w_X:F(X)⊠_ψB→_ψB⊠ G(X). By the construction of the tensor product, both F(X)⊠_ϕB and F(X)⊠_ψB are quotients of the algebraic tensor product F(X)⊙ B. To compare the difference between the maps, it suffices to do so on their respective images of elementary tensors x⊙ b in the right Hilbert B-module B⊠ G(X). For any finite set K⊂() containing 1_ and any compact sets ℱ^B⊂ B, and ℱ^X⊂ F(X) for any X∈ K, denote ℱ= ℱ^B× K×ℱ^X. Then, we define the pseudometricd_ℱ((ϕ,v), (ψ,w)) = max_(b,X,x)∈ℱv_X(x⊠_ϕ b)-w_X(x⊠_ψ b),where the norm is induced by the right inner product on the right Hilbert B-module B⊠ G(X).Note that F(1_)=A and for any a∈ A and b∈ B v_1_(a⊠_ϕb)=ϕ(a)b and w_1_(a⊠_ψb)=ψ(a)b. Therefore, the family of pseudometrics d_ℱ bounds the pointwise distance between ϕ and ψ in the strict topology.We will only use this topology in the setting of cocycle morphisms, so let us derive the relevant subset topology. Let (ϕ_λ,v_λ), (ϕ,v):(A,F,J)→ (B,G,I) be cocycle morphisms with associated families of linear maps {h_λ^X} and {h^X}. Then (ϕ_λ,v_λ) converges to (ϕ,v) if and only if h_λ^X→ h^X pointwise in the norm induced by the right inner product and uniformly over finite sets of () containing 1_.Suppose that h_λ^X converges toh^X pointwise in the norm induced by the right inner product and uniformly over finite sets of () containing 1_. Let ζ_μ be an approximate unit of A and recall from (<ref>) that for any X∈, x∈ F(X), and b∈ Bv_X(x⊠_ϕ b)=lim_μϕ(ζ_μ)⊠ h^X(x) band(v_λ)_X(x⊠_ϕ_λ b)=lim_μϕ_λ(ζ_μ)⊠ h_λ^X(x) b.Moreover, by Lemma <ref> <ref> h^1_(a)=ϕ(a) and h^1__λ(a)=ϕ_λ(a) for any a∈ A. It follows that (v_λ)_X converges pointwise to v_X, and uniformly over finite sets of () containing 1_. Therefore, (ϕ_λ,v_λ) converges to (ϕ,v) by definition of the pseudometrics in Definition <ref>.Conversely, suppose that (ϕ_λ,v_λ) converges to (ϕ,v). Then (ϕ_λ,v_λ) satisfies the Cauchy criterion with respect to every pseudometric d_ℱ above. 
Recall from (<ref>) that h_λ^X(x)=f((v_λ)_X(ι(x)) for any X∈ and x∈ F(X), where f and ι are defined in (<ref>). Since (v_λ)_X is Cauchy in the point-norm topology and f is continuous, it follows that h_λ^X is Cauchy uniformly over finite sets K⊂() containing 1_. But the norm induced by the right inner product is complete, so h_λ^X converges pointwise to h^X uniformly over finite sets K⊂() containing 1_.Let Γ be a countable discrete group. Let (α,𝔲): Γ↷ A and (β,𝔳): Γ↷ B be two twisted actions on -algebras and (ϕ,v'), (ψ,w'):(A,α,𝔲)→ (B,β,𝔳) be two extendible cocycle morphisms as in Definition <ref>. Recall from Example <ref> that (ϕ,v') and (ψ,w') induce cocycle morphisms (ϕ,v),(ψ,w):(A,α,𝔲)→(B,β,𝔳) in the sense of Definition <ref>, where we view (α,𝔲) and (β,𝔳) as (Γ)-actions. So,v_g([a⊠ b]_ϕ)=lim_μη_μ⊠v_g'ϕ(a)b, w_g([a⊠ b]_ψ)=lim_μη_μ⊠w_g'ψ(a)b,for all a∈ A, b∈ B, g∈Γ, and η_μ an approximate unit for B. In this case, the topology is induced by the pseudometrics d_ℱ((ϕ,v), (ψ,w))= max_(a,b,g)∈ℱv_g'ϕ(a)b-w_g'ψ(a)b,where ℱ=ℱ^A×ℱ^B× K, for compact sets ℱ^A⊂ A, ℱ^B⊂ B, and finite K⊂Γ containing 1_Γ. Equivalently, by taking an approximate unit for B, the topology is generated by the pseudometricsd_ℱ^A× K((ϕ,v), (ψ,w))= max_(a,g)∈ℱ^A× Kv_g'ϕ(a)-w_g'ψ(a).In <cit.>, Szabó defines a topology on cocycle morphisms which is generated by the family of pseudometricsd_ℱ((ϕ,v'), (ψ,w'))=max_a∈ℱ^Aϕ(a)-ψ(a)+ max_g∈ Kmax_b∈ℱ^Bb(v_g'-w_g').The convergence with respect to the family of pseudometrics in (<ref>) implies convergence with respect to the family of pseudometrics in (<ref>). If ϕ and ψ are non-degenerate, then ϕ(A)B and ψ(A)B are dense in B. Moreover, the case g=1 in (<ref>) recovers the pointwise difference of the morphisms. Thus, convergence with respect to the family of pseudometrics in (<ref>) implies convergence with respect to the family of pseudometrics in (<ref>). However, these topologies are different outside the non-degenerate setting, with the topology induced by the family ofpseudometrics in (<ref>) being coarser than the topology induced by the family of pseudometrics in (<ref>). Finally, we would like to point out that the topology induced by (<ref>) coincides with that used in <cit.>. We finish our discussion by noticing that the composition of cocycle morphisms is jointly continuous with respect to the topology defined above. This fact will be used in Section <ref> in the context of asymptotic unitary equivalence. Let (ϕ,h):(A,F,J)→ (B,G,I) and (ψ,l):(B,G,I)→ (C,H,K) be two cocycle morphisms. Then the composition map given by[(ϕ,h),(ψ,l)]↦ (ψ,l)∘ (ϕ,h)is jointly continuous.Let (ϕ_λ,h_λ):(A,F,J)→ (B,G,I) and (ψ_λ,l_λ):(B,G,I)→ (C,H,K) be two convergent nets with limits (ϕ,h) and (ψ,l) respectively. First, note that l^X∘ h^X=lim_λ→∞l_λ^X∘ h_λ^X in the point-norm topology and uniformly over finite sets K⊂() containing 1_. Then, the case X=1_ yields that ψ∘ϕ=lim_λ→∞ψ_λ∘ϕ_λ holds in the point-norm topology. Using the composition formula in Lemma <ref> and Lemma <ref>, the conclusion follows.§.§ Approximate unitary equivalence We can now introduce a notion of approximate unitary equivalence that will be crucial to perform equivariant Elliott intertwining arguments. We start by defining unitary equivalence for cocycle morphisms and then use the topology on the space of morphisms to obtain an approximate notion of unitary equivalence.Supposeis a semisimple -tensor category with countably many isomorphism classes of simple objects acting on a -algebra B. 
We denote this action by the triple (B,G,I). For any u unitary in ℳ(B), we consider (u):B→ B to be the ^*-homomorphism given by b↦ ubu^*. Then (u) induces a Hilbert B-bimodule _(u)B. The map T_u:_(u)B→ B given by b↦ u^*b is a bimodule isomorphism. But for any X∈, G(X)⊠ B≅ B⊠ G(X), so there exists a unitary isomorphism (v_u)_X: G(X)⊠_(u)B→_(u)B⊠ G(X). It follows that ((u),v_u):(B,G,I)→ (B,G,I) is a cocycle morphism. We denote by h_u={h_u^X:G(X)→ G(X)}_X∈ the collection of linear maps corresponding to the cocycle morphism induced by (u).Let 𝒞F↷ Bbe an action of 𝒞 on a -algebra B, and let u∈ℳ(B) be a unitary. Then (u) induces a cocycle morphism ((u),h_u):(B,G,I)→(B,G,I), where h_u^X(x)=u x u^* for any X∈ and any x∈ G(X).[Recall thatdenotes an action on the left, whilean action on the right.]For any X∈, consider the bimodule maps L_X:G(X)→ B⊠ G(X) and R_X:G(X)→ G(X)⊠ B given by L_X(x)=lim_μη_μ⊠ x and R_X(x)=lim_μx⊠η_μ for some η_μ quasicentral approximate unit of B with respect to ℳ(B).As the bimodule G(X) is non-degenerate for any X∈, it follows from Lemma <ref> thatu x= L_X^-1(u L_X(x))x u^*=R_X^-1(R_X(x) u^*).Then, consider the mapι: G(X)→ G(X)⊠_(u)B→ G(X)⊠ Bgiven byx↦lim_μx⊠η_μ↦lim_μx⊠ u^*η_μ. Since η_μ is quasicentral,ι(x)=lim_μx⊠η_μ u^*=R_X(x) u^*. Then, L_X∘ R_X^-1∘ι: G(X)→ B⊠ G(X) is the map given byL_X(R_X^-1(ι(x)))=L_X(x u^*).Finally, we consider the mapf:B⊠ G(X)→_(u)B⊠ G(X)→ G(X)given by f(b⊠ x)=L_X^-1(u (b⊠ x)). Following the construction in Lemma <ref> and since the family of maps {h_u^X} is independent of the chosen approximate unit, it follows thath_u^X=f∘ L_X∘ R_X^-1∘ι.Therefore, for any X∈ and any x∈ G(X),h_u^X(x)=f(L_X(x u^*))=L_X^-1(u L_X(x u^*))=u x u^*,which finishes the proof.* Let 𝒞F↷ A and 𝒞G↷ Bbe actions of 𝒞 on -algebras A and B and let (ϕ,v),(ψ,w):(A,F,J)→ (B,G,I) be cocycle representations. We say that the pairs (ϕ,v) and (ψ,w) are unitarily equivalent if there exists a unitary u∈𝒰(ℳ(B)) such that((u),v_u)∘ (ϕ,v)= (ψ,w).[As mentioned in Remark <ref>, the composition works since (u) is a non-degenerate ^*-homomorphism.]* Let (ϕ,v):(A,F,J)→ (B,G,I) be a cocycle representation and u∈ℳ(B) a unitary.We say that (ψ,w) is an approximate unitary conjugate of (ϕ,v) if there exists a net of unitaries u_λ∈𝒰(ℳ(B)) such thatψ(a)=lim_λu_λϕ(a)u_λ^*,andmax_X∈ Kw_X(x⊠_ψb)-(v_u_λ*v)_X(x⊠_ϕb)λ⟶ 0for all a∈ A, b∈ B, every finite set K⊂(), and any x∈ F(X) for any X∈ K. We denote this by (ψ,w)⪷_u(ϕ,v).[Note that this means precisely that ((u_λ),v_u_λ)∘(ϕ,v) converges to (ψ,w) with respect to the topology in Definition <ref>.]* We say that (ϕ,v) and (ψ,w) are approximately unitarily equivalent if (ψ,w)⪷_u(ϕ,v) and (ϕ,v)⪷_u(ψ,w). This will be denoted by (ϕ,v)≈_u(ψ,w).In general, we will be interested in these notions when (ϕ,v) and (ψ,w) are cocycle morphisms, so let us record the following lemma. If (ϕ,v),(ψ,w):(A,F,J)→ (B,G,I) are cocycle morphisms with associated families of linear maps {h^X} and {l^X} respectively, then (ψ,w) is an approximate unitary conjugate of (ϕ,v) if and only if there exists a net of unitaries u_λ∈𝒰(ℳ(B)) such thatmax_X∈ Kl^X(x)-u_λ h^X(x) u_λ^*λ⟶ 0for any finite K⊆() containing 1_, and any x∈ F(X).By definition, (ψ,w) is an approximate unitary conjugate of (ϕ,v) if and only if there exists a net of unitaries u_λ in ℳ(B) inducing a net of cocycle morphisms ((u_λ),v_u_λ) such that ((u_λ),v_u_λ)∘(ϕ,v) converges to (ψ,w). Suppose the cocycle morphisms ((u_λ),v_u_λ) have associated families of linear maps {h_u_λ^X}. 
By Lemma <ref>, ((u_λ),v_u_λ)∘(ϕ,v) converges to (ψ,w) if and only if h_u_λ^X∘ h^X converges pointwise to l^X uniformly over finite sets K⊂() containing 1_. Moreover, by Lemma <ref>,h_u_λ^X(h^X(x))=u_λ h^X(x) u_λ^*for all X∈ and all x∈ F(X) , so the conclusion follows.There is a key difference to Szabó's notion of approximate unitary conjugacy in <cit.>. The notion of approximate unitary conjugacy in Definition <ref> is symmetric, even when restricted to possibly degenerate morphisms. In Lemma <ref>, we show that approximate unitary conjugacy implies approximate unitary equivalence. This is a consequence of the difference in topologies (see Example <ref>).Let (ϕ,h),(ψ,l):(A,F,J)→(B,G,I) be cocycle morphisms. If (ψ,l) is an approximate unitary conjugate of (ϕ,h), then (ϕ,h) and (ψ,l) are approximately unitarily equivalent.Suppose there exist unitaries u_λ∈𝒰(ℳ(B)) such thatmax_X∈ Kl^X(x)-u_λ h^X(x) u_λ^*λ⟶ 0for any finite set K⊆() containing 1_, and any x∈ F(X). We show thatmax_X∈ Kh_u_λ^*^X∘ l^X(x)-h^X(x)λ⟶ 0for any finite set K⊂() containing 1_, and any x∈ F(X) for any X∈ K.Note that for any X∈ and any x∈ F(X), we have that h_u_λ^*^X(h_u_λ^X(h^X(x)))=u_λ^* h_u_λ^X(h^X(x)) u_λ=1_ℳ(B) h^X(x) 1_ℳ(B).But G(X) is non-degenerate, so 1_ℳ(B) h^X(x) 1_ℳ(B)=h^X(x) by Lemma <ref>. Therefore, for any X∈ and any x∈ F(X),h_u_λ^*^X∘ l^X(x)-h^X(x) =h_u_λ^*^X(l^X(x)-h_u_λ^X∘ h^X(x)).Take y_λ= l^X(x)-h_u_λ^X∘ h^X(x) and note that⟨ h_u_λ^*^X(y_λ),h_u_λ^*^X(y_λ)⟩_B=(u_λ^*)(⟨ y_λ,y_λ⟩_B).Thus, if y_λλ⟶0, we also have that h_u_λ^*^X(y_λ)λ⟶0, which completes the proof. We finish this section by defining asymptotic unitary equivalence for cocycle representations. Let 𝒞F↷ A and 𝒞G↷ Bbe actions of 𝒞 on -algebras A and B respectively and let (ϕ,v),(ψ,w):(A,F,J)→ (B,G,I) be cocycle representations. We say that (ψ,w) is asymptotically unitarily equivalent to (ϕ,v) if there exists a strictly continuous map u: [0,∞) →𝒰(ℳ(B)) such thatψ(a)=lim_t→∞u_tϕ(a)u_t^*,andmax_X∈ Kw_X(x⊠_ψb)-(v_u_t*v)_X(x⊠_ϕb)t→∞⟶ 0for all a∈ A, b∈ B, every finite set K⊂(), and any x∈ F(X) for any X∈ K. This will be denoted by (ϕ,v)≅_u(ψ,w).[Following Definition <ref>, we could have defined asymptotic unitary conjugacy. However, this implies asymptotic unitary equivalence as in Lemma <ref>.] The same argument as in Lemma <ref> gives the following equivalent characterisation for asymptotic unitary equivalence of cocycle morphisms. If (ϕ,v),(ψ,w):(A,F,J)→ (B,G,I) are cocycle morphisms with associated families of linear maps {h^X}_X∈ and {l^X}_X∈ respectively, then (ψ,w) is asymptotically unitarily equivalent to (ϕ,v) if and only if there exists a strictly continuous map u:[0,∞)→𝒰(ℳ(B)) such that max_X∈ Kl^X(x)-u_t h^X(x) u_t^*t→∞⟶ 0forany finite K⊆() containing 1_, and every x∈ F(X). Let (ϕ_1,h_1):(A,F,J)→ (B,G,I), (ψ_1,l_1):(B,G,I)→ (C,H,K) be two cocycle morphisms. Suppose that (ϕ_1,h_1) is unitarily equivalent to (ϕ_2,h_2) and (ψ_1,l_1) is unitarily equivalent to (ψ_2,l_2). If ψ_1 is extendible, then (ψ_1∘ϕ_1,l_1∘ h_1) is unitarily equaivalent to (ψ_2∘ϕ_2,l_2∘ h_2). Indeed, let u∈𝒰(ℳ(B)) and v∈𝒰(ℳ(C)) be such that u h_1^X(x) u^*=h_2^X(x) and v l_1^X(y) v^*=l_2^X(y) for any X∈, x∈ F(X), and y∈ G(X). Then, vψ_1^†(u) l_1^X(h_1^X(x)) (vψ_1^†(u))^*=l_2^X(h_2^X(x)). In general, the vertical composition in the 2-category of correspondences is not well defined outside of the extendible setting. We will finish by recording the following lemma which will be used in Section <ref>. 
Let (ϕ,h):(A,F,J)→ (B,G,I) and (ψ,l):(B,G,I)→ (C,H,K) be two cocycle morphisms which are asymptotically unitarily equivalent to cocycle conjugacies. Then their composition (ψ∘ϕ,l∘ h) is asymptotically unitarily equivalent to a cocycle conjugacy. Suppose that (Φ,H) and (Ψ,L) are cocycle conjugacies such that(Φ,H)≅_u(ϕ,h)(Ψ,L)≅_u(ψ,l).By Lemma <ref>, composition is jointly continuous. Moreover, since Ψ is extendible, (Ψ∘Φ,L∘ H)≅_u(ψ∘ϕ,l∘ h) by Remark <ref>.§ TWO-SIDED ELLIOTT INTERTWINING With our setup we may now use Elliott's abstract framework from <cit.> to show Theorem <ref>.We check the conditions of the second theorem in <cit.>. Our underlying category is the subcategory of _ of separable C^*-algebras, together with extendible cocycle morphisms (see Definition <ref>). In this category, Lemma <ref> gives us a notion of inner automorphisms in the sense of <cit.>. Moreover, the topology of the morphism spaces in this category, as in Lemma <ref>, is induced by a complete metric. Indeed, let K_n be an increasing sequence of finite sets containing 1_ such that ⋃_n∈ K_n=(𝒞). For any X∈(), choose a sequence of contractions μ_n^X which is dense in the unit ball of F(X). Then, the assignment((ϕ,h),(ψ,l))↦∑_n=1^∞2^-nmax_X∈ K_nmax_m≤ nh^X(μ_m^X)-l^X(μ_m^X).yields a complete metric recovering the topology on the space of morphisms. Furthermore, composition with an inner automorphism is isometric. Since the composition of cocycle morphisms is jointly continuous by Lemma <ref>, all the conditions of the second theorem in <cit.> are satisfied. Hence, the conclusion follows.However, we also decide to give the technical argument in full, picking up some more general results in the process. We start by introducing the setup for performing approximate intertwining arguments in the spirit of <cit.>. Letbe a semisimple -tensor category with countably many isomorphism classes of simple objects. Let (F_n,J^(n)): ↷ A_n and (G_n,I^(n)): ↷ B_n be sequences of actions on separable -algebras. Let(ϕ_n,{h_n^X}_X∈): (A_n,F_n,J^(n)) → (A_n+1,F_n+1,J^(n+1))and(ψ_n,{l_n^X}_X∈): (B_n,G_n,I^(n)) → (B_n+1,G_n+1,I^(n+1))be sequences of cocycle morphisms which we view as two inductive systems in the category _.[As in previous sections, we will denote these morphisms by (ϕ_n,h_n) and (ψ_n,l_n) for ease of notation.]Consider two sequences of cocycle morphisms (κ_n,{r_n^X}_X∈): (B_n,G_n,I^(n)) → (A_n,F_n,J^(n))and (θ_n,{s_n^X}_X∈): (A_n,F_n,J^(n)) → (B_n+1,G_n+1,I^(n+1))fitting into the not necessarily family of commutative diagrams…[rr]F_n(X) [rd]^s_n^X[rr]^h_n^X F_n+1(X) [r] [rd]… …[r] G_n(X) [ru]^r_n^X[rr]^l_n^X G_n+1(X) [ru]^r_n+1^X[rr] … . We will call the collection of diagrams (<ref>) an approximate cocycle intertwining, if the following hold:There exist an increasing sequence of finite sets K_n⊂() containing 1_, finite sets ℱ_n^X⊂ F_n(X) and 𝒢_n^X⊂ G_n(X) for any X∈ K_n, and numbers δ_n>0 satisfying* l_n^X(x)-s_n^X(r_n^X(x))≤δ_n for all X∈ K_n and all x∈𝒢_n^X; * h_n^X(x)-r_n+1^X(s_n^X(x))≤δ_nfor all X∈ K_n and all x∈ℱ_n^X;* h_n^X(ℱ_n^X)⊆ℱ_n+1^X, l_n^X(𝒢_n^X)⊆𝒢_n+1^X, r_n^X(𝒢_n^X)⊆ℱ_n^X, ands_n^X(ℱ_n^X)⊆𝒢_n+1^X; * ⋃_m>n (h_n,m^X)^-1(ℱ_m^X)⊂ F_n(X) and ⋃_m>n (l_n,m^X)^-1(𝒢_m^X)⊂ G_n(X) are dense for all X∈ K_n and all n; * ⋃_n∈ℕK_n=();* ∑_n∈ℕδ_n <∞. The conditions listed above are in the spirit of Elliott approximate intertwining arguments, as seen for example in <cit.> or <cit.>. 
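Let us comment briefly on the role of these conditions. The first two approximation conditions, combined with the fact that the maps involved are contractions, only ensure that consecutive stages of the diagrams differ by roughly δ_m+δ_{m+1} on the prescribed finite sets; it is the summability of the sequence (δ_n)_n that makes the resulting telescoping estimates finite, and hence the relevant sequences Cauchy. In applications one can typically arrange δ_n=2^-n, as is done later in this section.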
Note that applying X=1_ in conditions <ref>-<ref> above, together with the <ref> recover the usual assumptions in the two-sided Elliott intertwining argument. However, to boost the Elliott intertwining in the equivariant setting, one needs to preserve the equivariant structure. This is why conditions <ref>-<ref> above are phrased for irreducible elements in the category. These conditions resemble the assumptions needed in <cit.> in the case of a twisted group action, although recall that the topology on cocycle morphisms is different.Moreover, <ref>-<ref> encode that the diagrams in <ref> approximately commute in the topology described in Lemma <ref>. Furthermore, in <ref> we are making crucial use of our definition of the functors F_n and G_n to assume that F_n(X) and G_n(X) have a countable dense subset for any X∈ and any n∈ℕ.Note that, in the light of Remark <ref>, it is enough to state the conditions above for X∈(). This will uniquely determine the family of linear maps {h^X}_X∈.Let (A,F,J) and (B,G,I) be inductive limits in _ given by(A,F,J)=lim_⟶{(A_n,F_n,J^(n)),(ϕ_n,{h_n^X}_X∈)}and(B,G,I)=lim_⟶{(B_n,G_n,I^(n)),(ψ_n,{l_n^X}_X∈)}.Let(κ_n,{r_n^X}_X∈): (B_n,G_n,I^(n)) → (A_n,F_n,J^(n))and (θ_n,{s_n^X}_X∈): (A_n,F_n,J^(n)) → (B_n+1,G_n+1,I^(n+1))be sequences of cocycle morphisms making (<ref>) an approximate cocycle intertwining.Then there exist mutually inverse cocycle conjugacies (θ,{s^X}_X∈):(A,F,J)→ (B,G,I) and (κ,{r^X}_X∈):(B,G,I)→ (A,F,J) given by the formulae θ(ϕ_n,∞(a)) = lim_k→∞ (ψ_k+1,∞∘θ_k∘ϕ_n,k)(a), a∈ A_ns^X(h_n,∞^X(x))=lim_k→∞(l_k+1,∞^X∘ s_k^X∘ h_n,k^X)(x), X∈, x∈ F_n(X)andκ(ψ_n,∞(b)) = lim_k→∞ (ϕ_k,∞∘κ_k∘ψ_n,k)(b), b∈ B_n,r^X(l_n,∞^X(x))=lim_k→∞(h_k,∞^X∘ r_k^X∘ l_n,k^X)(x), X∈, x∈ G_n(X).The limits in the formulae (<ref>) and (<ref>) are taken in the topologies induced by the respective right inner products.To deduce the formulae above, one can work backwards. If the cocycle conjugacy θ was to exist, the fact that A and B are inductive limits would force the diagramsA_k [swap]dθ_krϕ_k,∞AdθB_k+1rψ_k+1,∞Bto commute for all k∈ℕ. This immediately gives (<ref>). Considering the associated families of linear maps for the cocycle morphisms in (<ref>), we would need a similar diagram involving the linear maps to commute, which gives (<ref>). The formulae in (<ref>) and (<ref>) can be obtained in exactly the same fashion.We will show that the limits in (<ref>) and (<ref>) exist and that the pair (θ,{s^X}_X∈) is a cocycle morphism. Firstly, note that the limit in (<ref>) exists and θ:A→ B is a ^*-homomorphism by the general Elliott intertwining, as can be found for example in <cit.>.We first show that the limit in (<ref>) exists for all X∈(). We can employ a similar argument to the one in <cit.>. Let X∈ K_n and given x∈ F_n(X), by condition <ref> in Definition <ref>, we may assume x is contained in ⋃_m>n (h_n,m^X)^-1(ℱ_m^X). Moreover, we may assume that h_n,m^X(x)∈ℱ_m^X for all m greater than or equal to some m_0. 
Consider the possibly non-commutative diagram F_n(X)[r]^h_n,m^XF_m(X) [d]_s_m^X[r]^h_m^XF_m+1(X) [d]^s_m+1^X G_m+1(X) [ru]^r_m+1^X[r]_l_m+1^XG_m+2(X) [r]_ l_m+2,∞^XG(X).Using triangle inequality, it follows that (s_m+1^X∘ h_m^X)(x)-(l_m+1^X∘ s_m^X)(x) ≤s_m+1^X(h_m^X(x)-(r_m+1^X∘ s_m^X)(x))+(s_m+1^X∘ r_m+1^X-l_m+1^X)(s_m^X(x)).Also, by <ref> s_m^X(ℱ_m^X)⊂𝒢_m+1^X, so we may combine <ref>, <ref>, and that s_m+1^X is contractive to get that(s_m+1^X∘ h_m^X)(x)-(l_m+1^X∘ s_m^X)(x)≤δ_m+δ_m+1, ∀ x∈ℱ_m^X⊆ F_m(X).Moreover, since l_m+2,∞^X is contractive, it follows that(l_m+2,∞^X∘ s_m+1^X∘ h_n,m+1^X)(x)-(l_m+1,∞^X∘ s_m^X∘ h_n,m^X)(x)=l_m+2,∞^X((s_m+1^X∘ h_m^X)(h_n,m^X(x))-(l_m+1^X∘ s_m^X)(h_n,m^X(x)))<δ_m+δ_m+1,for all m≥ m_0. Using <ref>, the sequence in (<ref>) is Cauchy and therefore convergent. Moreover <ref> yields that s^X:F(X)→ G(X) is a well-defined linear map for all X∈().We claim that (θ,{s^X}_X∈()) induces a cocycle morphism. For this, we check the conditions in Lemma <ref>. For any n∈ℕ, X∈(), x_n∈ F_n(X), and any a_n,a_n'∈ A_n, as (ϕ_n,∞,h_n,∞) is a cocycle morphism, it follows thats^X(ϕ_n,∞(a_n) h_n,∞^X(x_n)ϕ_n,∞(a_n'))=s^X(h_n,∞^X(a_n x_na_n')).Since the limit in (<ref>) exists and l_k+1,∞^X, s_k^X, h_n,k^X are cocycle morphisms, a direct calculation shows thats^X(h_n,∞^X(a_n x_na_n'))=θ(ϕ_n,∞(a_n)) s^X(h_n,∞^X(x_n))θ(ϕ_n,∞(a_n')). Given X,Y∈(), consider a morphism f∈(X→ Y). We want to show that G(f)∘ s^X=s^Y∘ F(f). Since l_k+1,∞∘ s_k∘ h_n,k is a cocycle morphism, one has thatG(f)∘ l_k+1,∞^X∘ s_k^X∘ h_n,k^X= l_k+1,∞^Y∘ s_k^Y∘ h_n,k^Y∘ F_n(f).Taking the limit as k goes to infinity we see thatG(f)∘ s^X∘ h_n,∞^X=s^Y∘ h_n,∞^Y∘ F_n(f).Using the equalityh_n,∞^Y∘ F_n(f)= F(f)∘ h_n,∞^Xand that ⋃_n≥ 1h_n,∞^X(F_n(X)) is dense in F(X), we get thatG(f)∘ s^X=s^Y∘ F(f). Next, we want to show that θ(⟨ x,y⟩_A)=⟨ s^X(x),s^X(y)⟩_B for any x,y∈ F(X). It suffices to check this condition on the dense subset ⋃_n≥ 1h_n,∞^X(F_n(X)). If x_n,y_n∈ F_n(X) and k>n, as l_k+1,∞^X∘ s_k^X∘ h_n,k^X satisfies condition <ref> of Lemma <ref> one has max width= ⟨(l_k+1,∞^X∘ s_k^X∘ h_n,k^X)(x_n), (l_k+1,∞^X∘ s_k^X∘ h_n,k^X)(y_n)⟩=(ψ_k+1,∞∘θ_k∘ϕ_n,k)(⟨ x_n,y_n⟩).Taking the limit as k goes to infinity and using the formulae in (<ref>) and (<ref>),θ(ϕ_n,∞(⟨ x_n,y_n⟩))=⟨ s^X(h_n,∞^X(x_n)),s^X(h_n,∞^X(y_n))⟩. Moreover, ϕ_n,∞(⟨ x_n,y_n⟩)= ⟨ h_n,∞^X(x_n),h_n,∞^X(y_n)⟩, so θ(⟨ x,y⟩)=⟨ s^X(x),s^X(y)⟩ for any x,y in the dense subset ⋃_n≥ 1h_n,∞^X(F_n(X)).Then, we check <ref> of Lemma <ref>. Let X,Y∈. Since (ψ_k+1,∞∘θ_k∘ϕ_n,k,l_k+1,∞∘ s_k∘ h_n,k) is a cocycle morphism, the diagramF_n(X)⊠ F_n(Y) [swap]dl_k+1,∞^X∘ s_k^X∘ h_n,k^X⊠ l_k+1,∞^Y∘ s_k^Y∘ h_n,k^YrJ_X,Y^(n)F_n(X⊗ Y)dl_k+1,∞^X⊗ Y∘ s_k^X⊗ Y∘ h_n,k^X⊗ YG(X)⊠ G(Y) rI_X,YG(X⊗ Y)commutes. Then, by taking the limit as k goes to infinity, it follows that I_X,Y∘ ((s^X∘ h_n,∞^X)⊠ (s^Y∘ h_n,∞^Y))=s^X⊗ Y∘ h_n,∞^X⊗ Y∘ J_X,Y^(n). Note that by the construction of the map J_X,Y in Lemma <ref>,J_X,Y∘(h_n,∞^X⊠ h_n,∞^Y)=h_n,∞^X⊗ Y∘ J_X,Y^(n).Thus,I_X,Y∘(s^X⊠ s^Y)∘ (h_n,∞^X⊠ h_n,∞^Y)=s^X⊗ Y∘ J_X,Y∘ (h_n,∞^X⊠ h_n,∞^Y).By density it now follows that I_X,Y∘(s^X⊠ s^Y)=s^X⊗ Y∘ J_X,Y. Finally, it follows from (<ref>) that for any a∈ A, s^1_(a)=θ(a). Hence, asis semisimple, (θ,{s^X}_X∈()) induces a cocycle morphism (see Remark <ref>). This follows as the map s^X will be given by a direct sum of linear maps corresponding to irreducible objects. Since same holds for h_n,∞^X, l_k+1,∞^X, s_k^X, and h_n,k^X, and the limit preserves this decomposition, the formula in (<ref>) holds for any X∈. 
It follows in the same way that (κ,{r^X}_X∈) given by the formulae in (<ref>) and (<ref>) yields a well-defined cocycle morphism. Moreover, the fact that θ and κ are mutually inverse isomorphisms follows from <cit.>. Finally, it remains to check that for any X∈, r^X∘ s^X=𝕀_F(X) and s^X∘ r^X=𝕀_G(X). It suffices to show that for any n≥ 1, r^X∘ s^X∘ h_n,∞^X=h_n,∞^X. For any k>n, by (<ref>), we have thatr^X∘ l_k+1,∞^X∘ s_k^X∘ h_n,k^X=lim_m→∞(h_m,∞^X∘ r_m^X∘ l_k+1,m^X∘ s_k^X∘ h_n,k^X).Using conditions <ref> and <ref> of Definition <ref>, it follows that the right hand side coincides with h_n,∞^X. Now, taking the limit as k goes to infinity, we get that r^X∘ s^X∘ h_n,∞^X=h_n,∞^X. Similarly, s^X∘ r^X=𝕀_G(X), so (θ,{s^X}_X∈):(A,F,J)→ (B,G,I) and (κ,{r^X}_X∈):(B,G,I)→ (A,F,J) are mutually inverse cocycle conjugacies. We now use Theorem <ref> to show that if we assume that the diagrams in (<ref>) commute up to approximate unitary equivalence, then there exist mutually inverse cocycle conjugacies as in Theorem <ref>. The proof follows in a similar fashion to <cit.> and <cit.>. Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F_n,J^(n)): ↷ A_n and (G_n,I^(n)): ↷ B_n be sequences of actions on separable -algebras. Let(ϕ_n,{h_n^X}_X∈): (A_n,F_n,J^(n)) → (A_n+1,F_n+1,J^(n+1))and(ψ_n,{l_n^X}_X∈): (B_n,G_n,I^(n)) → (B_n+1,G_n+1,I^(n+1))be sequences of cocycle morphisms, in the sense of Lemma <ref>, which we view as two inductive systems in the category _.Consider two sequences of extendible cocycle morphisms (κ_n,{r_n^X}_X∈): (B_n,G_n,I^(n)) → (A_n,F_n,J^(n))and (θ_n,{s_n^X}_X∈): (A_n,F_n,J^(n)) → (B_n+1,G_n+1,I^(n+1))fitting into the not necessarily commutative collection of diagrams…[rr]F_n(X) [rd]^s_n^X[rr]^h_n^X F_n+1(X) [r] [rd]… …[r] G_n(X) [ru]^r_n^X[rr]^l_n^X G_n+1(X) [ru]^r_n+1^X[rr] … . Suppose that(ψ_n,l_n)≈_u (θ_n, s_n)∘ (κ_n, r_n) and(ϕ_n, h_n)≈_u (κ_n+1, r_n+1)∘ (θ_n, s_n)for all n∈ℕ. Then there exist mutually inverse cocycle conjugacies (θ,{s^X}_X∈):(A,F,J)→ (B,G,I) and (κ,{r^X}_X∈):(B,G,I)→ (A,F,J). Moreover, if ϕ_n and ψ_n are extendible for any n∈ℕ, then(θ,s)∘ (ϕ_n,∞,h_n,∞)≈_u (ψ_n+1,∞,l_n+1,∞)∘ (θ_n,s_n)and(κ,r)∘ (ψ_n,∞,l_n,∞)≈_u (ϕ_n,∞,h_n,∞)∘ (κ_n,r_n).This will follow as an application of Theorem <ref>. For this, it suffices to show that we can obtain a collection of diagrams as in Definition <ref>. The strategy is to replace the families of cocycle morphisms (κ_n,r_n) and (θ_n,s_n) by unitary perturbations (η_n, R_n) and (ζ_n, S_n) respectively such that the diagrams in (<ref>) become an approximate cocycle intertwining.Let δ_n=2^-n for any n∈ℕ and K_n an increasing sequence of finite sets containing 1_ such that ⋃_n∈ℕK_n=(). For any n∈ℕ, and any X∈ K_n, we can choose t_n^X∈ℕ and finite sets {f_m,n^X}_1≤ m≤ t_n^X⊂ F_n(X) and {g_m,n^X}_1≤ m≤ t_n^X⊂ G_n(X) such that the inclusionsmax width=⋃_k>n(h_n,k^X)^-1({f_m,k^X}_1≤ m≤ t_k^X)⊂ F_n(X) ⋃_k>n(l_n,k^X)^-1({g_m,k^X}_1≤ m≤ t_k^X)⊂ G_n(X)are dense for all n∈ℕ. To simplify notation, we will write {f_m,n^X} to denote the set {f_m,n^X}_1≤ m≤ t_n^X.Set (η_1, R_1)=(κ_1,r_1), 𝒢_1^X={g_m,1^X}⊂ G_1(X), and ℱ_1^X={f_m,1^X}∪ r_1^X(𝒢_1^X) for any X∈ K_1. Since(ψ_1,l_1)≈_u (θ_1, s_1)∘ (κ_1, r_1)= (θ_1, s_1)∘ (η_1, R_1),we can find a unitary u_1∈𝒰(ℳ(B_2)) such that if we set (ζ_1, S_1)=(u_1)∘ (θ_1,s_1), one has thatmax_X∈ K_1max_x∈𝒢_1^Xl_1^X(x)-S_1^X(R_1^X(x))≤δ_1. At the next stage, let 𝒢_2^X={g_m,2^X}∪ S_1^X(ℱ_1^X)∪ l_1^X(𝒢_1^X) for any X∈ K_1 and 𝒢_2^X={g_m,2^X} for any X∈ K_2∖ K_1. 
Using the assumption (ϕ_1, h_1)≈_u (κ_2, r_2)∘ (θ_1, s_1), that (θ_1,s_1) is unitarily equivalent to (ζ_1,S_1), and that κ_2 is extendible, it follows that (ϕ_1, h_1)≈_u (κ_2, r_2)∘ (ζ_1, S_1) (see Remark <ref>). Therefore, there exists a unitary v_2∈𝒰(ℳ(A_2)) such that if we set (η_2,R_2)=(v_2)∘ (κ_2,r_2), we have thatmax_X∈ K_1max_x∈ℱ_1^Xh_1^X(x)-R_2^X(S_1^X(x))≤δ_1.Then, we let ℱ_2^X={f_m,2^X}∪ R_2^X(𝒢_2^X)∪ h_1^X(ℱ_1^X) for any X∈ K_1 and ℱ_2^X={f_m,2^X}∪ R_2^X(𝒢_2^X) for any X∈ K_2∖ K_1. We continue inductively. Suppose we have finite sets 𝒢_n^X={g_m,n^X}∪ S_n-1^X(ℱ_n-1^X)∪ l_n-1^X(𝒢_n-1^X)⊂ G_n(X)andℱ_n^X={f_m,n^X}∪ R_n^X(𝒢_n^X)∪ h_n-1^X(ℱ_n-1^X)⊂ F_n(X)for any X∈ K_n-1 and𝒢_n^X={g_m,n^X}⊂ G_n(X)andℱ_n^X={f_m,n^X}∪ R_n^X(𝒢_n^X)⊂ F_n(X)for any X∈ K_n∖ K_n-1.Moreover, let unitaries v_n∈𝒰(ℳ(A_n)) and u_n∈𝒰(ℳ(B_n+1)) such that (η_n,R_n)=(v_n)∘ (κ_n,r_n) and (ζ_n,S_n)=(u_n)∘ (θ_n,s_n). It follows thatmax_X∈ K_nmax_x∈𝒢_n^Xl_n^X(x)-S_n^X(R_n^X(x))≤δ_nandmax_X∈ K_nmax_x∈ℱ_n^Xh_n^X(x)-R_n+1^X(S_n^X(x))≤δ_n. We claim that the diagram …[rr]F_n(X) [rd]^S_n^X[rr]^h_n^X F_n+1(X) [r] [rd]… …[r] G_n(X) [ru]^R_n^X[rr]^l_n^X G_n+1(X) [ru]^R_n+1(X)[rr] … .is an approximate cocycle intertwining in the sense of Definition <ref>. Conditions <ref> and <ref> is ensured by (<ref>) and (<ref>) respectively. Then, condition <ref> follows by (<ref>), (<ref>),(<ref>), and (<ref>). Moreover, condition <ref> follows by (<ref>), (<ref>),(<ref>), (<ref>), and the choice of finite sets {f_m,n^X}, {g_m,n^X} from (<ref>). Then, <ref> is ensured by the choice of finite sets K_n, while <ref> is satisfied because the sum ∑_n2^-n converges.Hence, by Theorem <ref> applied to the family of diagrams in (<ref>), there exists mutually inverse cocycle conjugacies (θ,{s^X}_X∈):(A,F,J)→ (B,G,I) and (κ,{r^X}_X∈):(B,G,I)→ (A,F,J). Moreover, Theorem <ref> also gives thatθ(ϕ_n,∞(a)) = lim_k→∞ (ψ_k+1,∞∘ζ_k∘ϕ_n,k)(a), a∈ A_nands^X(h_n,∞^X(x))=lim_k→∞(l_k+1,∞^X∘ S_k^X∘ h_n,k^X)(x), X∈, x∈ F_n(X). Now, assume that ϕ_n and ψ_n are extendible. Recall that(ψ_m,l_m)≈_u (θ_m, s_m)∘ (κ_m, r_m) and (ϕ_m, h_m)≈_u (κ_m+1, r_m+1)∘ (θ_m, s_m)and (ζ_m,S_m)=(u_m)∘ (θ_m,s_m) for any m∈ℕ. Then, for all n≥ 1 and all k>n we have that [2l (ψ_n+1,k+1,l_n+1,k+1)∘ (θ_n,s_n);≈_u (θ_k,s_k)∘(κ_k,r_k)∘(θ_k-1,s_k-1)∘…∘(κ_n+1,r_n+1)∘ (θ_n,s_n);≈_u (θ_k,s_k)∘ (ϕ_n,k,h_n,k);≈_u(ζ_k,S_k)∘ (ϕ_n,k,h_n,k). ]Composing with (ψ_k+1,∞,l_k+1,∞) yields that(ψ_n+1,∞,l_n+1,∞)∘ (θ_n,s_n)≈_u (ψ_k+1,∞,l_k+1,∞)∘ (ζ_k, S_k)∘(ϕ_n,k,h_n,k).Then, by (<ref>) and (<ref>), one gets that(θ,s)∘ (ϕ_n,∞,h_n,∞)≈_u(ψ_n+1,∞,l_n+1,∞)∘ (θ_n,s_n). The fact that(κ,r)∘ (ψ_n,∞,l_n,∞)≈_u (ϕ_n,∞,h_n,∞)∘ (κ_n,r_n)follows analogously. As a corollary of Theorem <ref>, we obtain the following result. Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F,J): ↷ A and (G,I): ↷ B be actions on separable -algebras. Let(ϕ, h): (A,F,J) → (B,G,I)and(ψ, l): (B,G,I) → (A,F,J)be two extendible cocycle morphisms such that𝕀_A≈_u (ψ,l)∘ (ϕ, h) and 𝕀_B≈_u (ϕ,h)∘ (ψ, l).Then there exist mutually inverse cocycle conjugacies(Φ, H): (A,F,J) → (B,G,I)and(Ψ, L): (B,G,I) → (A,F,J)such that(Φ, H)≈_u (ϕ,h)and(Ψ, L)≈_u (ψ,l). This is a direct application of Theorem <ref> with A_n=A, B_n=B, ϕ_n=𝕀_A, ψ_n=𝕀_B, k_n=ψ, and θ_n=ϕ for all n∈ℕ.§.§ Intertwining through reparametrisation Intertwining through reparametrisation is a type of intertwining argument commonly employed in the classification programme of -algebras and -dynamics. 
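The elementary observation behind this technique is that the constant sequences in a sequence algebra are fixed by every reparametrisation of the index set, so a morphism that factors through the constant sequences is automatically invariant under reparametrisations.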
In broad terms, if we want to prove that a morphism θ:A→ B_∞ is unitarily equivalent to a morphism ψ:A→ B_∞ which factors through B, then it suffices to check that θ is invariant under reparametrisations. This type of result appears for example in <cit.> and it is used in successful classification results in <cit.>.If A is a separable C^*-algebra and η: ℕ→ℕ is any map with lim_n→∞η(n)=∞, then it induces an endomorphism η^* on A_∞ via η^*((a_n)_n)=(a_η(n))_n. Moreover, ifacts on A by the triple (A,F,J), recall from Lemma <ref> that there exists an induced action on A_∞ by the triple (A_∞, F_∞, J^∞). Then, a straightforward checking of the conditions in Lemma <ref> shows that η induces a cocycle morphism (η^*, r):(A_∞, F_∞, J^∞)→ (A_∞, F_∞, J^∞), where r^X:F_∞(X)→ F_∞(X) is given by r^X((ξ_n)_n)=((ξ_η(n))_n) for any X∈ and any (ξ_n)_n∈ F_∞(X). Using this construction, we will prove an intertwining argument concerning maps into sequence algebras. First we need a preparatory lemma. Let (F,J): ↷ A and (G,I): ↷ B be actions on separable -algebras with B unital. Suppose that (ϕ,h):(A,F,J)→ (B_∞, G_∞, I^∞) is a cocycle morphism such that for any map η: ℕ→ℕ with lim_n→∞η(n)=∞, the cocycle morphisms (ϕ,h) and (η^*,r)∘(ϕ,h) are approximately unitarily equivalent. For each X∈, let (h_n^X)_n^ be any lift of h^X. Then, for every finite set K⊆() containing 1_, finite sets ℱ^X⊆ F(X) for X∈ K, ϵ>0, and m∈ℕ, there is an integer k≥ m such that for every integer n≥ k there is a unitary u∈ B for whichu h_n^X(x) u^* - h_k^X(x)<ϵ,X∈ K,x∈ℱ^X. We prove this by contradiction. Suppose that there exists a finite set K⊆() containing 1_, finite sets ℱ^X⊆ F(X) for X∈ K, ϵ>0, and m∈ℕ such that for every k≥ m, there exists n_k≥ k for which max_X∈ K, x∈ℱ^Xu_k h_n_k^X(x) u_k-h_k^X(x)≥ϵ,for every unitary u_k∈ B. Let η:ℕ→ℕ be the map η(k)=n_k whenever k≥ m and η(k)=1 for k<m. As η_k≥ k for k≥ m, it follows that lim_k→∞η(k)=∞. Moreover, (ϕ,h) and (η^*,r)∘(ϕ,h) are approximately unitarily equivalent, so there exists a unitary u∈ B_∞ for whichu r^X(h^X(x)) u^* - h^X(x)<ϵ,X∈ K,x∈ℱ^X.If we let (u_k)_k≥ 1 to be a sequence of unitaries lifting u, thenlim sup_k→∞u_k h_n_k^X(x) u_k^*-h_k^X(x)<ϵfor all X∈ K and x∈ℱ^X. But this contradicts (<ref>), so we reach the conclusion.Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F,J): ↷ A and (G,I): ↷ B be actions on separable -algebras with B unital and suppose that (ϕ,h):(A,F,J)→ (B_∞, G_∞, I^∞) is a cocycle morphism. Then the following are equivalent:* (ϕ,h) is unitarily equivalent to a cocycle morphism (ψ,l):(A,F,J)→ (B,G,I);[We view ψ as a cocycle morphism into B_∞, but identifying B with the constant sequences in B_∞.]* for any map η: ℕ→ℕ with lim_n→∞η(n)=∞, the cocycle morphisms (ϕ,h) and (η^*,r)∘(ϕ,h) are approximately unitarily equivalent. Let us first show that (i) implies (ii). Suppose that u∈ B_∞ is a unitary such that ((u),h_u)∘ (ψ,l)=(ϕ,h).[Recall the definition of the family of linear maps h_u from Lemma <ref>.] Let η: ℕ→ℕ be any map with lim_n→∞η(n)=∞. Since r^X∘ l^X=l^X for any X∈, it follows thatr^X∘ h^X= r^X∘ h_u^X∘ l^X= h_η^*(u)^X∘ r^X∘ l^X= h_η^*(u)u^*^X∘ h^X.Since this holds for any X∈, we get that((η^*(u)u^*), h_η^*(u)u^*)∘ (ϕ, h)= (η^*,r)∘ (ϕ, h).Hence, (ϕ,h) and (η^*,r)∘(ϕ,h) are unitarily equivalent. Suppose now that (ϕ,h) and (η^*,r)∘(ϕ,h) are approximately unitarily equivalent for any η:→ such that η(n) converges to infinity. 
Let K_n⊆() be an increasing sequence of finite sets containing 1_ such that ⋃_n=1^∞K_n=(), and ℱ_n^X⊆ F(X) be an increasing sequence of finite sets such that ⋃_n=1^∞ℱ_n^X is dense in F(X) for any X∈().Recursively applying Lemma <ref> to K=K_n, ℱ^X=ℱ_n^X, ϵ= 1/2^n, one may pick k_0=1<k_1<k_2<… and unitaries u_n∈ B for n∈ such thatu_n h_k_n^X(x) u_n^*-h_k_n-1^X(x)<1/2^n,X∈ K_n,x∈ℱ_n^X. Let v_n=u_nu_n-1… u_1, we claim that l^X(x):=lim_n→∞v_n h_k_n^X(x) v_n^* is well-defined for any X∈() and any x∈ F(X). It requires to show that the sequence (v_n h_k_n^X(x) v_n^*)_n≥ 1 is Cauchy for any X∈() and any x∈ F(X). Given ϵ>0, n∈ℕ, X∈ K_n, and x∈ F(X), there exists m∈ℕ and y∈ℱ_m^X such that x-y<ϵ/3. Then, for j>l≥ m, we may use (<ref>) repeatedly, together with triangle inequality, to achieve thatv_j h_k_j^X(y) v_j^*-v_l h_k_l^X(y) v_l^* ≤∑_i=l+1^j 2^-i <ϵ/3for any l greater than some large enough N. Moreover, since h_k_l^X is a contractive linear map for any l∈ℕ and any X∈(), it follows that h_k_l^X(x)-h_k_l^X(y)<ϵ/3 Therefore,v_j h_k_j^X(x) v_j^*-v_l h_k_l^X(x) v_l^*<ϵ,for all l>N, so the sequence is Cauchy. Thus, l^X:F(X)→ G(X) is a well-defined map for every X∈() and it is linear.We let ψ:A→ B be the linear map given by ψ=l^1_. If v∈ B_∞ is the unitary given by (v_n)_n and η:→ is the map η(n)=k_n, we have that(ψ,l^X)=((v),h_v^X)∘ (η^*,r^X)∘ (ϕ,h^X)for any X∈(). Hence, ψ is a ^*-homomorphism and the pair (ψ,{l^X}_X∈()) induces a cocycle morphism. Extending by linearity, we can define l^X in the same way for any X∈ and it is straightforward to see that these maps are linear. Moreover, (ψ, l) is approximately unitarily equivalent to (ϕ,h), which finishes the proof. § ONE SIDED INTERTWINING ARGUMENTS We start this section by showing a tensor category equivariant adaptation of the classical one-sided intertwining argument (see <cit.>). A group equivariant version of this result can be found in <cit.>. We end this section by proving an asymptotic Elliott two-sided intertwining (see Theorem <ref>).Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F,J):↷ A and (G,I):↷ B be actions on separable C^*-algebras and (φ,h):(A,F,J)→ (B,G,I) an injective cocycle morphism. Then (φ,h) is asymptotically unitarily equivalent to a cocycle conjugacy if and only if: For all ε>0 and finite sets K⊂() containing 1_, ^X⊂ F(X) and ^X⊂ G(X) there exists a strictly continuous path z:[0,1]→((B)) with z_0=1 such that * sup_0≤ t≤ 1z_t h^X(x) z_t^*-h^X(x)≤ε for all X∈ K, x∈^X,* max_X∈ K(z_1^* y z_1,h^X(F(X)))≤ε for all y∈^X.The proof of this Theorem will follow the proof of <cit.> closely. We start by showing the “only if" statement. Let (Φ,H) be a cocycle conjugacy such that (Φ,H) (φ,h). By Lemma <ref> there exists a strictly continuous map w:[0,∞)→((B)) such thatH^X(x)=lim_t→∞w_t h^X(x) w_t^*for all X∈() and x∈ F(X). Therefore, for any finite sets K⊂() containing 1_, ^X⊂ F(X), and ^X⊂ G(X) we may choose n_1≥ 1 sufficiently large such that sup_t≥ n_1H^X(x)-w_t h^X(x) w_t^*≤ε/2,X∈ K, x∈^X.Similarly, one may pick n_2>n_1 such thatsup_t≥ n_2H^X(x)-w_t h^X(x) w_t^*≤ε,X∈ K,x∈ (H^X)^-1(w_n_1^X w_n_1^*).We claim that the unitary path z_t=w_n_1^*w_(1-t)n_1+tn_2 satisfies <ref> and <ref>. Indeed, it is a strictly continuous path z:[0,1]→((B)) with z_0=1. 
Moreover, for X∈ K, t∈ [0,1] and x∈^X, using (<ref>) we have thath^X(x)-z_th^X(x) z_t^*=h^X(x)-w_n_1^*w_(1-t)n_1+tn_2 h^X(x) w_(1-t)n_1+tn_2^*w_n_1=w_n_1 h^X(x) w_n_1^*-w_(1-t)n_1+tn_2 h^X(x) w_(1-t)n_1+tn_2^*≤w_n_1 h^X(x) w_n_1^*-H^X(x) +H^X(x)-w_(1-t)n_1+tn_2 h^X(x) w_(1-t)n_1+tn_2^*≤ε.This shows condition <ref>. We now turn to condition <ref>. Let X∈ K, y∈^X, and x=(H^X)^-1(w_n_1 y w_n_1^*). Then, we get thatz_1^* y z_1-h^X(x)= w_n_2^*(w_n_1 y w_n_1^*) w_n_2-h^X(x)= H^X(H^X)^-1(w_n_1 y w_n_1^*)-w_n_2 h^X(x) w_n_2^*=H^X(x)-w_n_2 h^X(x) w_n_2^*(<ref>)≤ϵ.As x∈ F(X), condition <ref> holds. We now turn to the if direction. To prove this statement we will use <ref>-<ref> to construct a path of unitaries v_t∈ U(M(B)) for t∈[0,∞) such that ((v_t),h_v_t)∘ (φ,h) converges to a cocycle conjugacy.Let {x_n^X}_n∈ℕ⊂ F(X), {y_n^X}_n∈ℕ⊂ G(X) be countable dense subsets for any X∈() and K_n increasing finite subsets of (𝒞) containing 1_ such that (𝒞)=⋃_n∈ K_n. Firstly, use <ref>-<ref> to find x_1,1^X∈ F(X) for X∈ K_1 and z^(1):[0,1]→((B)) such that z_0^(1)=1 and for 0≤ t≤ 1* z_t^(1) h^X(x_1^X) (z_t^(1))^*-h^X(x_1^X)≤ 1/2 for X∈ K_1,* (z_1^(1))^* y_1^X z_1^(1)-h^X(x_1,1^X)≤ 1/2 for X∈ K_1.Again use <ref>-<ref> to find x_2,1^X, x_2,2^X in F(X) for X∈ K_2, z^(2):[0,1]→((B)) such that z^(2)_0=1 and for every 0≤ t≤ 1 * z_t^(2) h^X(x_j^X) (z_t^(2))^*-h^X(x_j^X)≤ 1/4 for X∈ K_2 and 1≤ j ≤ 2, * z_t^(2) h^X(x_1,1^X) (z_t^(2))^*-h^X(x_1,1^X)≤ 1/4 for X∈ K_2, * (z_1^(2))^*((z_1^(1))^* y_j^X z_1^(1)) z_1^(2)-h^X(x_2,j^X)≤ 1/4 for X∈ K_2 and 1≤ j≤ 2.Now suppose you have z^(k):[0,1]→((B)) for 1≤ k≤ n with z_0^(k)=1 and x_m,j∈ F(X) for X∈ K_m with 1≤ j≤ m ≤ n such that for any t∈[0,1] * z_t^(n) h^X(x_j^X) (z_t^(n))^*-h^X(x_j^X)≤ 2^-n for X∈ K_n and 1≤ j ≤ n, * z_t^(n) h^X(x_m,j^X) (z_t^(n))^*-h^X(x_m,j^X)≤ 2^-n for X∈ K_n and 1≤ j ≤ m<n, * (z_1^(n))^*…(z_1^(1))^* y_j^X z_1^(1)… z_1^(n)-h^X(x_n,j^X)≤ 2^-n for X∈ K_n and 1≤ j≤ n.Then use <ref>-<ref> to get {x_n+1,j^X}_j≤ n+1∈ F(X) for X∈ K_n+1 and z^(n+1):[0,1]→((B)) such that for all t∈[0,1] * z_t^(n+1) h^X(x_j^X) (z_t^(n+1))^*-h^X(x_j^X)≤ 2^-(n+1) for X∈ K_n+1 and 1≤ j ≤ n+1, * z_t^(n+1) h^X(x_m,j^X) (z_t^(n+1))^*-h^X(x_m,j^X)≤ 2^-(n+1) for X∈ K_n+1 and 1≤ j ≤ m<n+1, * (z_1^(n+1))^*…(z_1^(1))^* y_j^X z_1^(1)… z_1^(n+1)-h^X(x_n+1,j^X)≤ 2^-(n+1) for X∈ K_n+1 and 1≤ j≤ n+1.We carry on inductively to construct z^(n) and x_m,j for every n,m∈ and j≤ m satisfying <ref>-<ref>. We may now define the path v_t:[0,∞)→((B)) by v_t=z_1^(1)… z_1^(n)z_t-n^(n+1) for every t∈ [n,n+1]. This path is norm continuous on every open interval (n,n+1) for n∈ as the paths z^(k) are norm continuous for each k∈. Moreover, the path v_t is also norm continuous at each n∈ as z_0^(k)=1 for every k∈. Adjoining by v_t we obtain a continuous path of cocycle morphisms ((v_t),h_v_t)∘(φ,h)=(ψ_t,h_t) where ψ_t=(v_t)∘φ and h_t^X=h_v_t^X∘ h^X for any t∈[0,∞) and X∈(). We claim that as t→∞ the path ψ_t converges to an isomorphism Ψ and that the path h_t^X converges to a bijective linear map H^X for all X∈() such that the pair (Ψ,H) induces a cocycle morphism (recall Remark <ref>). For any X∈() and j∈ the net (h_t^X(x_j^X))_t≥ 0 is Cauchy by <ref>. Since the set {x_j^X}_j∈ is dense in F(X) for any X∈() a standard triangle inequality argument shows that the net (h_t^X(x))_t≥ 0 converges for any x∈ F(X) and X∈(). We may hence define H^X(x)=lim_t→∞h_t^X(x) for all x∈ F(X) and X∈(). The maps H^X:F(X)→ G(X) are linear as inherited by the linearity of h_t^X. 
Letting X=1_ the map h^1_ coincides with φ and we get that Ψ(a):=lim_t→∞(v_t)∘φ(a)=lim_t→∞ψ_t(a) exists for all a∈ A. As each ψ_t is a ^*-homomorphism, then so is Ψ. In light of Remark <ref>, it suffices to check that the family of maps {H^X}_X∈() satisfies the conditions <ref>-<ref> of Lemma <ref> to conclude that (Ψ, H) is a cocycle morphism. Condition <ref> follows by construction, whereas conditions <ref>-<ref> are easily verified as (ψ_t,h_t) is a cocycle morphism for every t∈ [0,∞) and one may approximate Ψ pointwise by ψ_t and H pointwise by h_t for some large enough t.It remains to check that Φ is bijective and H^X is bijective for every X∈. By Remark <ref> and by taking X=1_, it suffices to check that H^X is bijective for every X∈(). Firstly, each H^X is isometric. To see this, note that Ψ is an injective ^*-homomorphism and hence isometric, so as (Ψ,H) is a cocycle morphism H^X(x)^2 =⟨ H^X(x),H^X(x)⟩_B=Ψ(⟨ x,x⟩_A)=⟨ x,x⟩_A=x^2for any X∈ and x∈ F(X). Therefore H^X is injective for every X∈. We now turn to surjectivity. As H^X are isometric, it suffices to show that H^X have dense image. Fix X∈() and j∈. There is a large enough n_0∈ satisfying n_0>j and X∈ K_n for all n>n_0. So by <ref> h_n^X(x_n,j^X)-y_j^X≤ 2^-n for any n≥ n_0. Moreover, by <ref>H^X(x_n,j^X)-h_n^X(x_n,j) ≤∑_k=n^∞h_k+1^X(x_n,j^X)-h_k^X(x_n,j^X)≤∑_k=n^∞z_1^(k+1) h^X(x_n,j^X) (z_1^(k+1))^*-h^X(x_n,j^X)≤∑_k=n^∞ 2^-(k+1)≤ 2^1-n,for any n≥ n_0. So y_j^X-H^X(x_n,j^X)≤ 2^1-n+2^-n. Therefore, as n may be chosen arbitrarily and {y_j^X}_j∈ℕ is dense in G(X), it follows that H^X is surjective. Theorem <ref> also holds in the setting of approximate unitary equivalence by replacing the path of unitaries with a single unitary and dropping the assumption that z_0=1.Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F,J):↷ A and (G,I):↷ B be actions on separable C^*-algebras and (φ,h):(A,F,J)→ (B,G,I) an injective cocycle morphism. Then (φ,h) is approximately unitarily equivalent to a cocycle conjugacy if and only if: For all ε>0 and finite sets K⊂() containing 1_, ^X⊂ F(X) and ^X⊂ G(X) there exists a unitary z∈ℳ(B) such that * max_X∈ Kz h^X(x) z^*-h^X(x)≤ε for all x∈^X,* max_X∈ K(z^* y z,h^X(F(X)))≤ε for all y∈^X.Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F_n,J^(n)): ↷ A_n be asequence of actions on separable -algebras. Let(ϕ_n,h_n): (A_n,F_n,J^(n)) → (A_n+1,F_n+1,J^(n+1))be a sequence of injective and extendible cocycle morphisms with inductive limit (A,F,J)=lim_⟶(A_n,F_n,J^(n)). Suppose that for every n≥ 1, (ϕ_n,h_n) is asymptotically unitarily equivalent to a cocycle conjugacy. Then it follows that(ϕ_1,∞,h_1,∞):(A_1,F_1,J^(1))→ (A,F,J)is asymptotically unitarily equivalent to a cocycle conjugacy.Let ε>0 and finite sets K⊂() containing 1_, _1^X⊂ F_1(X), ^X⊂ F(X). We will check the conditions in Theorem <ref> for (ϕ_1,∞,h_1,∞). Perturbing ^X by an arbitarily small tolerance, we may assume that there exists n≥ 1 large enough and a finite set _n^X⊂ F_n(X) such that ^X=h_n,∞^X(_n^X) for any X∈ K.By Lemma <ref>, (ϕ_1,n,h_1,n) is asymptotically unitarily equivalent to a cocycle conjugacy. Then, by Theorem <ref>, there exists a unitary path y:[0,1]→𝒰(ℳ(A_n)) such that* sup_0≤ t≤ 1y_t h_1,n^X(μ) y_t^*-h_1,n^X(μ)≤ε for any X∈ K, μ∈_1^X,* max_X∈ K(y_1^*η y_1, h_1,n^X(F_1(X)))≤ε for any η∈_n^X . Let z_t=ϕ_n,∞^†(y_t) be a unitary in ℳ(A) for any t∈[0,1], where ϕ_n,∞^†:ℳ(A_n)→ℳ(A) is the extension of ϕ_n,∞ from Definition <ref>. 
Now we claim that the path of unitaries z_t satisfies the conditions of Theorem <ref> for (ϕ_1,∞,h_1,∞). By <ref> and <ref> of Lemma <ref> (see also Remark <ref>) we get thatz_t h_1,∞^X(μ) z_t^*-h_1,∞^X(μ)≤z_t h_1,n^X(μ) z_t^*-h_1,n^X(μ)≤εfor any X∈ K, μ∈_1^X, and any t∈[0,1]. Moreover, recall that ^X=h_n,∞^X(_n^X) and(y_1^*η y_1, h_1,n^X(F_1(X)))≤εfor any X∈ K and any η∈_n^X. Hence, by applying h_n,∞^X to the formula above, it follows that for all ξ∈^X,max_X∈ K(z_1^*ξ z_1, h_1,∞^X(F_1(X)))≤ε.Therefore, (ϕ_1,∞,h_1,∞) is asymptotically unitarily equivalent to a cocycle conjugacy by Theorem <ref>. The following result can be seen as the asymptotic version of Corollary <ref>.Letbe a semisimple C^*-tensor category with countably many isomorphism classes of simple objects. Let (F,J): ↷ A and (G,I): ↷ B be actions on separable -algebras. Let(ϕ, h): (A,F,J) → (B,G,I)and(ψ, l): (B,G,I) → (A,F,J)be two extendible cocycle morphisms such that𝕀_A≅_u (ψ,l)∘ (ϕ, h) and 𝕀_B≅_u (ϕ,h)∘ (ψ, l).Then there exist mutually inverse cocycle conjugacies(Φ, H): (A,F,J) → (B,G,I)and(Ψ, L): (B,G,I) → (A,F,J)such that(Φ, H)≅_u (ϕ,h)and(Ψ, L)≅_u (ψ,l). Consider the cocycle morphisms(κ,r)= (ψ,l)∘ (ϕ,h) : (A,F,J)→ (A,F,J)and(θ,s) = (ϕ,h)∘ (ψ,l) :(B,G,I)→ (B,G,I)fitting into the family of commuting diagrams…[rr]F(X) [rd]^h^X[rr]^r^X F(X) [r] [rd]^h^X … …[r] G(X) [ru]^l^X[rr]^s^X G(X) [ru]^l^X[rr]^s^X… . We can then form the inductive limits(A^(∞),F^(∞),J^(∞))=lim_⟶{(A,F,J),(κ,r)}and(B^(∞),G^(∞),I^(∞))=lim_⟶{(B,G,I),(θ,s)}in the category _. Consider the universal embeddings(κ_∞,r_∞):(A,F,J)→ (A^(∞),F^(∞),J^(∞))and(θ_∞,s_∞):(B,G,I)→ (B^(∞),G^(∞),I^(∞)).Since the collection of diagrams in (<ref>) commutes, the universal properties of both inductive limits yield that there exist mutually inverse cocycle conjugacies(ϕ_∞,h_∞): (A^(∞),F^(∞),J^(∞))→ (B^(∞),G^(∞),I^(∞))and(ψ_∞, l_∞): (B^(∞),G^(∞),I^(∞))→ (A^(∞),F^(∞),J^(∞))such that (ϕ_∞,h_∞)∘ (κ_∞,r_∞)=(θ_∞,s_∞)∘ (ϕ,h) and (ψ_∞, l_∞)∘ (θ_∞,s_∞) =(κ_∞,r_∞) ∘ (ψ,l).Furthermore, the cocycle morphism (κ,r) is asymptotically unitarily equivalent to the cocycle morphism induced by the identity map on A, so Lemma <ref> gives that (κ_∞,r_∞) is asymptotically unitarily equivalent to a cocycle conjugacy (K, R) : (A,F,J)→ (A^(∞),F^(∞),J^(∞)). Likewise, (θ_∞,s_∞) is asymptotically unitarily equivalent to a cocycle conjugacy (Θ, S):(B,G,I)→ (B^(∞),G^(∞),I^(∞)).Then, taking(Φ, H)=(Θ, S)^-1∘ (ϕ_∞, h_∞)∘(K, R)yields that (Φ, H)≅_u (Θ, S)^-1∘ (ϕ_∞, h_∞)∘ (κ_∞,r_∞) = (Θ, S)^-1∘ (θ_∞,s_∞)∘ (ϕ,h) ≅_u (ϕ,h).Similarly, if we take (Ψ, L)=(Φ,H)^-1=(K, R)^-1∘ (ψ_∞,l_∞)∘ (Θ, S), we get that (Ψ, L)≅_u(ψ,l), which finishes the proof. abbrv
Random Fields from Quenched Disorder in an Archetype for Correlated Electrons: the Parallel Spin Stripe Phase of La_1.6-xNd_0.4Sr_xCuO_4 at the 1/8 Anomaly B. D. Gaulin January 14, 2024 =========================================================================================================================================================== -.2in The classical Feynman-Kac identity represents solutions of linear partial differential equations in terms of stochastic differential euqations. This representation has been generalized to nonlinear partial differential equations on the one hand via backward stochastic differential equations and on the other hand via stochastic fixed-point equations. In this article we generalize the representation via stochastic fixed-point equations to allow the nonlinearity in the semilinear partial differential equation to depend also on the gradient of the solution. § INTRODUCTIONThe classical Feynman-Kac identity (see, e.g., <cit.>)is a representation of linear partial differential equations (PDEs) in terms of stochastic differential equations (SDEs). This identity has various applications, e.g., the classical Monte Carlo method exploits this representation and allows to approximate solutions of linear PDEs without suffering from the curse of dimensionality.There are different approaches for Feynman-Kac type formulas in the case of nonlinear PDEs. One approach represents viscosity solutions (see, e.g., <cit.>) of nonlinear PDEs via solutions of backward stochastic differential equations (BSDEs); see, e.g., <cit.> for references on BSDEs and see, e.g., <cit.> for references on the connection between PDEs and BSDEs. Another approach is via stochastic fixed-point equations (SFPEs) which arise when the linear Feynman-Kac identity is applied to a semilinear PDE whose nonlinear part is viewed as inhomogeneity; see, e.g., <cit.>. A central motivation for this latter approach is that full-history recursive multilevel Picard (MLP) approximation algorithms (see <cit.>for references on MLP approximation algorithms) exploit the fact that viscosity solutions of semilinear PDEs are solutions of SFPEs. MLP approximation algorithms are – up to now – the only methods which have been mathematically proven to overcome the curse of dimensionality in the numerical approximation of solutions of semilinear Kolmogorov PDEs. In this article we generalize the results of <cit.> from the case to gradient-independent nonlinearities to the gradient-dependent case (e.g., the function f in (<ref>) depends on ∇_x u). To illustrate the findings of this article, we now present in Theorem <ref> below a special case of Theorem <ref> which is the main result of this article. 
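Before stating this result, we briefly recall, for orientation, the classical linear identity in a simple special case; the assumptions made in this paragraph are chosen for illustration only and are not used in the remainder of this article. Let d∈ℕ, T∈(0,∞), let μ∈C(ℝ^d,ℝ^d) and σ∈C(ℝ^d,ℝ^{d×d}) be globally Lipschitz continuous, let g∈C(ℝ^d,ℝ) be at most polynomially growing, let W be a standard ℝ^d-valued Brownian motion on a filtered probability space satisfying the usual conditions, and for every t∈[0,T], x∈ℝ^d let X^x_t=(X^x_t,s)_s∈[t,T] be the solution of X^x_t,s = x + ∫_t^s μ(X^x_t,r) dr + ∫_t^s σ(X^x_t,r) dW_r. Then, under suitable additional regularity assumptions, the function u(t,x)=𝔼[g(X^x_t,T)] is a viscosity solution of (∂u/∂t)(t,x) + ⟨μ(x),(∇_x u)(t,x)⟩ + 1/2 Tr(σ(x)[σ(x)]^*(Hess_x u)(t,x)) = 0 with u(T,x)=g(x) for (t,x)∈(0,T)×ℝ^d. The theorem below extends this representation, and the associated stochastic fixed-point formulation, to semilinear equations in which the nonlinearity f may depend on u and on ∇_x u.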
Let d ∈,α, c, L, T ∈ (0, ∞), let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·^d+1→[0,∞) be the standard Euclidean norm on ^d+1, let ·_F^d× d→ [0,∞) be the Frobenius norm on ^d× d, let (Ω, ℱ, ℙ, (𝔽_s)_s ∈ [0,T])be a filtered probability space satisfying the usual conditions, let W[0,T] ×Ω→^dbe a standard (𝔽_s)_s ∈ [0,T]-Brownian motion, let μ∈C^1(^d, ^d),σ∈ C^1 (^d, ^d × d) satisfy for allx, y ∈^d, v∈^d thatmax{⟨ x-y,μ(x)-μ(y)⟩,12σ(x)-σ(y)_F^2}≤c2x-y^2,max{⟨ x, μ(x)⟩, σ(x)_F^2 }≤ c (1+x^2), and v^* σ(x) (σ(x))^* v ≥αv^2, assume for allj∈{1,2,…, d} that ∂μ/∂ x and ∂σ/∂ x_j are locally Lipschitz continuous, for everyt∈ [0,T], x ∈^d let X^x_t = (X^x_t,s)_s ∈ [t,T] [t,T] ×Ω→^dbe an(𝔽_s)_s ∈ [t,T]-adaptedstochastic process with continuous sample paths satisfying that for alls ∈ [t,T] it holds a.s. thatX^x_t,s = x + ∫_t^s μ( X^x_t,r)ṛ+ ∫_t^s σ(X^x_t,r)Ẉ_r,assume for all t∈ [0,T], ω∈Ω that ([t,T] ×^d ∋ (s,x)↦ X^x_t,s(ω) ∈^d ) ∈ C^0,1([t,T]×^d, ^d), for every t∈[0,T],x ∈^d let Z^x_t = (Z^x_t,s)_s ∈ (t,T] (t,T] ×Ω→^d+1be an (𝔽_s)_s ∈ (t,T]-adapted stochastic processwith continuous sample paths satisfying thatfor all s ∈ (t,T]it holds a.s. thatZ^x_t,s =[1; 1/s-t∫_t^s (σ( X^x_t,r))^-1 (∂/∂ x X^x_t,r)Ẉ_r ], let f ∈ C([0,T] ×^d ××^d, ) ∩ L^2([0,T] ×^d ××^d, ), g ∈ C(^d, ) ∩ L^2(^d, ) be at most polynomially growing, and assume for all t ∈ [0,T], x_1,x_2 ∈^d,a_1,a_2∈, w_1,w_2 ∈^d that | f(t,x_1,a_1,w_1)- f(t,x_2,a_2,w_2) |≤ L (a_1, w_1) -(a_2, w_2). Then* there exists a uniquev∈ C([0,T]×^d,) ∩ C^0,1([0,T)×^d,) which satisfies that ((v, ∇_x v)(t,x) ·√(T-t))_t∈ [0,T), x∈^d grows at most polynomiallyand for allt∈[0,T), x∈^d it holds that [g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r,v(r, X^x_t,r), (∇_x v)(r,X^x_t,r)) Z^x_t,r ṛ ] <∞ and(v, ∇_x v)(t,x)=[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, v(r, X^x_t,r), (∇_x v)(r,X^x_t,r))Z^x_t,r ṛ], *there exists a unique viscosity solutionu∈{𝐮∈ C([0,T]×^d,) ∩ C^0,1([0,T)×^d,) ((𝐮,∇_x𝐮)(t,x)√(T-t))_t∈ [0,T), x∈^d grows at most polynomially} of(∂ u∂ t)(t,x) +⟨μ(x), (∇_x u)(t,x)⟩ +12Tr(σ(x)[σ(x)]^*(Hess_x u)(t,x)) +f(t,x,u(t,x), (∇_x u)(t,x)) =0with u(T,x)=g(x) for (t,x)∈ (0,T)×^d, and* for all t∈[0,T], x∈^d it holds that u(t,x)=v(t,x). § EXISTENCE AND UNIQUENESS RESULTS FOR VISCOSITY SOLUTIONS (VS) OF KOLMOGOROV PDES §.§ DefinitionsIn this section we recall the definitionsof elliptic functions, viscosity solutions, and parabolic superjets. The following definitions are from<cit.> and <cit.>.Let d∈, T∈(0,∞), let O⊆^d be a non-empty open set, and let ⟨·,·⟩^d×^d→ be the standard Euklidean scalar product on ^d. Then G is degenerate elliptic on (0,T)× O××^d ×𝕊_d if * it holds thatG (0,T)× O××^d×𝕊_d→ is a function from(0,T)× O××^d ×𝕊_d toand* it holds for all t∈ (0,T), x∈ O, r∈, p∈^d, A,B∈𝕊_d with ∀ y∈^d⟨ A y,y⟩≤⟨ B y, y ⟩ that G(t,x,r,p,A)≤ G(t,x,r,p,B).Let d∈, T∈(0,∞), let O⊆^d be a non-empty open set, and letG (0,T)× O××^d×𝕊_d→ be degenerate elliptic.Then u is a viscosity solution of (∂/∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) ≥ 0 for (t,x)∈(0,T)× O (we say that u is a viscosity subsolution of (∂/∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) =0) if and only if there exsits a set A⊆×^dsuch that * it holds that(0,T)× O⊆ A,* it holds thatu A→ is upper semi-continuous,and* for all t∈(0,T), x∈ O, ϕ∈ C^1,2((0,T)× O,) with ϕ(t,x)=u(t,x) and ϕ≥ u it holds that(∂∂ tϕ)(t,x) +G(t,x,ϕ(t,x), (∇_x ϕ)(t,x), (Hess_x ϕ)(t,x)) ≥ 0. 
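The following elementary example may help to illustrate the preceding definition of a viscosity subsolution; it records a standard fact, and the particular choice of G made here is an assumption of this example only. Let d∈ℕ, T∈(0,∞), let G (0,T)×ℝ^d×ℝ×ℝ^d×𝕊_d→ℝ satisfy for all t∈(0,T), x∈ℝ^d, r∈ℝ, p∈ℝ^d, A∈𝕊_d that G(t,x,r,p,A)=1/2 Tr(A) (which is degenerate elliptic), and let u∈C^1,2((0,T)×ℝ^d,ℝ) satisfy for all t∈(0,T), x∈ℝ^d that (∂ u/∂ t)(t,x)+1/2 Tr((Hess_x u)(t,x))≥0. Then u is a viscosity subsolution of (∂ u/∂ t)(t,x)+G(t,x,u(t,x),(∇_x u)(t,x),(Hess_x u)(t,x))=0 for (t,x)∈(0,T)×ℝ^d (one may take the set A in the definition above to be (0,T)×ℝ^d). Indeed, for all t∈(0,T), x∈ℝ^d, ϕ∈C^1,2((0,T)×ℝ^d,ℝ) with ϕ(t,x)=u(t,x) and ϕ≥u it holds that ϕ-u attains a minimum at the interior point (t,x), which ensures that (∂ϕ/∂ t)(t,x)=(∂ u/∂ t)(t,x), (∇_xϕ)(t,x)=(∇_x u)(t,x), and that (Hess_xϕ)(t,x)-(Hess_x u)(t,x) is positive semidefinite, and therefore (∂ϕ/∂ t)(t,x)+1/2 Tr((Hess_xϕ)(t,x)) ≥ (∂ u/∂ t)(t,x)+1/2 Tr((Hess_x u)(t,x)) ≥ 0.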
Let d∈, T∈(0,∞), let O⊆^d be a non-empty open set, and letG (0,T)× O××^d×𝕊_d→ be degenerate elliptic.Then u is a viscosity solution of (∂/∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) ≤ 0 for (t,x)∈(0,T)× O (we say that u is a viscosity supersolution of (∂/∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) =0) if and only if there exsits a set A⊆×^dsuch that * it holds that(0,T)× O⊆ A,* it holds thatu A→ is lower semi-continuous,and* for all t∈(0,T), x∈ O, ϕ∈ C^1,2((0,T)× O,) with ϕ(t,x)=u(t,x) and ϕ≤ u it holds that(∂∂ tϕ)(t,x) +G(t,x,ϕ(t,x), (∇_x ϕ)(t,x), (Hess_x ϕ)(t,x)) ≤ 0. Let d∈, T∈(0,∞), let O⊆^d be a non-empty open set, and letG (0,T)× O××^d×𝕊_d→ be degenerate elliptic.Then u is a viscosity solution of (∂/∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) = 0 for (t,x)∈(0,T)× O if and only if * it holds that uis a viscosity subsolution of(∂∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) = 0for (t,x)∈(0,T)× O and* it holds that uis a viscosity supersolution of(∂∂ tu)(t,x) +G(t,x,u(t,x), (∇_x u)(t,x), (Hess_x u)(t,x)) = 0for (t,x)∈(0,T)× O.Let d∈, T∈ (0,∞), let O⊆^d be a non-empty open set, let t∈ (0,T), x∈ O, let ⟨·, ·⟩^d×^d→ be thestandard Euclidean scalar product on ^d, let ·^d→[0,∞)be the standard Euclidean norm on ^d, and let u(0,T)× O→ be a function. Then * we denote by (𝒫^+ u)(t,x) the set satisfying(𝒫^+ u)(t,x) ={(b,p,A)∈×^d×𝕊_dlim sup_[(0,T)× O]∖{(t,x)}∋ (s,y)↦ (t,x)[u(s,y)-u(t,x)-b(s-t) -⟨ p,y-x⟩ -1/2⟨ A(y-x),y-x⟩t-s+x-y^2] ≤ 0 },and* we denote by (𝔓^+ u)(t,x) the set satisfying(𝔓^+ u)(t,x) ={(b,p,A)∈×^d×𝕊_d(∃ (t_n,x_n,b_n,p_n,A_n)_n∈⊆ (0,T)× O××^d×𝕊_d(∀ n∈ (b_n,p_n,A_n) ∈ (𝒫^+u)(t_n,x_n))and lim_n→∞ (t_n,x_n,u(t_n,x_n),b_n,p_n,A_n) =(t,x,u(t,x),b,p,A))}.§.§ Existence result for viscosity solutions of linear inhomogeneous Kolmogorov PDEs The following proposition is a variation of <cit.> where we replace [0,T] by [0,T]∖ K_r. Let d,m∈, T∈ (0,∞), let O⊆^d be a non-empty open set, let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·_F^d× m→ [0,∞) be the Frobenius norm on ^d× m, for every r∈(0,∞) let K_r⊆[0,T), O_r⊆ O satisfy K_r=[0,max{T-1/r,0}] and O_r = {x ∈ Ox≤ rand { y ∈^dy-x< 1/r}⊆ O }, let g∈ C(O,),h∈ C([0,T]× O,), μ∈ C([0,T]× O,^d), σ∈ C([0,T]× O, ^d× m), V∈ C^1,2([0,T]× O, (0,∞)) satisfyfor all r∈(0,∞) thatsup({μ(t,x)-μ(t,y)+σ(t,x)-σ(t,y)_Fx-y t∈ [0,T], x,y∈ O_r, x≠ y }∪{0}) <∞,assume for all t∈[0,T], x∈ O that(∂ V∂ t)(t,x) +⟨μ(t,x), (∇_x V)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess_x V)(t,x)) ≤ 0,assume that sup_r∈ (0,∞)[inf_t∈[0,T)∖ K_rinf_x∈ O∖ O_r V(t,x)]=∞ and inf_r∈ (0,∞)[sup_t∈ [0,T)∖ K_r sup_x∈ O∖ O_r (g(x)/V(T,x) +h(t,x)/V(t,x)√(T-t))] =0, let (Ω, ℱ, ℙ, (𝔽_t)_t∈[0,T]) be a filtered probability space, let W [0,T]×Ω→^m be a standard (𝔽_t)_t∈ [0,T]-Brownian motion, for every t∈[0,T], x∈ O let X^x_t=(X^x_t,s)_s∈[t,T] [t,T]×Ω→ O be an(𝔽_s)_s∈[t,T]-adapted stochastic process with continuous sample paths satisfying that for all s∈[t,T] it holdsa.s. 
thatX^x_t,s= x+∫_t^s μ(r,X^x_t,r) ṛ+∫_t^s σ(r,X^x_t,r) Ẉ_r, and let u[0,T]×^d→ satisfy for all t∈ [0,T], x∈^d thatu(t,x) = [g(X^x_t,T) +∫_t^T h(s,X^x_t,s) ṣ].Then it holds that u is aviscosity solution of(∂ u∂ t)(t,x) +⟨μ(t,x), (∇_x u)(t,x) ⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess_x u)(t,x)) +h(t,x) = 0with u(T,x)=g(x) for (t,x)∈(0,T)× O.Throughout this proof let 𝔤_n∈ C(^d,), n∈, and 𝔥_n∈ C([0,T]×^d,), n∈, be compactly supported functions which satisfy that [⋃_n∈supp(𝔤_n)] ⊆ O, [⋃_n∈supp(𝔥_n)]⊆ [0,T]× O, andlim sup_n→∞[ sup_t∈[0,T)sup_x∈ O(𝔤_n(x)-g(x)/V(T,x)+𝔥_n(t,x)-h(t,x)/V(t,x)√(T-t))] =0(cf. <cit.>) let 𝔪_n∈ C([0,T]×^d,^d), n∈, and𝔰_n∈ C([0,T]×^d,^d× m),n∈, satisfy that * for all n∈it holds thatsup_t∈[0,T]sup_x,y∈^d x≠ y[𝔪_n(t,x)-𝔪_n(t,y) +𝔰_n(t,x)-𝔰_n(t,y)_F/x-y] <∞, *for all n∈,t∈ [0,T], x∈ O it holds that{V≤ n}(t,x) [𝔪_n(t,x)-μ(t,x) +𝔰_n(t,x)-σ(t,x)_F] =0,and*for all n∈, t∈ [0,T], x∈^d∖{V≤ n+1} it holds that 𝔪_n(t,x)+𝔰_n(t,x)_F =0 (cf., e.g., the proof of<cit.>), for every n∈, t∈ [0,T], x∈^d let 𝔛^x,n_t=(𝔛^x,n_t,s)_s∈[t,T] [t,T]×Ω→^d be an (𝔽_s)_s∈ [t,T]-adapted stochastic process withcontinuous sample paths satisfyingthat for all s∈[t,T] it holdsa.s. that𝔛^x,n_t,s = x +∫_t^s 𝔪_n(r,𝔛^x,n_t,r)ṛ +∫_t^s 𝔰_n(r,𝔛^x,n_t,r) Ẉ_r(cf., e.g., <cit.>), let 𝔲^n,k[0,T]×^d→,n∈_0, k∈, satisfy for all n,k∈, t∈ [0,T], x∈^d that𝔲^n,k(t,x) = [𝔤_k(𝔛^x,n_t,T) +∫_t^T 𝔥_k(s,𝔛^x,n_t,s) ṣ]and𝔲^0,k(t,x)= [𝔤_k(X^x_t,T) +∫_t^T 𝔥_k(s,X^x_t,s) ṣ], and for every n∈, t∈ [0,T], x∈ O letτ^x,n_tΩ→ [t,T] satisfy τ^x,n_t =inf({s∈[t,T]max{V(s,𝔛^x,n_t,s), V(s,X^x_t,s)}≥ n}∪{T}).Next note that <cit.> (applied for every n,k∈ withμ↶𝔪_n, σ↶𝔰_n, g↶𝔤_k, h↶𝔥_k in the notation of<cit.>), item <ref>, and the fact that for all n∈ it holds that 𝔪_n and 𝔰_n have compact support demonstrate thatfor all n,k∈ it holds that 𝔲^n,k is a viscosity solution of(∂∂ t𝔲^n,k)(t,x) +⟨𝔪_n(t,x),(∇_x 𝔲^n,k)(t,x)⟩ +12Tr(𝔰_n(t,x)[𝔰_n(t,x)]^*(Hess_x 𝔲^n,k)(t,x)) +𝔥_k(t,x)=0for (t,x)∈(0,T)×^d. Furthermore, note that items<ref>-<ref> and (<ref>) ensure that for all n∈, t∈ [0,T], x∈ O it holds thatℙ(∀ s∈ [t,T]{s≤τ_t^x,n}𝔛^x,n_t,s ={s≤τ_t^x,n} X^x_t,s)=1. Hence, we obtain for all n,k∈, t∈[0,T], x∈ O that[𝔤_k(𝔛^x,n_t,T) -𝔤_k(X^x_t,T)] =[{τ^x,n_t<T}𝔤_k(𝔛^x,n_t,T) -𝔤_k(X^x_t,T)]≤ 2[sup_y∈ O𝔤_k(y)] ℙ(τ^x,n_t<T) and∫_t^T [𝔥_k(s,𝔛^x,n_t,s) -𝔥_k(s,X^x_t,s)] ṣ= ∫_t^T [{τ^x,n_t<T}𝔥_k(s,𝔛^x,n_t,s) -𝔥_k(s,X^x_t,s)] ṣ≤ 2T [sup_s∈ [0,T]sup_y∈ O𝔥_k(s,y)] ℙ(τ^x,n_t<T).In addition, observe that <cit.> and(<ref>) ensure that for all n∈, t∈ [0,T], x∈ O it holds that[V(τ^x,n_t, X^x_t,τ^x,n_t)] ≤ V(t,x).Markov'sinequality, (<ref>), and (<ref>) therefore imply that forall n,k∈, t∈ [0,T], x∈ O it holds that𝔲^n,k(t,x)-𝔲^0,k(t,x)≤ 2[sup_y∈ O𝔤_k(y) +T sup_s∈ [0,T]sup_y∈ O𝔥_k(s,y)]ℙ(τ^x,n_t<T)≤ 2[sup_y∈ O𝔤_k(y) +T sup_s∈ [0,T]sup_y∈ O𝔥_k(s,y)]ℙ(V(τ^x,n_t,X^x_t,τ^x,n_t)≥ n)≤2/n[sup_y∈ O𝔤_k(y) +T sup_s∈ [0,T]sup_y∈ O𝔥_k(s,y)][V(τ^x,n_t,X^x_t,τ^x,n_t)]≤2/n[sup_y∈ O𝔤_k(y) +T sup_s∈ [0,T]sup_y∈ O𝔥_k(s,y)]V(t,x).Thisshows that for all k∈ and all compact 𝒦⊆ (0,T)× O it holds thatlim sup_n→∞[ sup_(t,x) ∈𝒦𝔲^n,k(t,x) -𝔲^0,k(t,x)] =0.Moreover, observe thatitem <ref> and the assumption that sup_r∈(0,∞)[inf_t∈[0,T)∖ K_rinf_x∈^d∖ O_r V(t,x)]=∞imply that for all compact 𝒦⊆ [0,T]× O it holds thatlim sup_n→∞[ sup_(t,x)∈𝒦(𝔪_n(t,x)-μ(t,x) +𝔰_n(t,x)-σ(t,x)) ] =0.Combining <cit.>, (<ref>), and (<ref>) hence demonstrates that for all k∈ it holds that 𝔲^0,k is a viscosity solution of(∂∂ t𝔲^0,k)(t,x) +⟨μ(t,x), (∇_x 𝔲^0,k)(t,x) ⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x 𝔲^0,k)(t,x)) +𝔥_k(t,x)=0for (t,x)∈ (0,T)× O. 
Next observe that the fact that for allt∈ [0,T], s∈[t,T],x∈ O it holds that [V(s, X^x_t,s)]≤ V(t,x)demonstrates thatfor all k∈, t∈ (0,T), x∈ O it holds that𝔲^0,k(t,x)-u(t,x)= |[𝔤_k(X^x_t,T)-g(X^x_t,T)]+∫_t^T[ 𝔥_k(s, X^x_t,s)-h(s,X^x_t,s)] ṣ|≤[𝔤_k(X^x_t,T)-g(X^x_t,T)V(T,X^x_t,T)/V(T,X^x_t,T)] +∫_t^T[ 𝔥_k(s, X^x_t,s)-h(s,X^x_t,s)V(s,X^x_t,s)√(T-s)/V(s,X^x_t,s)√(T-s)] ṣ≤[sup_y∈ O𝔤_k(y)-g(y)/V(T,y)][V(T,X^x_t,T) ]+[sup_r∈ [0,T)sup_y∈ O𝔥_k(r,y)-h(r,y)/V(r,y)√(T-r)] ∫_t^T[ V(s,X^x_t,s)/√(T-s)] ṣ≤[sup_y∈ O𝔤_k(y)-g(y)/V(T,y)]V(T,x) +[sup_r∈ [0,T)sup_y∈ O𝔥_k(r,y)-h(r,y)/V(r,y)√(T-r)] ∫_t^T V(t,x)/√(T-s) ṣ≤[sup_y∈ O𝔤_k(y)-g(y)/V(T,y)]V(T,x) +[sup_r∈ [0,T)sup_y∈ O𝔥_k(r,y)-h(r,y)/V(r,y)√(T-r)] 2√(T)V(t,x).Combining this with (<ref>) shows that for all compact 𝒦⊆ (0,T)× O it holds thatlim sup_k→∞[ sup_(t,x) ∈𝒦𝔲^0,k(t,x)-u(t,x)] =0.This, <cit.>, (<ref>), and (<ref>) imply that u is a viscosity solution of(∂ u∂ t)(t,x) +⟨μ(t,x), (∇_x u)(t,x) ⟩+12Tr(σ(t,x)[σ(t,x)]^* (Hess_x u)(t,x)) +h(t,x) = 0for (t,x)∈ (0,T)× O. In addition, observe that (<ref>) ensures that for all x∈^d it holds that u(T,x)=g(x). This and(<ref>) establish (<ref>). The proof of Proposition <ref> is thus complete. §.§ Uniqueness results for viscosity solutions of semilinear Kolmogorov PDEsThe following lemma is an extension of <cit.> where we consider (0,T)∖ K_r instead of (0,T) in (<ref>). Let d,k∈, T∈(0,∞), let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·(⋃_m∈^m)→[0,∞) satisfy for all m∈,x=(x_1,x_2,…, x_m)∈^m that x=(∑_i=1^m x_i^2)^1/2,let O⊆^d be a non-emptyopen set,for every r∈(0,∞) letK_r⊆[0,T), O_r⊆ O satisfy K_r=[0,max{T-1/r,0}] and O_r = {x ∈ Ox≤ rand { y ∈^dy-x< 1/r}⊆ O }, let G_i∈ C((0,T)× O××^d×𝕊_d,), i∈{1,2,…,k}, satisfy for all i∈{1,2,…,k} that G_i is degenerate elliptic and upper semi-continuous, let u_i[0,T]× O→, i∈{1,2,…,k}, satisfy for all i∈{1,2,…,k} thatu_i is a viscosity solution of(∂ u_i∂ t)(t,x) +G_i(t,x,u_i(t,x),(∇_x u_i)(t,x), (Hess_x u_i)(t,x)) ≥ 0 for (t,x)∈(0,T)× O, assume that sup_x∈ O[∑_i=1^k u_i(T,x) ] ≤ 0 andlim_n→∞[ sup_t∈(0,T)∖ K_nsup_x∈ O∖ O_n[ ( ∑_i=1^k u_i(t,x) ) √(T-t)]] ≤ 0,and assume for allt^(n)∈(0,T), n∈_0,and all (x^(n)_i,r^(n)_i,A^(n)_i) ∈ O××𝕊_d, n∈_0, i∈{1,2,…,k}, with lim sup_n→∞[t^(n)-t^(0) +x^(n)_1-x^(0)_1] +√(n)∑_i=2^kx^(n)_i-x^(n)_i-1 =0<lim inf_n→∞[∑_i=1^k r^(n)_i] =lim sup_n→∞[∑_i=1^k r^(n)_i] ≤sup_n∈[∑_i=1^k r^(n)_i] <∞ and ∀(n∈,z_1,z_2,…, z_k∈^d) -5∑_i=1^k z_i^2 ≤∑_i=1^k ⟨ z_i, A^(n)_i z_i⟩≤ 5∑_i=2^k z_i-z_i-1^2thatlim sup_n→∞[∑_i=1^k G_i(t^(n), x^(n)_i,r^(n)_i, n([2,k](i)[x^(n)_i-x^(n)_i-1] +[1,k-1](i)[x^(n)_i-x^(n)_i+1]), nA^(n)_i)] ≤ 0.Then it holds for all t∈ (0,T], x∈ O that ∑_i=1^k u_i(t,x)≤ 0.The goal of this proof is to show that for all t∈ (0,T], x∈ O it holds that ∑_i=1^k u_i(t,x)≤ 0 by demonstrating that for all δ∈ (0,∞), t∈ (0,T], x∈ O it holds that ∑_i=1^k u_i(t,x)≤kδ/t.Throughout this proof letδ∈ (0,∞),let v_i [0,T]× O→ [-∞,∞), i∈{1,2,…,k}, satisfy for all i∈{1,2,…,k}, t∈[0,T], x∈ O thatv_i(t,x)=u_i(t,x)-δ/tt>0 -∞t=0,let H_i (0,T)× O××^d×𝕊_d→, i∈{1,2,…,k}, satisfy for all i∈{1,2,…,k}, t∈(0,T), x∈ O, r∈, p∈^d, A∈𝕊_d thatH_i(t,x,r,p,A)=G_i(t,x,r+δt, p,A)-δt^2,let Φ[0,T]× (^d)^k→ [0,∞) and η [0,T]× (^d)^k→[-∞,∞) satisfy for all t∈ [0,T], x=(x_1,x_2,…,x_k)∈ (^d)^k that η(t,x)=∑_i=1^k v_i(t,x_i) andΦ(t,x)= 1/2[ ∑_i=2^k x_i-x_i-1^2],let S∈ (-∞,∞] satisfy S=sup_t∈ [0,T]sup_x∈ O [∑_i=1^k v_i(t,x)], let S_α,r∈ (-∞,∞],α,r ∈ [0,∞), satisfy for all α, r∈ [0,∞) thatS_α,r = sup_t∈ K_rsup_x∈ (O_r)^k [η(t,x)-αΦ(t,x)],and let ·^(kd)× (kd)→[0,∞) satisfy for all 
A∈^(kd)× (kd) thatA = sup{ [ ∑_i=1^kdy_i^2]^1/2[∑_i=1^kdx_i^2 ]^-1/2[ x=(x_1,x_2,…,x_kd)∈^kd∖{0},; y=(y_1,y_2,…,y_kd)∈^kd,;y=Ax ]}.First observe that(<ref>), the fact thatsup_x∈ O[∑_i=1^kv_i(0,x)] =-∞, and the fact thatfor all i∈{1,2,…,k} it holds that v_i≤ u_i show thatsup_x∈ O[ ∑_i=1^k v_i(T,x) ]≤ 0 andlim sup_n→∞[ sup_t∈ [0,T)∖ K_nsup_x∈ O∖ O_n[ ∑_i=1^k v_i(t,x) √(T-t)]] ≤ 0.In addition, note that theassumption that for all i∈{1,2,…,k} it holds that u_i is uppersemi-continuous ensures that for all i∈{1,2,…,k} it holds that v_i is upper semi-continuous. Moreover, observe that(<ref>) and (<ref>) imply thatfor all i∈{1,2,…,k} it holds that v_i is a viscosity solution of(∂ v_i∂ t)(t,x) +H_i(t,x,v_i(t,x), (∇_x v_i)(t,x), (Hess_x v_i)(t,x))≥ 0for (t,x)∈(0,T)× O. Next we claim that for all t∈ [0,T], x∈ O it holds thatS= sup_t∈ [0,T]sup_x∈ O[∑_i=1^k v_i(t,x)] ≤ 0.We prove (<ref>) by contradiction.For this assume that S∈ (0,∞]. Note that the hypothesis thatS∈ (0,∞] and(<ref>) imply that there exists N∈ which satisfies that *it holds that K_N≠∅ and O_N≠∅,* it holds that K_N andO_N are compact, and* it holds that sup_t∈ K_Nsup_x∈ O_N [∑_i=1^k v_i(t,x)]=S. The fact that for alli∈{1,2,…,k}it holds that v_i is upper semi-continuous therefore shows that S∈ (0,∞). Moreover, observe thatitem <ref> and the fact that for all i∈{1,2,…,k}it holds thatsup_x∈ O v_i(0,x)=-∞ ensure thatS= sup_t∈ K_N∩(0,T]sup_x∈ O_N [∑_i=1^k v_i(t,x)]. Next note that the fact thatΦ∈ C([0,T]× (^d)^k, ) and the fact that for all i∈{1,2,…,k}it holds that v_i is upper semi-continuous demonstrate thatfor all α∈(0,∞) it holds that K_N × (O_N)^k∋ (t,x) ↦η(t,x)-αΦ(t,x) ∈[-∞,∞) is upper semi-continuous. Item <ref> hence proves that there exists t^(α)∈ K_N, α∈ (0,∞), and x^(α)=(x^(α)_1,x^(α)_2, …,x^(α)_k)∈ (O_N)^k, α∈ (0,∞), which satisfy for allα∈ (0,∞) thatη(t^(α),x^(α)) -αΦ(t^(α),x^(α)) =sup_t∈ K_Nsup_x∈ (O_N)^k [η(t,x)-αΦ(t,x)] =S_α,N.Furthermore, observe that the fact that for all t∈ [0,T], y∈ O it holds that η(t,y,y,…,y) = ∑_i=1^k v_i(t,y) and the fact that for all t∈ [0,T], y∈ O it holds that Φ(t,y,y,…,y)=0 show that for all α∈ (0,∞) it holds thatS_α,N≥sup_t∈ [0,T]sup_y∈ O [η(t,y,y,…,y)-αΦ(t,y,y,…,y)]=sup_t∈ [0,T]sup_y∈ O[∑_i=1^k v_i(t,y)] =S > 0.This and the fact that for allα,β∈ (0,∞)with α≥β it holds that S_α,N≤ S_β, N ensure that lim inf_α→∞ S_α,N =lim sup_α→∞ S_α,N∈ [S,∞)⊆. Next observe that (<ref>) and the fact that for allα∈ (0,∞) it holds that sup_x∈ O^k[η(0,x)-αΦ(0,x)] =-∞imply that for all α∈(0,∞) it holds thatS_α,N =sup_t∈ K_N∩(0,T]sup_x∈ (O_N)^k [η(t,x)-αΦ(t,x)].Combining this and<cit.> (applied with 𝒪↶ (K_N∩(0,T])× (O_N)^k,η↶η|_(K_N∩(0,T]) ×(O_N)^k, ϕ↶Φ|_(K_N∩ (0,T]) × (O_N)^k, x↶ ((0,∞)∋α↦ (t^(α),x^(α))∈(K_N∩ (0,T])× (O_N)^k) in the notation of <cit.>) demonstrates that0= lim sup_α→∞ [αΦ(t^(α),x^(α))] =lim sup_α→∞[ α/2∑_i=2^kx^(α)_i-x^(α)_i-1^2 ].In the next step note that item <ref> ensures that there exist 𝔱∈ K_N, 𝔵=(𝔵_1, 𝔵_2, …,𝔵_k)∈ (O_N)^k, (α_n)_n∈⊆ which satisfylim inf_n→∞α_n=∞ and lim sup_n→∞ [t^(α_n)- 𝔱 +x^(α_n)-𝔵] =0. Moreover, observe that the fact that η is upper semi-continuousand the fact that Φ is continuous show thatη(𝔱,𝔵) ≥lim sup_n→∞ [η(t^(α_n),x^(α_n)) -α_nΦ(t^(α_n),x^(α_n))] ≥ S > 0.The fact that for all x∈ O^k it holds that η(0,x)=-∞hence demonstrates that 𝔱∈ K_N∩(0,T] ⊆ (0,T). In addition, note that (<ref>) and the fact that for all α∈ (0,∞) it holds thatη(0,x^(α)) -αΦ(0,x^(α)) = -∞ imply that for all n∈ it holds that t^(α_n)∈ K_N∩ (0,T] ⊆ (0,T). 
This and <cit.> (applied with 𝒪↶ (K_N∩(0,T])× (O_N)^k,η↶η|_(K_N∩(0,T]) ×(O_N)^k, ϕ↶Φ|_(K_N∩ (0,T]) × (O_N)^k, x↶ ((0,∞)∋α↦ (t^(α),x^(α))∈(K_N∩ (0,T])× (O_N)^k) in the notation of <cit.>) prove that 0= Φ(𝔱,𝔵) =1/2∑_i=2^k𝔵_i -𝔵_i-1^2 andη(𝔱,𝔵) =sup_(t,x)∈[Φ^-1(0)]∩[(K_N∩(0,T])× (O_N)^k]η(t,x). Therefore, we obtain for all i∈{1,2,…, k} that 𝔵_i = 𝔵_1 andS ≤lim_α→∞ S_α,N≤∑_i=1^k v_i(𝔱,𝔵_i) =η (𝔱,𝔵) =sup_t∈ K_Nsup_y∈ O_N[∑_i=1^k v_i(t,y) ] ≤ S. Next observe that <cit.> (applied for every n∈ with𝒪↶ O, ε↶1/α_n,Φ↶α_nΦ|_(0,T)× O^k, (u_i)_i∈{1,2,…,k}↶ (v_i|_(0,T)× O)_i∈{1,2,…,k}, (G_i)_i∈{1,2,…,k}↶ (H_i)_i∈{1,2,…,k}, 𝔱↶ t^(α_n), 𝔵↶ x^(α_n) in the notation of <cit.>) and (<ref>) demonstrates that there exist b^(α_n)_1, b^(α_n)_2,…, b^(α_n)_k∈, n∈, and A^(α_n)_1,A^(α_n)_2,…, A^(α_n)_k∈𝕊_d, n∈, which satisfy that *for all n∈, i∈{1,2,…, k} it holds that(b^(α_n)_i,α_n(∇_x_iΦ)(t^(α_n), x^(α_n)), α_n A^(α_n)_i) ∈ (𝔓^+ v_i)(t^(α_n),x^(α_n)_i), * for all n∈ it holds that ∑_i=1^k b^(α_n)_i =α_n (∂/∂ tΦ)(t^(α_n),x^(α_n)) =0, and *for all n∈ it holds that(-α_n+α_n(Hess_xΦ) (t^(α_n),x^(α_n)))I_kd≤α_n [ A^(α_n)_1 … 0; ⋮ ⋱ ⋮; 0 … A^(α_n)_k ]≤α_n(Hess_xΦ)(t^(α_n),x^(α_n)) +1α_n[α_n (Hess_xΦ) (t^(α_n),x^(α_n))]^2.Note that the fact that for all t∈ (0,T), x∈ O^k it holds that (Hess_xΦ)(t,x) =(Hess_xΦ)(0,0) and item <ref> prove that for all n∈ it holds that-(1+ (Hess_xΦ)(0,0))I_kd≤[ A^(α_n)_1 … 0; ⋮ ⋱ ⋮; 0 … A^(α_n)_k ]≤ (Hess_xΦ)(0,0) +[(Hess_xΦ)(0,0)]^2.Furthermore, observe that <cit.> (applied for alli∈{ 1,…,k} with u↶ v_i, G↶ H_i in the notation of<cit.>),item <ref>, and (<ref>) ensure that for all n∈, i∈{1,2,…,k} it holds thatb^(α_n)_i+ H_i(t^(α_n),x^(α_n)_i, v_i(t^(α_n),x^(α_n)_i), α_n(∇_x_iΦ) (t^(α_n),x^(α_n)),α_n A^(α_n)_i) ≥ 0.Item <ref>, (<ref>) and the fact that∂/∂ tΦ = 0 hence show thatfor all n∈ it holds that∑_i=1^k G_i( t^(α_n),x^(α_n)_i, v_i(t^(α_n),x^(α_n)_i)+δ/t^(α_n), α_n(∇_x_iΦ)(t^(α_n),x^(α_n)), α_n A^(α_n)_i )≥kδ/[t^(α_n)]^2.Throughout the rest of the proof let r^(n)_i, n∈, i∈{1,2,…,k} satisfy for all n∈, i∈{1,2,…, k} thatr^(n)_i= v_i(t^(n), x_i^(n)) +δ/t^(n).Thisand the fact thatlim sup_n→∞η(t^(α_n), x^(α_n))-S=0ensure thatlim inf_n→∞[ ∑_i=1^k r^(α_n)_i] =lim sup_n→∞[ ∑_i=1^k r^(α_n)_i] = S +kδ/𝔱 >0.Inaddition, observe that the fact that{(t^(α_n), x^(α_n)) ∈ (0,T)× O^k n∈}∪{(𝔱,𝔵)} is compactand the fact that for all i∈{1,2,…, k} it holds that v_i is upper semi-continuousdemonstrate thatsup{r^(α_n)_i n∈, i∈{1,2,…,k}} <∞.This and(<ref>) show thatsup_n∈[ ∑_i=1^kr^(α_n)_i] <∞. Moreover, observe that (<ref>) ensures that for all t∈ (0,T),x=(x_1,x_2,…,x_k)∈ O^k it holds that(∂/∂ x_iΦ)(t,x) = x_1-x_2 1=i<k2x_i-x_i-1-x_i+11<i<kx_k-x_k-11< i=k0 1=i=k = [2,k](i) [x_i-x_i-1] +[1,k-1](i) [x_i-x_i+1]. This and the Taylor expansion ∀ z ∈ (^d)^kΦ(0,z)=Φ(0,0) +⟨ (∇_x Φ)(0,0),z⟩ +1/2⟨ z, (Hess_x Φ)(0,0)z⟩ = 1/2⟨ z, (Hess_x Φ)(0,0)z⟩ demonstrate that for all z∈ (^d)^k it holds that(∇_xΦ)(0,z) =(Hess_xΦ)(0,0)z.Combining this with (<ref>) and the fat that for alla,b∈it holds that (a+b)^2≤ 2(a^2+b^2) proves that for all z∈ (^d)^k it holds that⟨ z, ((Hess_xΦ) (0,0))^2 z⟩ = ⟨ (Hess_xΦ)(0,0)z, (Hess_x Φ)(0,0)z⟩=(Hess_xΦ)(0,0)z^2 =(∇_x Φ)(0,z)^2=z_1-z_2^2 +[ ∑_i=2^k-12z_i -z_i-1-z_i+1^2] +z_k-z_k-1^2≤ 2z_1-z_2^2+[ ∑_i=2^k-12(z_i -z_i-1^2+z_i-z_i+1^2)] +2z_k-z_k-1^2= 4[∑_i=2^kz_i-z_i-1^2 ].Hence, we obtain for all z∈ (^d)^kthat(Hess_x Φ)(0,0) z^2 ≤ 8 [∑_i=2^k z_i^2] + 8 [∑_i=2^k z_i-1^2] ≤ 16 z^2.This ensures that (Hess_xΦ)(0,0)≤ 4. 
Combiningthis with (<ref>)and (<ref>) demonstrates that for all n∈, z_1,z_2,…, z_k ∈^d it holds that-5[∑_i=1^k z_i^2 ] ≤∑_i=1^k ⟨ z_i, A^(α_n)_i z_i ⟩≤ 2Φ(0,z) + ⟨ z, (Hess_xΦ)(0,0))^2z ⟩ ≤ 5 [ ∑_i=2^k z_i-z_i-1^2].Combining this with (<ref>) and (<ref>)-(<ref>) proves that0 < kδ/𝔱^2 =lim sup_n→∞kδ/[t^(α_n)]^2≤lim sup_n→∞[ ∑_i=1^k G_i(t^(α_n), x^(α_n)_i,r^(α_n)_i, α_n([2,k](i)[x^(α_n)_i-x^(α_n)_i-1] +[1,k-1](i)[x^(α_n)_i-x^(α_n)_i+1]), α_n A^(α_n)_i)] ≤ 0.This contradiction implies that S≤ 0. Therefore, we obtain that for all t∈ (0,T], y∈ O it holds that ∑_i=1^k u_i(t,y) ≤kδ/t. The proof of Lemma <ref> is thus complete. The following corollary extends <cit.> which assumes (<ref>)to hold without √(T-t) and (0,T)∖ K_r replaced by (0,T). Let d∈, T∈(0,∞), let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let O⊆^d be a non-emptyopen set,for every r∈(0,∞) letK_r⊆[0,T), O_r⊆ O satisfy K_r=[0,max{T-1/r,0}] and O_r = {x ∈ Ox≤ rand { y ∈^dy-x < 1/r}⊆ O }, let G∈ C((0,T)× O××^d×𝕊_d,), u,v∈ C([0,T]× O, ), assume thatsup_x∈ O(u(T,x)-v(T,x))≤ 0 andinf_r∈(0,∞)[ sup_t∈(0,T)∖ K_rsup_x∈ O∖ O_r[(u(t,x)-v(t,x))√(T-t)]] ≤ 0,assume that G is degenerate elliptic, assume that u is a viscosity solution of(∂ u∂ t)(t,x) +G(t,x,u(t,x),(∇_x u)(t,x), (Hess_x u)(t,x)) ≥ 0 for (t,x)∈(0,T)× O, assume that v is a viscosity solution of(∂ v∂ t)(t,x) +G(t,x,v(t,x),(∇_x v)(t,x), (Hess_x v)(t,x)) ≤ 0for (t,x)∈(0,T)× O, and assume for all t_n∈(0,T), n∈_0,all (x_n,r_n,A_n)∈ O××𝕊_d, n∈_0, and all (𝔵_n,𝔯_n, 𝔄_n)∈ O××𝕊_d, n∈_0, with lim sup_n→∞[t_n-t_0 +x_n-x_0]+√(n)x_n-𝔵_n =0<lim inf_n→∞(r_n-𝔯_n) =lim sup_n→∞(r_n-𝔯_n) ≤sup_n∈(r_n+𝔯_n) <∞ and ∀(n∈,z,𝔷∈^d) -5(z^2+𝔷^2) ≤⟨ z, A_nz⟩ -⟨𝔷,𝔄_n𝔷⟩≤ 5z-𝔷^2thatlim sup_n→∞[G(t_n,x_n,r_n, n(x_n-𝔵_n),nA_n) -G(t_n,𝔵_n,𝔯_n, n(x_n-𝔵_n),n𝔄_n)] ≤ 0.Then it holds for all t∈ [0,T], x∈ O that u(t,x)≤ v(t,x).Throughout this proof letH (0,T)× O××^d ×𝕊_d→ satisfy for all t∈(0,T), x∈ O, r∈, p∈^d, A∈𝕊_d thatH(t,x,r,p,A)=-G(t,x,-r,-p,-A).Note that the fact that G is degnerate elliptic implies that H is degenerate elliptic. In addition, observe that (<ref>) and the assumption thatG∈ C((0,T)× O××^d×𝕊_d,) ensure that H∈ C((0,T)× O××^d×𝕊_d,). Next note that(<ref>) and (<ref>) assure that -v is a viscosity solution of(∂(-v)∂ t)(t,x) +H(t,x,(-v)(t,x),(∇_x(-v))(t,x), (Hess_x(-v))(t,x)) ≥ 0for (t,x)∈(0,T)× O. Furthermore, observe that (<ref>) shows that for all t_n∈(0,T), n∈_0, all (x_n,r_n,A_n)∈ O××𝕊_d, n∈_0, and all (𝔵_n,𝔯_n, 𝔄_n)∈ O××𝕊_d, n∈_0, with lim sup_n→∞[t_n-t_0 +x_n-x_0]+√(n)x_n-𝔵_n =0<lim inf_n→∞(r_n+𝔯_n) =lim sup_n→∞(r_n+𝔯_n) ≤sup_n∈(r_n+𝔯_n) <∞ and ∀(n∈,z,𝔷∈^d) -5(z^2+𝔷^2) ≤⟨ z, A_nz⟩ +⟨𝔷,𝔄_n𝔷⟩≤ 5z-𝔷^2it holds thatlim sup_n→∞[ G(t_n,x_n, r_n,n(x_n-𝔵_n),nA_n) +H(t_n,𝔵_n,𝔯_n, n(𝔵_n-x_n),n𝔄_n) ]=lim sup_n→∞[ G(t_n,x_n, r_n,n(x_n-𝔵_n),nA_n) -G(t_n,𝔵_n,-𝔯_n, n(x_n-𝔵_n),-n𝔄_n)]≤ 0.Lemma <ref> (applied withk↶ 2, u_1↶ u, u_2↶ -v, G_1↶ G, G_2 ↶ H in the notation of Lemma <ref>) therefore demonstrates that for all t∈ (0,T], x∈ O it holds thatu(t,x)-v(t,x)≤ 0.Combining this with the assumption that u,v∈ C([0,T]× O,)proves that for all t∈ [0,T], x∈ O it holds thatu(t,x)-v(t,x)≤ 0. The proof of Corollary <ref> is thus complete. 
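To indicate how the assumptions of the above corollary can be verified in a concrete situation, we include the following illustrative sketch; the particular choice of G is made only for this example and is not used later. Let d∈ℕ, T∈(0,∞), O=ℝ^d, and let G (0,T)×ℝ^d×ℝ×ℝ^d×𝕊_d→ℝ satisfy for all t∈(0,T), x∈ℝ^d, r∈ℝ, p∈ℝ^d, A∈𝕊_d that G(t,x,r,p,A)=1/2 Tr(A). Then G is continuous and degenerate elliptic, and the displayed lim sup condition of the corollary is satisfied: for all sequences (t_n,x_n,r_n,A_n) and (𝔵_n,𝔯_n,𝔄_n) as in the corollary and all i∈{1,2,…,d}, choosing z=𝔷=e_i (where e_1,e_2,…,e_d denotes the standard basis of ℝ^d) in the assumed matrix inequality yields ⟨ e_i,A_n e_i⟩-⟨ e_i,𝔄_n e_i⟩≤0, hence Tr(A_n)≤Tr(𝔄_n), and therefore G(t_n,x_n,r_n,n(x_n-𝔵_n),nA_n)-G(t_n,𝔵_n,𝔯_n,n(x_n-𝔵_n),n𝔄_n) = n/2[Tr(A_n)-Tr(𝔄_n)] ≤ 0 for every n∈ℕ. Consequently, in this case the corollary yields the comparison u≤v for all viscosity sub- and supersolutions u and v of the heat equation which satisfy the terminal and growth conditions stated in the corollary.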
The following proposition extends <cit.> to the case of semi-linear PDEs with gradient-dependent nonlinearities.Let d,m∈, L,T∈ (0,∞), let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·^d+1→[0,∞) be the standard Euclidean norm on ^d+1, let·_F(⋃_a,b=1^∞^a× b)→ [0,∞) satisfy for all a,b∈, A=(A_ij)_(i,j)∈{1,2,…, a}×{1,2,…,b}∈^a× b that A_F=[∑_i=1^a ∑_j=1^b A_ij^2 ]^1/2, let O⊆^d be a non-empty open set, for every r∈(0,∞) let K_r⊆[0,T), O_r⊆ O satisfy K_r=[0,max{T-1/r,0}] and O_r = {x ∈ Ox≤ rand { y ∈^dy-x< 1/r}⊆ O }, let g∈ C(O,),f∈ C([0,T]× O××^d,), μ∈ C([0,T]× O,^d), σ∈ C([0,T]× O, ^d× m), V∈ C^1,2([0,T]× O, (0,∞)) satisfyfor all r∈(0,∞) thatsup({μ(t,x)-μ(t,y)+σ(t,x)-σ(t,y)_Fx-y t∈ [0,T], x,y∈ O_r, x≠ y }∪{0}) <∞,assume for all t∈ [0,T], x∈ O,a,b∈, v,w∈^d that f(t,x,a,v)-f(t,x,b,w)≤ L(a,v) -(b,w), lim sup_r→∞[sup_s∈[0,T)∖ K_rsup_y∈ O∖ O_rf(s,y,0,0)/V(s,y)]=0, and(∂ V∂ t)(t,x) +⟨μ(t,x), (∇_x V)(t,x) ⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess_x V)(t,x))+L(∇_x V)(t,x)≤ 0,and let u_1, u_2∈C([0,T]× O, ) satisfy for all i∈{1,2} that lim sup_r→∞ [sup_t∈[0,T)∖ K_r sup_x∈ O∖ O_r (u_i(t,x)/V(t,x)√(T-t))]=0 and that u_i is a viscosity solution of(∂ u_i∂ t)(t,x) +⟨μ(t,x), (∇_x u_i)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess_x u_i)(t,x)) +f(t,x,u_i(t,x),(∇_x u_i)(t,x)) = 0with u_i(T,x)=g(x) for(t,x)∈ (0,T)× O. Then it holds for all t∈[0,T], x∈ O that u_1(t,x)=u_2(t,x).Throughout this prooflet [0,T]× O→ (0,∞) satisfy for all t∈ [0,T], x∈ O that (t,x)=e^-LtV(t,x), let v_i [0,T]× O→, i∈{1,2}, satisfy for all i∈{1,2}, t∈ [0,T], x∈ O that v_i(t,x)= u_i(t,x)/(t,x), let G (0,T)× O××^d×𝕊_d → satisfyfor all t∈(0,T), x∈ O, r∈, p∈^d, A∈𝕊_d thatG(t,x,r,p,A) =⟨μ(t,x), p⟩ +12Tr(σ(t,x)[σ(t,x)]^*A) +f(t,x,r,p),and let H(0,T)× O××^d×𝕊_d→ satisfy for all t∈(0,T), x∈ O, r∈, p∈^d, A∈𝕊_d thatH(t,x,r,p,A) = r(t,x)(∂∂ t)(t,x) +1(t,x)G(t,x,r(t,x),(t,x)p+r(∇_x )(t,x),(t,x)A+p[(∇_x )(t,x)]^*+(∇_x )(t,x)p^*+r(Hess_x )(t,x)).Observe that(<ref>) and the assumption that V∈ C^1,2([0,T]× O, (0,∞)) ensure that for all t∈[0,T], x∈ O it holds that ∈ C^1,2([0,T]× O, (0,∞)) and(∂∂ t)(t,x) +⟨μ(t,x), (∇_x )(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x )(t,x))+L(t,x) +L (∇_x)(t,x)≤ 0.Next note that(<ref>) implies that G∈ C((0,T)× O××^d×𝕊_d,) is degenerate elliptic.Combining this with (<ref>) shows that H∈ C((0,T)× O××^d×𝕊_d,) is degenerate elliptic. In the next step observe that the assumption that for alli∈{1,2}, x∈ O it holds that u_i(T,x)=g(x) implies that for all x∈ O it holds thatv_1(T,x)≤ v_2(T,x) ≤ v_1(T,x).Furthermore, note that the hypothesis thatlim sup_r→∞ [sup_t∈[0,T)∖ K_rsup_x∈ O∖ O_r(u_1(t,x)+u_2(t,x)/V(t,x)√(T-t))]=0 shows thatlim sup_r→∞[sup_t∈[0,T)∖ K_rsup_x∈ O∖ O_r(v_1(t,x)-v_2(t,x)√(T-t))] =0.In addition, observe that <cit.> (applied for every i∈{1,2} with G̃↶ -H, V↶, ũ↶ v_i in the notation of <cit.>), (<ref>), and (<ref>) demonstrate that for all i∈{1,2} it holds that v_i is a viscosity solution of(∂∂ tv_i)(t,x) +H(t,x,v_i(t,x),(∇_x v_i)(t,x), (Hess_x v_i)(t,x)) =0for (t,x)∈ (0,T)× O.Throughout the rest of the proof let e_1, e_2, …, e_m∈^m satisfye_1=(1,0,…,0), e_2=(0,1,0,…, 0), …, e_m=(0,…,0,1), let t_n∈(0,T), n∈_0, satisfy lim sup_n→∞t_n-t_0=0,and let (x_n,r_n,A_n)∈ O××𝕊_d, n∈_0, and (𝔵_n, 𝔯_n, 𝔄_n)∈ O××𝕊_d, n∈_0, satisfy lim sup_n→∞[t_n-t_0 +x_n-x_0+√(n)x_n-𝔵_n]=0 < r_0 = lim inf_n→∞r_n-𝔯_n = lim sup_n→∞r_n-𝔯_n≤sup_n∈ (r_n +𝔯_n) < ∞ and for all n∈, y,z∈^d that ⟨ y, A_n y⟩ -⟨ z, 𝔄_n z ⟩≤ 5 y-z^2. 
Note that (<ref>) and the fact that lim sup_n→∞[√(n)x_n-𝔵_n]=0 ensure thatlim sup_n→∞[nσ(t_n,x_n) -σ(t_n,𝔵_n)_F^2] =0.The fact that for all B∈𝕊_d, C∈^d× m it holds that Tr(CC^*B) =∑_i=1^m ⟨ C e_i, BC e_i⟩ and the assumption that for alln∈, y,z∈^d it holds that ⟨ y, A_n y ⟩ - ⟨ z, 𝔄_n z ⟩≤ 5 y-z^2 therefore imply thatlim sup_n→∞[12Tr(σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n)(t_n,x_n)nA_n -σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)(t_n,𝔵_n)n 𝔄_n) ]=lim sup_n→∞[n2Tr(σ(t_n,x_n)[σ(t_n,x_n)]^* A_n - σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^* 𝔄_n) ]=lim sup_n→∞[n2∑_i=1^m ( ⟨σ(t_n,x_n)e_i, A_n σ(t_n,x_n) e_i⟩- ⟨σ(t_n,𝔵_n)e_i, 𝔄_nσ(t_n,𝔵_n)e_i⟩) ]≤lim sup_n→∞[ ∑_i=1^m 5n2σ(t_n,x_n)e_i -σ(t_n,𝔵_n)e_i^2]= 52lim sup_n→∞[n σ(t_n,x_n) -σ(t_n,𝔵_n)_F^2] =0.Furthermore, note that (<ref>) and the fact that∈ C^1,2([0,T]× O,(0,∞)) show that for all compact𝒦⊆ O there exists c∈ which satisfies for all s∈[0,T],y_1,y_2∈𝒦 thatσ(s,y_1)[σ(s,y_1)]^*(s,y_1) -σ(s,y_2)[σ(s,y_2)]^*(s,y_2)_F +(∇_x )(s,y_1)-(∇_x)(s,y_2)≤ c y_1-y_2.This,the fact that lim sup_n→∞[t_n-t_0 +x_n-x_0]=0, andthe assumption that lim sup_n→∞[√(n)x_n-𝔵_n]=0 demonstrate thatlim sup_n→∞[ nx-𝔵_nσ(t_n,x_n)[σ(t_n,x_n)]^*/(t_n,x_n) -σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*/(t_n,𝔵_n)_F] =0andlim sup_n→∞ [nx_n-𝔵_n (∇_x)(t_n,x_n) -(∇_x)(t_n, 𝔵_n)] =0.In addition, note that for allB∈𝕊_d, v,w∈^d it holds thatTr(Bvw^*) =Tr(w^*Bv) = w^*Bv=⟨ w, Bv ⟩ =⟨ Bw, v ⟩=⟨ v, Bw ⟩ = v^* Bw = Tr(v^*Bw) = Tr(Bwv^*).This,Cauchy-Schwarz inequality, (<ref>), and (<ref>) demonstrate thatlim sup_n→∞[ 12Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n)(n(x_n-𝔵_n)[(∇_x)(t_n,x_n)]^* +(∇_x)(t_n,x_n)n(x_n-𝔵_n)^*) - σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)(n(x_n-𝔵_n)[(∇_x)(t_n,𝔵_n)]^* +(∇_x)(t_n,𝔵_n)n(x_n-𝔵_n)^*)) ]=lim sup_n→∞[ ⟨σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n) n(x_n-𝔵_n), (∇_x)(t_n,x_n) ⟩ - ⟨σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n) n(x_n-𝔵_n), (∇_x)(t_n,𝔵_n) ⟩]= lim sup_n→∞[ ⟨(σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n) -σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)) n(x_n-𝔵_n),(∇_x)(t_n,x_n) ⟩ + ⟨σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n) n(x_n-𝔵_n), (∇_x)(t_n,x_n)-(∇_x)(t_n,𝔵_n) ⟩]≤lim sup_n→∞[ σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n) -σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)_F nx_n-𝔵_n (∇_x)(t_n,x_n) + σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)_Fnx_n-𝔵_n (∇_x)(t_n,x_n)-(∇_x)(t_n,𝔵_n)]=0.Next observe that the fact that(0,T)× O ∋ (s,y)↦σ(s,y)[σ(s,y)]^*/(s,y) (Hess_x)(s,y) ∈^d× d is continuousand the assumption that lim sup_n→∞[t_n-t_0 +x_n-x_0]=0 andlim sup_n→∞[√(n)x_n-𝔵_n]=0 show thatlim sup_n→∞|Tr( σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n) (Hess_x)(t_n,𝔵_n) -σ(t_0,x_0)[σ(t_0,x_0)]^*(t_0,x_0) (Hess_x)(t_0,x_0)) |= lim sup_n→∞|Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n) (Hess_x)(t_n,x_n) -σ(t_0,x_0)[σ(t_0,x_0)]^*(t_0,x_0) (Hess_x)(t_0,x_0)) | =0.The fact that0< r_0= lim inf_n→∞r_n-𝔯_n = lim sup_n→∞r_n-𝔯_n≤sup_n∈ (r_n +𝔯_n) < ∞ therefore ensures that lim sup_n→∞[ 12Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n)r_n (Hess_x)(t_n,x_n) -σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)𝔯_n (Hess_x)(t_n,𝔵_n) )]=12lim sup_n→∞[ (r_n-𝔯_n) Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n) (Hess_x)(t_n,x_n)) +𝔯_𝔫Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n) (Hess_x)(t_n,x_n) -σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n) (Hess_x)(t_n,𝔵_n) ) ]= r_02(t_0,x_0)Tr(σ(t_0,x_0) [σ(t_0,x_0)]^* (Hess_x)(t_0,x_0)).Combining this with (<ref>) and (<ref>) ensures thatlim sup_n→∞[ 12Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n)((t_n,x_n)n A_n+n(x_n-𝔵_n) [(∇_x)(t_n,x_n)]^*) +(∇_x)(t_n,x_n) n(x_n-𝔵_n)^* +r_n(Hess_x)(t_n,x_n)) -12Tr( σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)((t_n,𝔵_n) n 𝔄_n+n(x_n-𝔵_n) [(∇_x)(t_n,𝔵_n)]^*) +(∇_x)(t_n,𝔵_n) n(x_n-𝔵_n)^* +𝔯_n (Hess_x)(t_n,𝔵_n)) ]≤r_02(t_0,x_0)Tr( σ(t_0,x_0) [σ(t_0,x_0)]^* 
(Hess_x)(t_0,x_0)).Next note thatthe fact that (0,T)× O∋(s,y) ↦1/(s,y)(∂/∂ t)(s,y)∈ is continuous and the assumption that0<r_0 =lim inf_n→∞ (r_n-𝔯_n) =lim sup_n→∞ (r_n-𝔯_n) ≤sup_n∈(r_n +𝔯_n)<∞ prove thatlim sup_n→∞[ r_n(t_n,x_n) (∂∂ t)(t_n,x_n) - 𝔯_n(t_n,𝔵_n) (∂∂ t)(t_n,𝔵_n)]=lim sup_n→∞[ r_n-𝔯_n(t_n,x_n) (∂∂ t)(t_n,x_n) +𝔯_n (1(t_n,x_n) (∂∂ t)(t_n,x_n) - 1(t_n,𝔵_n) (∂∂ t)(t_n,𝔵_n)) ]=r_0(t_0,x_0)(∂∂ t)(t_0,x_0). Furthermore, observe that (<ref>) andthe fact that lim sup_n→∞[t_n-t_0 +x_n-x_0]=0=lim sup_n→∞[√(n)x_n-𝔵_n] ensure that lim sup_n→∞[nμ(t_n,x_n)-μ(t_n,𝔵_n) x_n-𝔵_n] =0.This, the Cauchy-Schwarz inequality, the fact that (0,T)× O∋ (s,y)↦⟨μ(s,y)/(s,y), (∇_x)(s,y) ⟩∈ is continuous,andthe assumption that 0<r_0 =lim inf_n→∞ (r_n-𝔯_n) =lim sup_n→∞ (r_n-𝔯_n) ≤sup_n∈(r_n +𝔯_n)<∞ imply thatlim sup_n→∞[ 1/(t_n,x_n)⟨μ(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n) +r_n(∇_x)(t_n,x_n) ⟩ -1/(t_n,𝔵_n)⟨μ(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +𝔯_n(∇_x)(t_n,𝔵_n) ⟩]= lim sup_n→∞[ ⟨μ(t_n,x_n)-μ(t_n,𝔵_n), n(x_n-𝔵_n)⟩ +r_n⟨μ(t_n,x_n)/(t_n,x_n), (∇_x)(t_n,x_n)⟩ - 𝔯_n⟨μ(t_n,𝔵_n)/(t_n,𝔵_n), (∇_x)(t_n,𝔵_n)⟩]≤lim sup_n→∞[ μ(t_n,x_n)-μ(t_n,𝔵_n) nx_n-𝔵_n]+lim sup_n→∞[ (r_n-𝔯_n) ⟨μ(t_n,x_n)/(t_n,x_n), (∇_x)(t_n,x_n) ⟩]+lim sup_n→∞[ 𝔯_n (⟨μ(t_n,x_n)/(t_n,x_n), (∇_x)(t_n,x_n)⟩ -⟨μ(t_n,𝔵_n)/(t_n,𝔵_n), (∇_x)(t_n,𝔵_n)⟩)]= r_0/(t_0,x_0)⟨μ(t_0,x_0), (∇_x)(t_0,x_0)⟩.Next note that the assumption that f∈ C([0,T]× O××^d,) shows that for all compact 𝒦⊆[0,T]× O ××^d it holds thatlim sup_(0,∞)∋ ε→ 0 [sup( {f(s_1,y_1,a_1,w_1)- f(s_2,y_2,a_2,w_2) (s_1,y_1,a_1,w_1),(s_2,y_2,a_2,w_2)∈𝒦,s_1-s_2≤ε, y_1-y_2≤ε, (a_1,w_1) -(a_2,w_2)≤ε}∪{0})]=0. Moreover, observe that the assumption that for alls∈ [0,T], y∈ O, a,b∈,v,w∈^d it holds that f(s,y,a,v)-f(s,y,b,w)≤ (a,v) -(b,w) ensures that for all n∈ it holds thatf(t_n,x_n,r_n(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n)+r_n(∇_x)(t_n,x_n))V(t_n,x_n) -f(t_n,𝔵_n,𝔯_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n)+𝔯_n(∇_x)(t_n,𝔵_n))V(t_n,𝔵_n)≤f(t_n,x_n,r_n(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n)+r_n(∇_x)(t_n,x_n))(t_n,x_n) - f(t_n,𝔵_n,r_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n)+r_n(∇_x)(t_n,𝔵_n))(t_n,𝔵_n) +f(t_n,𝔵_n,r_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +r_n(∇_x)(t_n,𝔵_n))(t_n,𝔵_n) -f(t_n,𝔵_n,𝔯_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +𝔯_n(∇_x)(t_n,𝔵_n))(t_n,𝔵_n)≤f(t_n,x_n,r_n(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n)+r_n(∇_x)(t_n,x_n))(t_n,x_n) - f(t_n,𝔵_n,r_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n)+r_n(∇_x)(t_n,𝔵_n))(t_n,𝔵_n)+L(t_n,𝔵_n) (r_n(t_n, 𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +r_n(∇_x)(t_n,𝔵_n))-(𝔯_n (t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +𝔯_n(∇_x)(t_n,𝔵_n)).This and (<ref>) demonstrate thatlim sup_n→∞[ f(t_n,x_n,r_n(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n) +r_n(∇_x)(t_n,x_n))(t_n,x_n) -f(t_n,𝔵_n,𝔯_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +𝔯_n(∇_x)(t_n,𝔵_n))(t_n,𝔵_n)]≤lim sup_n→∞[Lr_n-𝔯_n +Lr_n-𝔯_n(∇_x )(t_n,𝔵_n)(t_n,𝔵_n)] = L r_0(1 + (∇_x )(t_0,x_0)(t_0,x_0)).Combing this with (<ref>),(<ref>), (<ref>)(<ref>), (<ref>), (<ref>), and proves thatlim sup_n→∞ [ H(t_n,x_n,r_n,n(x_n-𝔵_n),nA_n) -H(t_n,𝔵_n,𝔯_n, n(x_n-𝔵_n),n𝔄_n)]= lim sup_n→∞[ r_n(t_n,x_n)(∂∂ t)(t_n,x_n) -𝔯_n(t_n,𝔵_n)(∂∂ t)(t_n,𝔵_n) + 12Tr( σ(t_n,x_n)[σ(t_n,x_n)]^*(t_n,x_n)((t_n,x_n)n A_n+n(x_n-𝔵_n) [(∇_x)(t_n,x_n)]^*) +(∇_x)(t_n,x_n) n(x_n-𝔵_n)^* +r_n(Hess_x)(t_n,x_n)) -12Tr( σ(t_n,𝔵_n)[σ(t_n,𝔵_n)]^*(t_n,𝔵_n)((t_n,𝔵_n) n 𝔄_n+n(x_n-𝔵_n) [(∇_x)(t_n,𝔵_n)]^*) +(∇_x)(t_n,𝔵_n) n(x_n-𝔵_n)^* +𝔯_n (Hess_x)(t_n,𝔵_n)) +1(t_n,x_n)⟨μ(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n) +r_n(∇_x)(t_n,x_n) ⟩ -1(t_n,𝔵_n)⟨μ(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) +𝔯_n(∇_x)(t_n,𝔵_n) ⟩ +f(t_n,x_n,r_n(t_n,x_n), (t_n,x_n)n(x_n-𝔵_n) +r_n(∇_x)(t_n,x_n))(t_n,x_n) -f(t_n,𝔵_n,𝔯_n(t_n,𝔵_n), (t_n,𝔵_n)n(x_n-𝔵_n) 
+𝔯_n(∇_x)(t_n,𝔵_n))(t_n,𝔵_n)] ≤r_0(t_0,x_0)[ (∂∂ t)(t_0,x_0) +12Tr(σ(t_0,x_0) [σ(t_0,x_0)]^*(Hess_x)(t_0,x_0) +⟨μ(t_0,x_0),(∇_x)(t_0,x_0)⟩ +L(t_0,x_0) +L(∇_x)(t_0,x_0)] ≤ 0. Corollary <ref>, (<ref>), and (<ref>) therefore demonstrate that v_1≤ v_2 andv_2≤ v_1. This implies v_1=v_2. Hence, we obtain that u_1=u_2. The proof of Proposition <ref> is thus complete.§ BISMUT-ELWORTHY-LI TYPE FORMULAIn this section we derive aBismut-Elworthy-Li type formula thatholds under certain assumptions. To achieve this, we work with results from the Malliavin calculus to establish the derivative representation in (<ref>). We therefore follow the notation of <cit.> and denote the Malliavin derivative of a random variable X∈𝔻^1,2 by {D_t X t∈ [0,T]} where 𝔻^1,2⊆ L^2(Ω, ^d)denotes the space of Malliavin differentiable random variables. For the Skorohod integral of a stochastic process u∈ L^2([0,T]×Ω, ^d) we write ∫_0^T u_rδ W_r.To prove the Bismut-Elworthy-Li type formula in Theorem <ref>we need the followingresults, Lemma <ref> and Lemma <ref>. Lemma <ref> establishes a representation for theMalliavin derivative of a solution of the SDE in (<ref>) under the global monotonicityassumption. The proof of this lemma is based on the ideasin <cit.>. Let d,m∈, c,T∈(0,∞), let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·_F^d× m→[0,∞)be the Frobenius norm on ^d× m, let (Ω, ℱ, ℙ, (𝔽_s)_s ∈ [0,T])be a filtered probabilityspace satisfying the usualconditions, let W=(W^1,W^2,…, W^m)[0,T]×Ω→^mbe a standard (𝔽_s)_s ∈ [0,T]-Brownian motion, let μ=(μ_1,μ_2,…,μ_d) ∈C^0,1([0,T] ×^d, ^d) and σ=(σ_ij)_i∈{1,2,…,d}, j∈{1,2,…,m}∈ C^0,1 ([0,T] ×^d, ^d × m) satisfy for all s∈[0,T], x, y ∈^d thatmax{⟨ x-y, μ(s,x)-μ(s,y)⟩, 12 σ(s,x)-σ(s,y)^2_F}≤c2x-y^2,let X = ((X^(1)_s,X^(2)_s, …,X^(d)_s))_s∈ [0,T] [0,T] ×Ω→^dbe an adapted stochastic processwith continuous sample paths satisfying that[X_0^2]<∞ and for all s ∈ [0,T] it holds a.s. thatX_s = X_0 +∫_0^s μ(r, X_r)ṛ+ ∫_0^s σ(r, X_r)Ẉ_r,and let Y=(Y^(i,j)_s)_i,j∈{1,2,…, d}, s∈ [0,T][0,T]×Ω→^d× d bean adapted stochastic process with continuous sample pathssatisfying that for all s∈[0,T]it holds a.s. thatY^(i,j)_s = δ_ij +∫_0^s ∑_k=1^d (∂μ_i/∂ x_k)(r,X_r)Y^(k,j)_r ṛ +∑_l=1^m ∫_0^s ∑_k=1^d (∂σ_il/∂ x_k)(r, X_r) Y^(k,j)_r Ẉ^l_r.Then there exists an adapted stochastic process Y^-1 [0,T]×Ω→^d× d with continuous sample paths satisfying that for allt∈[0,T], s∈ [t,T] it holds a.s. that Y_s Y^-1_s = Y^-1_s Y_s =I_d andD_t X_s = Y_s Y^-1_tσ(t, X_t). Throughout this proof letZ=(Z^(i,j)_s)_i,j∈{1,2,…,d}, s∈ [0,T] [0,T]×Ω→^d× d be an adapted stochastic process with continuous sample paths satisfying thatfor all i,j∈{1,2,…,d}, s∈[0,T] it holds a.s. thatZ^(i,j)_s= δ_ij -∫_0^s ∑_α=1^d Z^(i,α)_r[ (∂μ_α/∂ x_j)(r,X_r) - ∑_n=1^m ∑_p=1^d (∂σ_α n/∂ x_p)(r,X_r)(∂σ_pn/∂ x_j)(r,X_r) ] ṛ -∑_l=1^m ∫_0^s ∑_α=1^dZ^(i,α)_r(∂σ_α l/∂ x_j)(r, X_r)Ẉ^l_r.Observe that (<ref>), (<ref>), and Itô's lemmaensure thatfor all i,k∈{1,2,…, d}, s∈[0,T] it holds a.s. 
that∑_j=1^d Z^(i,j)_s Y^(j,k)_s=∑_j=1^d[δ_ijδ_jk + ∫_0^s Z^(i,j)_r ∑_α=1^d(∂μ_j/∂ x_α)(r,X_r)Y^(α,k)_r ṛ +∑_l=1^m ∫_0^s Z^(i,j)_r∑_α=1^d (∂σ_jl/∂ x_α)(r, X_r) Y^(α,k)_r Ẉ^l_r -∫_0^s ∑_α=1^d Z^(i,α)_r[ (∂μ_α/∂ x_j)(r,X_r) - ∑_n=1^m ∑_p=1^d (∂σ_α n/∂ x_p)(r,X_r)(∂σ_pn/∂ x_j)(r,X_r) ] Y^(j,k)_r ṛ -∑_l=1^m ∫_0^s ∑_α=1^dZ^(i,α)_r(∂σ_α l/∂ x_j)(r, X_r) Y^(j,k)_rẈ^l_r -∫_0^s ∑_α=1^d Z^(i,α)_r ∑_n=1^m ∑_p=1^d (∂σ_α n/∂ x_j)(r,X_r) (∂σ_j n/∂ x_p)(r,X_r) Y^(p,k)_r ṛ]= δ_ik.Combining this with, e.g., <cit.> and <cit.> (applied for every s∈[0,T]with n↶ d, A↶ Y_s, B↶ Z_s in the notation of<cit.>) implies thatfor all s∈[0,T] it holds a.s. thatY_s Z_s = I_d = Z_s Y_s. Next note that <cit.> (applied with p↶ 2, θ↶ (Ω∋ω↦ X_0(ω) ∈^d), b↶ ([0,T]×Ω×^d∋(s,ω,x)↦μ(s,x)∈^d),σ↶ ([0,T]×Ω×^d∋(s,ω,x)↦σ(s,x)∈^d× d) in the notation of <cit.>), the assumption that μ∈ C^0,1([0,T] ×^d, ^d), σ∈ C^0,1 ([0,T] ×^d,^d × m), and (<ref>) demonstrate that for all i∈{1,2,…, d}, j∈{1,2,…,m}, t∈[0,T], s∈[t,T] it holds a.s. that D^j_t X^(i)_s =σ_ij(t,X_t) +∫_t^s ∑_k=1^d (∂μ_i/∂ x_k)(r,X_r) D^j_t X^(k)_r ṛ + ∑_l=1^m ∫_t^s∑_k=1^d (∂σ_il/∂ x_k)(r,X_r) D^j_t X^(k)_r Ẉ^l_r.Moreover, observe that (<ref>) and the fact that for all s∈[0,T] it holds a.s thatY_s Z_s = I_d = Z_s Y_s imply that for alli∈{1,2,…,d}, j∈{1,2,…,m}, t∈[0,T], s∈[t,T] it holds a.s. thatσ_ij(t,X_t) +∫_t^s ∑_k=1^d (∂μ_i/∂ x_k)(r,X_r) ∑_n=1^d∑_p=1^d Y^(k,n)_r Z^(n,p)_tσ_pj(t,X_t) ṛ +∑_l=1^m∫_t^s∑_k=1^d (∂σ_il/∂ x_k)(r,X_r) ∑_n=1^d∑_p=1^d Y^(k,n)_r Z^(n,p)_t σ_pj(t,X_t) Ẉ^l_r= σ_ij(t,X_t) +∑_n=1^d∑_p=1^d [Y^(i,n)_s - Y^(i,n)_t]Z^(n,p)_t σ_pj(t,X_t)= ∑_n=1^d∑_p=1^d Y^(i,n)_t Z^(n,p)_t σ_pj(t,X_t)+∑_n=1^d∑_p=1^d [Y^(i,n)_s - Y^(i,n)_t]Z^(n,p)_t σ_pj(t,X_t)=∑_n=1^d ∑_p=1^d Y^(i,n)_s Z^(n,p)_t σ_pj(t,X_t).This, (<ref>), and the fact thatlinear SDEs are pathwiseuniqueestablish (<ref>). The proof of Lemma <ref> is thus complete. The following Lemma is a well-known result on the connection between uniform convergence and convergence of the derivative, generalized to d dimensions. Lemma <ref> is adirect consequence of<cit.>. Let d∈, let O⊆^d open, let f_n^d→, n∈_0, satisfythat (f_n)_n∈ converges pointwise to f_0 on O and (∇ f_n)_n∈ converges uniformly on O. Then it holds that (f_n)_n∈converges uniformly on O and for all x∈ O it holds that(∇ f_0)(x)= lim_n→∞ (∇ f_n)(x).Throughout this proof let 𝐞_1,𝐞_2, …,𝐞_d∈^dsatisfy that 𝐞_1=(1,0,…,0),𝐞_2=(0,1,0,…,0), …, 𝐞_d=(0,…,0,1). For every j∈{1,2,…,d}, x∈ O let y^x_j∈ O,λ^x_j∈ (0,1) satisfy y_j^x-x=λ^x_j𝐞_j andfor all t∈ [0,1] that x+λ^x_j 𝐞_j ∈ O and let g^x_j [0,1]→^d satisfy for all t∈ [0,1] that g^x_j(t)= (1-t)x+ty^x_j. 
Note that for allj∈{1,2…,d}, x∈ O it holds thatg^x_j(t)= (1-t)x+ty^x_j = x+t(y^x_j-x) = x+tλ^x_j 𝐞_j.Furthermore, observe that the assumption that (f_n)_n∈ converges pointwise to f_0 on O, the hypothesis that (∇ f_n)_n∈ converges uniformly on O, and the fact that for allj∈{1,2,…,d}, t∈ [0,1], x∈ O it holds that g^x_j(t)∈ Oensure that for all j∈{1,2,…,d}, x∈ O it holds that (f_n∘ g^x_j)_n∈ converges pointwise to f_0∘ g^x_j on [0,1] and (∇ f_n∘ g^x_j)_n∈ converges uniformly on [0,1].This and <cit.> (applied for every j∈{1,2, …,d},x∈ O witha↶ 0, b↶ 1, (f_n)_n∈↶ (f_n∘ g^x_j)_n∈ in the notation of<cit.>) demonstrate thatfor all j∈{1,2, …,d}, t∈ [0,1], x∈ O it holds that (f_n∘ g^x_j)_n∈ converges uniformly to f∘ g^x_j and(f∘ g^x_j)'(t) = lim_n→∞(f_n∘ g^x_j)'(t).The chain rule and the fact that for all j∈{1,2,…,d} t∈ [0,1], x∈ O it holds that (g^x_j)'(t)= λ^x_j 𝐞_j thereforedemonstrate that for allj∈{1,2, …,d}, t∈ [0,t], x∈ O it holds that(∇ f)(g^x_j(t))λ^x_j 𝐞_j =(∇ f)(g^x_j(t))(g^x_j)'(t) =(f∘ g^x_j)'(t)= lim_n→∞(f_n∘ g^x_j)'(t) = lim_n→∞ (∇ f_n)(g^x_j(t))(g^x_j)'(t) = lim_n→∞ (∇ f_n)(g^x_j(t))λ^x_j 𝐞_j.Hence, we obtain that for all j∈{1,2, …, d}, x∈ Oit holds that(∇ f)(g^x_j(0))λ^x_j 𝐞_j =(∇ f)(x) λ^x_j 𝐞_j = lim_n→∞ (∇ f_n)(x) λ^x_j 𝐞_j. This proves that for all j∈{1,2,…, d}, x∈ O it holds that(∂ f/∂ x_j)(x) =lim_n→∞(∂ f_n/∂ x_j)(x).The proof of Lemma <ref> is thus complete.The following theorem, Theorem <ref>,is a Bismut-Elworthy-Li type formula for continuousL^2-functions under global monotonicity assumption on the coefficients of the considered SDE. Let d ∈,c∈ [0,∞), α, T ∈ (0, ∞), t∈[0,T), let O⊆^d be an open set, let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·_F^d× d→ [0,∞) be the Frobenius norm on ^d× d, let (Ω, ℱ, ℙ, (𝔽_s)_s ∈ [0,T]) be a filtered probability space satisfying the usual conditions,let W[0,T] ×Ω→^dbe a standard (𝔽_s)_s ∈ [0,T]-Brownian motion, let μ∈C^0,1([0,T] × O, ^d),σ∈ C^0,1 ([0,T] × O, ^d × d) satisfy for all s∈[t,T], x, y ∈ O, v∈^d thatmax{⟨ x-y,μ(s,x)-μ(s,y)⟩,12σ(s,x)-σ(s,y)_F^2}≤c2x-y^2and v^* σ(s,x) (σ(s,x))^* v ≥αv^2,for every x ∈ O let X^x = (X^x_s)_s ∈ [t,T] [t,T] ×Ω→ O be an(𝔽_s)_s ∈ [t,T]-adapted stochastic process with continuous sample paths satisfying that for all s ∈ [t,T] it holds a.s. thatX^x_s = x + ∫_t^s μ(r, X^x_r)ṛ + ∫_t^s σ(r, X^x_r)Ẉ_r,assume for all ω∈Ω that ([t,T] × O ∋ (s,x) ↦ X^x_s(ω) ∈^d ) ∈ C^0,1([t,T] × O, O), let f∈ C(O,)∩ L^2(O,), let u O → satisfy for all x ∈ O thatu(x) =[f(X^x_T)],and for every x ∈ O let Z^x = (Z^x_s)_s ∈ (t,T] (t,T] ×Ω→^dbe an (𝔽_s)_s ∈ (t,T]-adapted stochastic processwith continuous sample paths satisfying thatfor all s ∈ (t,T] it holds a.s. thatZ^x_s = 1/s-t∫_t^s (σ(r, X^x_r))^-1 (∂/∂ x X^x_r)Ẉ_r. Then*for allx ∈ O it holds that[f(X^x_T)Z^x_T]<∞, *it holds that u ∈ C^1(O, ), and*for all x ∈ Oit holds that(∇ u )(x) = [f(X^x_T) Z^x_T].First note that <cit.>proves that for allx ∈ O it holds that[Z^x_T^2] ≤d/α(T-t)^2∫_t^Texp(2(r-t)c)ṛ.Next observe that the assumption that f∈ L^2(O,) ensures that for all x∈ O it holds that[f(X^x_T)^2] ≤sup_y∈^df(y)^2 ≤∫_y∈^df(y)^2 ỵ < ∞.The Cauchy-Schwarz inequality and(<ref>) hence show that for all x∈ O it holds that[f(X^x_T)Z^x_T] ≤[f(X^x_T)Z^x_T] ≤([f(X^x_T)^2])^1/2([Z^x_T^2])^1/2 <∞.This establishesitem <ref>. Next we prove items <ref> and <ref> in two steps.Step 1: In addition to the assumptions of Theorem <ref> we assume in step 1 thatf∈ C_c^∞(O,). 
Observe that the assumption that f∈ C_c^∞(O,)ensures that there exists L∈(0,∞) which satisfies for all x,y∈ O thatf(x)-f(y)≤ L x-y.This implies that for all x∈ O it holds that(∇ f)(x)≤ L.The chain rule,the fundamental theorem of calculus, Jensen's inequality,Fubini's theorem,and <cit.> hence demonstrate that for all h∈∖{0}, j∈{1,2,…, d}, x∈ O it holds that[|f(X^x+h𝐞_j_T)-f(X^x_T)/h|^2] =[|∫_0^1(∇ f)(X^x+λ h 𝐞_j_T)(∂/∂ x_jX^x+λ h 𝐞_j_T)λ̣|^2]≤[∫_0^1 (∇ f)(X^x+λ h 𝐞_j_T)(∂/∂ x_jX^x+λ h 𝐞_j_T )^2 λ̣]= ∫_0^1[ (∇ f)(X^x+λ h 𝐞_j_T)(∂/∂ x_jX^x+λ h 𝐞_j_T)^2] λ̣≤∫_0^1[(∇ f)(X^x+λ h 𝐞_j_T)^2∂/∂ x_jX^x+λ h 𝐞_j_T^2 ] λ̣≤ L^2 ∫_0^1[ ∂/∂ x_jX^x+λ h 𝐞_j_T^2 ] λ̣≤ L^2 ∫_0^1 exp(2(T-t)c) λ̣= L^2 exp(2(T-t)c).In addition, observe thatthe chain rule, the assumption that f∈ C^∞(O,), andthe fact that for all ω∈Ω, s∈[t,T] it holds that (O ∋ x ↦ X^x_s(ω)∈^d)∈ C^1(O,O) ensure that for all j∈{1,2,…,d}, x∈ O, ω∈Ωit holds a.s. thatlim_∖{0}∋ h→ 0f(X^x+h𝐞_j_T(ω))-f(X^x_T(ω))/h =∂/∂ x_j (f(X^x_T(ω)))=(∇ f)(X^x_T(ω)) (∂/∂ x_j(X^x_T(ω))).This, (<ref>), and the Vitali convergence theorem demonstrate that for allj∈{1,2,…,d}, x∈ O it holds that0=lim_∖{0}∋ h → 0[|f(X^x+h 𝐞_j_T) -f(X^x_T)/h - (∇ f)(X^x_T) (∂/∂ x_jX^x_T) | ]≥lim sup_∖{0}∋ h → 0| [f(X^x+h 𝐞_j_T) -f(X^x_T)/h - (∇ f)(X^x_T) (∂/∂ x_jX^x_T)]|= lim sup_∖{0}∋ h → 0| u(x+h𝐞_j)-u(x)/h -[(∇ f)(X^x_T) (∂/∂ x_jX^x_T)]| ≥ 0.This proves that for all j∈{1,2,…,d},x∈ O it holds thatlim_∖{0}∋ h → 0u(x+h𝐞_j)-u(x)/h = [(∇ f)(X^x_T) (∂/∂ x_jX^x_T)].In addition, note that<cit.> and the fact that for all x∈ O it holds that(∇ f)(x)≤ L demonstrate thatfor all h∈(0,∞), j∈{1,2,…, d}, x∈ O it holds that[(∇ f)(X^x+h𝐞_j_T)(∂/∂ x_jX^x+h𝐞_j_T)^2 ] ≤[(∇ f)(X^x+h𝐞_j_T)^2 ∂/∂ x_jX^x+h𝐞_j_T^2 ]≤ d L^2 [ ∂/∂ x_jX^x+h𝐞_j_T^2 ] ≤ L^2 exp(2(T-t)c).Moreover, observe that the assumption that f∈ C^∞(O,) and the fact that for all s∈[t,T], ω∈Ω it holds that(O ∋ x ↦ X^x_s(ω)∈^d)∈ C^1(O,O) show that for all j∈{1,2,…, d}, x∈ O, ω∈Ω it holds thatlim_∋ h → 0[ (∇ f)(X^x+h𝐞_j_T(ω))(∂/∂ x_jX^x+h𝐞_j_T(ω))] = (∇ f)(X^x_T(ω))(∂/∂ x_jX^x_T(ω)).Combining this, (<ref>), and the Vitali convergence theoremshows that for all x∈ O,j∈{1,2,…,d}it holds that0= lim_∋ h→ 0[ | (∇ f)(X^x+h𝐞_j_T)(∂/∂ x_jX^x+h𝐞_j_T) -(∇ f)(X^x_T)(∂/∂ x_jX^x_T)|]≥lim sup_∋ h→ 0|[ (∇ f)(X^x+h𝐞_j_T)(∂/∂ x_jX^x+h𝐞_j_T) -(∇ f)(X^x_T)(∂/∂ x_jX^x_T) ]|= lim sup_∋ h→ 0|[ (∇ f)(X^x+h𝐞_j_T)(∂/∂ x_jX^x+h𝐞_j_T)] -[(∇ f)(X^x_T)(∂/∂ x_jX^x_T) ]|≥ 0.This proves that for all j∈{1,2,…,d} it holds that (O ∋ x↦ (∇ f)(X^x_T)(∂/∂ x_jX^x_T) ∈ L^1(ℙ,)) ∈ C^0(O,L^1(ℙ,)). This and (<ref>) demonstrate that for all j∈{1,2,…,d}, x∈ O it holds that u∈ C^1(O,) and∂ u/∂ x_j(x) =[(∇ f)(X^x_T)(∂/∂ x_jX^x_T)]. Next observe that <cit.> (applied with T↶ T-t,p↶ 2, m↶ d, θ↶ (Ω∋ω↦ x ∈ O), b↶ ([0,T-t]×Ω× O ∋(s,ω,x)↦μ(t+s,x)∈^d),σ↶ ([0,T-t]×Ω×O ∋(s,ω,x)↦σ(t+s,x)∈^d× d) in the notation of <cit.>), the assumption that μ∈ C^0,1([0,T]× O, ^d),σ∈ C^0,1([0,T]× O,^d× d), and (<ref>) ensure that for all s∈[t,T], x∈ O it holds thatX^x_s∈𝔻^1,2. In addition, note thatLemma <ref> (applied for every x∈ O with m↶ d, T↶ T-t,X_0 ↶ x, X↶ ([0,T-t]×Ω∋(s,ω)↦ X^x_t+s∈ O), Y↶ ([0,T-t]×Ω∋ (s,ω)↦ (∂/∂ xX^x_t+s)(ω)∈^d× d) in the notation of Lemma <ref>) shows that for all x ∈ O there exists a stochastic process (∂/∂ x X^x)^-1 =((∂/∂ x X^x_r)^-1)_r∈[t,T] [t,T]×Ω→^d× d which satisfies that for all r ∈ [t,T], s∈[t,r] it holds a.s. 
that(∂/∂ x X^x_r)(∂/∂ x X^x_r)^-1 = I_d =(∂/∂ x X^x_r)^-1(∂/∂ x X^x_r) and D_s X^x_r = (∂/∂ x X^x_r) (∂/∂ x X^x_s)^-1σ(s, X^x_s).This,<cit.> (applied for every r∈[t,T], x∈ Owith m↶ d, φ↶ f, p↶ 2, F↶ X^x_r in the notation of <cit.>),and the assumption thatf∈ C^∞_c(O,) demonstrate thatfor all r ∈ [t,T], x ∈ O it holds that f(X^x_r)∈𝔻^1,2 andfor all r ∈ [t,T], s∈[t,r], x ∈ O it holds a.s. thatD_s ( f(X^x_r) ) = (∇ f)(X^x_r) D_s X^x_r=(∇ f)(X^x_r) (∂/∂ x X^x_r)(∂/∂ x X^x_s)^-1σ(s, X^x_s).Integrating both sides of (<ref>) showsthat for all r ∈ (t,T], x ∈ O it holds a.s. that(∇ f)(X^x_r) (∂/∂ x X^x_r)= 1/r-t∫_t^r D_s ( f(X^x_r) ) (σ(s, X^x_s))^-1(∂/∂ x X^x_s)ṣ. Next note that the fact thatσ∈ C^0,0 ([0,T] × O, ^d × d) implies thatσ^-1∈C^0,0 ([0,T] × O, ^d × d). The assumption that for every x∈ O it holds that X^x is an (𝔽_s)_s ∈ [t,T]-adapted process therefore shows that for all x∈ O it holds that((σ(s,X^x_s))^-1(∂/∂ x X^x_s))_s∈[t,T] is an (𝔽_s)_s ∈ [t,T]-adapted process. Combining this with<cit.> (applied for every j∈{1,2,…, d} with u↶ ([0,T]×Ω∋ (s,ω)↦ (σ(t+s(T-t),X^x_t+s(T-t)))^-1 · (∂/∂ x X^x_t+s(T-t))𝐞_j ∈^d) in the notation of <cit.>) and (<ref>) demonstrates that for all r∈[t,T], x∈ O it holds a.s. that∫_t^r (σ(s, X^x_s))^-1(∂/∂ x X^x_s)δ W_s = ∫_t^r (σ(s, X^x_s))^-1(∂/∂ x X^x_s)Ẉ_s.The dualityproperty of the Skorohod integral (cf., e.g., <cit.> (applied with T↶ T-t, F↶ (Ω∋ω↦ f(X^x_T(ω))∈^d), u ↶ ([0,T-t]×Ω∋(s,ω)↦ (σ(t+s,X^x_t+s))^-1(∂/∂ xX^x_t+s)))) therefore shows that for all x∈ O it holds that[∫_t^T D_s ( f(X^x_T) ) (σ(s, X^x_s))^-1(∂/∂ x X^x_s) ṣ] = [ f(X^x_T) ∫_t^T (σ(s, X^x_s))^-1(∂/∂ x X^x_s)Ẉ_s ].This,(<ref>), and (<ref>) ensure that for all x ∈ O it holds that[ (∇ f)(X^x_T) (∂/∂ x X^x_T) ] = 1/T-t[∫_t^T D_s ( f(X^x_T) ) (σ(s, X^x_s))^-1(∂/∂ x X^x_s) ṣ] = 1/T-t[ f(X^x_T) ∫_t^T (σ(s, X^x_s))^-1(∂/∂ x X^x_s)Ẉ_s ] = [ f(X^x_T) Z^x_T].Combining this with (<ref>)proves that u∈ C^1(O,) and for all x∈ Oit holds that (∇ u)(x) = [f(X^x_T)Z^x_T]. Step 2: For the second step note that the fact that C^∞_c(O,) is dense in L^2(O,) (cf., e.g., <cit.>) ensures that there exist (f_n)_n∈⊆C^∞_c(O,) which satisfy for all x∈ O that lim sup_n→∞f_n(x)-f(x)=0.For every n∈ let u_n O→ satisfy for all x∈ O that u_n(x)=[f_n(X^x_T)]. Observe that (<ref>), the triangle inequality,the dominated convergence theorem, and the assumption that f∈ L^2(O,) show that for all x∈ O it holds thatlim sup_n→∞u_n(x)-u(x) = lim sup_n→∞[f_n(X^x_T)]-[f(X^x_T)]≤lim sup_n→∞[f_n(X^x_T)-f(X^x_T)] = [lim sup_n→∞f_n(X^x_T)-f(X^x_T)] =0.Next note that step 1 demonstrates that for all n∈, x∈ O it holds that u_n∈ C^1(O,) and (∇ u_n)(x) = [f_n(X^x_T)Z^x_T].Combining this with the triangle inequality and the Cauchy-Schwarz inequality proves that for all n∈, x∈ O it holds that(∇ u_n)(x) -[f(X^x_T)Z^x_T] =[f_n(X^x_T)Z^x_T] -[f(X^x_T)Z^x_T]≤[f_n(X^x_T) -f(X^x_T)Z^x_T] ≤([f_n(X^x_T) -f(X^x_T)^2])^1/2([Z^x_T^2])^1/2.The assumption that f∈ C(O,), the assumption that σ∈ C([0,T]× O,^d× d),and the fact that(f_n)_n∈⊆ C^∞(O,) therefore ensure that for all compact K⊆ O there exists x̃∈ K which satisfiessup_x∈ K(∇ u_n)(x) -[f(X^x_T)Z^x_T]≤([f_n(X^x̃_T) -f(X^x̃_T)^2])^1/2([Z^x̃_T^2])^1/2.Hence, we obtain that u_n converges uniformly to (O∋ x↦[f(X^x_T)Z^x_T]∈^d) on compact subsets of O. Combining this with(<ref>) and Lemma <ref> proves thatfor all x∈ O it holds that(∇ u)(x) = lim_n→∞ (∇ u_n)(x) = lim_n→∞[f_n(X^x_T)Z^x_T] = [f(X^x_T)Z^x_T].This establishes items <ref> and <ref>. 
The proof of Theorem <ref>is thus complete.§ EXISTENCE AND UNIQUENESS RESULT FOR VISCOSITY SOLUTIONS OF SEMILINEAR PDES WITH GRADIENT-DEPENDENT NONLINEARITIESIn this section we use the resultsfrom Section <ref> and <ref> to show thatthe unique viscosity solution of semilinear PDEs and the unique solution of their connected SFPEs coincide. The following theorem proves exactly this connection underdifferentiabilityand global monotonicity assumptions on μ and σ and a Lipschitz and continuity assumption on f. Theorem <ref>extends <cit.> to PDEs with gradient-dependentnonlinearities. Let d ∈,α, b, c, K, L, T∈ (0, ∞), let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·^d+1→[0,∞) be the standard Euclidean norm on ^d+1, let ·_F^d× d→ [0,∞) be the Frobenius norm on ^d× d, let O⊆^d be an open set, for every r ∈ (0, ∞) let K_r⊆ [0,T), O_r ⊆ O satisfy K_r=[0,max{T-1/r,0}] and O_r = {x ∈ Ox≤ rand { y ∈^dy-x< 1/r}⊆ O }, let (Ω, ℱ, ℙ, (𝔽_s)_s ∈ [0,T])be a filtered probability space satisfying the usual conditions,let W[0,T] ×Ω→^dbe a standard (𝔽_s)_s ∈ [0,T]-Brownian motion, let μ∈C^0,1([0,T] × O, ^d),σ∈ C^0,1 ([0,T] × O, ^d × d) satisfy for alls∈[0,T], x, y ∈ O, v∈^d thatmax{⟨ x-y,μ(s,x)-μ(s,y)⟩,12σ(s,x)-σ(s,y)_F^2}≤c2x-y^2and v^* σ(s,x) (σ(s,x))^* v ≥αv^2, assume for all r∈ (0,∞), j∈{1,2,…, d} thatsup({ ∂μ/∂ x(t,x)-∂μ/∂ x(t,y)_F +∂σ/∂ x_j(t,x) -∂σ/∂ x_j(t,y)_Fx-y t∈[0,T], x,y∈ O_r, x≠ y }∪{0}) <∞,for everyt∈ [0,T], x ∈ O let X^x_t = (X^x_t,s)_s ∈ [t,T] [t,T] ×Ω→ Obe an(𝔽_s)_s ∈ [t,T]-adaptedstochastic process with continuous sample paths satisfying that for alls ∈ [t,T] it holds a.s. thatX^x_t,s = x + ∫_t^s μ(r, X^x_t,r)ṛ+ ∫_t^s σ(r, X^x_t,r)Ẉ_r,assume for all t∈ [0,T], ω∈Ω that ([t,T] × O ∋ (s,x)↦ X^x_t,s(ω) ∈ O ) ∈ C^0,1([t,T] × O, O), for every t∈[0,T],x ∈ O let Z^x_t = (Z^x_t,s)_s ∈ (t,T] (t,T] ×Ω→^d+1be an (𝔽_s)_s ∈ (t,T]-adapted stochastic processwith continuous sample paths satisfying thatfor all s ∈ (t,T]it holds a.s. thatZ^x_t,s =[1; 1/s-t∫_t^s (σ(r, X^x_t,r))^-1 (∂/∂ x X^x_t,r)Ẉ_r ], let V ∈ C^1,2([0,T]× O,(0, ∞)) satisfy that for allt∈ [0,T], s∈[t,T], x∈ O it holds a.s. that (∂ V∂ t)(s, X^x_t,s) +⟨μ(s, X^x_t,s), (∇_x V)(s, X^x_t,s)⟩ +12Tr(σ(s, X^x_t,s)[σ(s, X^x_t,s)]^*(Hess_xV)(s, X^x_t,s))+12[(∇_x V)(s, X^x_t,s)]^* σ(s, X^x_t,s)^2V(s, X^x_t,s)≤ K V(s, X^x_t,s)+band (∂ V∂ t)(t,x) +⟨μ(t,x), (∇_x V)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess_xV)(t,x))+L(∇_x V)(t,x)≤ 0,let f ∈ C([0,T] × O ××^d, ) ∩ L^2([0,T] × O ××^d, ), g ∈ C(O, ) ∩ L^2(O, ) satisfy for all t ∈ [0,T], x_1,x_2 ∈ O,a_1,a_2∈, w_1,w_2 ∈^d that | f(t,x_1,a_1,w_1)- f(t,x_2,a_2,w_2) |≤ L (a_1, w_1) -(a_2, w_2), and assume that inf_r ∈ (0, ∞) [ sup_t ∈ [0,T)∖ K_r sup_x ∈ O ∖ O_r(|g(x) |/V(T,x)+| f(t,x,0,0) |/V(t,x)√(T-t))]= 0, lim inf_r→∞ [inf_t∈ [0,T]inf_x∈ O∖ O_r V(t,x)]=∞,andinf_t∈[0,T] inf_x∈ O V(t,x)>0. 
Then* there exists a uniquev∈ C([0,T]× O,) ∩ C^0,1([0,T)× O,) which satisfies for allt∈[0,T), x∈ O that lim sup_r →∞[ sup_s ∈ [0,T)∖ K_rsup_y ∈ O ∖ O_r( (v,∇_x v)(s,y)/V(s,y)√(T-s) ) ]= 0, [g(X^x_t,T)Z^x_t,T + ∫_t^Tf(r, X^x_t,r,v(r, X^x_t,r), (∇_x v)(r,X^x_t,r)) Z^x_t,r ṛ ] <∞,v(T,x)=g(x), and(v, ∇_x v)(t,x) =[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, v(r, X^x_t,r), (∇_x v)(r,X^x_t,r))Z^x_t,r ṛ], *there exists a unique viscosity solutionu∈{𝐮∈ C([0,T]× O,) ∩ C^0,1([0,T)× O,)lim sup_r →∞ [sup_t ∈ [0,T)∖ K_rsup _x ∈ O ∖ O_r ((𝐮,∇_x 𝐮)(t,x) /V(t,x) √(T-t) ) ] = 0}of(∂ u∂ t)(t,x) +⟨μ(t,x), (∇_x u)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x u)(t,x))+f(t,x,u(t,x), (∇_x u)(t,x)) =0with u(T,x)=g(x) for (t,x)∈ (0,T)× O, and* for all t∈[0,T], x∈ O it holds that u(t,x)=v(t,x).First note that <cit.> (<ref>), and (<ref>) prove that there existsa uniquew=(w_1,w_2,…, w_d+1) ∈ C([0,T)× O,^d+1) which satisfies*that lim sup_r →∞ [sup_s ∈ [0,T)∖ K_rsup_y ∈ O ∖ O_r (w(s,y) /V(s,y) √(T-s) ) ]= 0,*for all t∈ [0,T),x∈ O that[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r,w(r,X^x_t,r))Z^x_t,r ṛ] <∞,and*for all t∈ [0,T), x∈ O it holds thatw(t,x)=[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r,w(r,X^x_t,r))Z^x_t,r ṛ]. Let v [0,T]× O →satisfy for all t∈ [0,T), x∈ O that v(t,x)=w_1(t,x) and v(T,x)=g(x). Observe that the fact thatw ∈ C([0,T)× O,^d+1) implies that v ∈C([0,T)× O,). To prove that v iscontinuous in T let(t_n)_n∈⊆ [0,T) satisfylim sup_n→∞t_n-T=0.Note that <cit.> and (<ref>) imply that for allt∈ [0,T], s∈ [t,T], x∈ O it holds that E[V(s,X^x_t,s)] ≤ V(t,x).Combining this with Fubini's theoremand the assumption that for all t ∈ [0,T], x_1,x_2 ∈ O,a_1,a_2∈, w_1,w_2 ∈^d it holds that | f(t,x_1,a_1,w_1)- f(t,x_2,a_2,w_2) |≤ L (a_1, w_1) -(a_2, w_2) demonstrates that for all n∈, x∈ O it holds that[| ∫_t_n^T f(r,X^x_t_n,r,w(r,X^x_t_n,r))ṛ|] ≤[∫_t_n^T f(r,X^x_t_n,r,w(r,X^x_t_n,r)) ṛ]≤[ ∫_t_n^T [f(r,X^x_t_n,r,w(r,X^x_t_n,r)) -f(r,X^x_t_n,r,0,0) +f(r,X^x_t_n,r,0,0)] ṛ]≤[∫_t_n^T [L w(r,X^x_t_n,r) +f(r,X^x_t_n,r,0,0)] ṛ]=[∫_t_n^T [ L w(r,X^x_t_n,r) +f(r,X^x_t_n,r,0,0)V(r,X^x_t_n,r)√(T-r)V(r,X^x_t_n,r)√(T-r)] ṛ] ≤[sup_s∈ [0,T)sup_y∈ OL w(s,y) +f(s,y,0,0)V(s,y)√(T-s)] ∫_t_n^T E[V(r,X^x_t_n,r)]√(T-r) ṛ≤[sup_s∈ [0,T)sup_y∈ OL w(s,y) +f(s,y,0,0)V(s,y)√(T-s)] ∫_t_n^T V(t_n,x)√(T-r) ṛ= [sup_s∈ [0,T)sup_y∈ OL w(s,y) +f(s,y,0,0)V(s,y)√(T-s)][sup_s∈ [0,T] V(s,x)]2√(T-t_n).Item <ref>, the assumption that inf_r ∈ (0, ∞)[ sup_t ∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r(| f(t,x,0,0) |/V(t,x)√(T-t))]= 0, and the fact thatV∈ C^1,2([0,T]× O, (0,∞)) therefore show that for all x∈ O it holds thatlim sup_n→∞[| ∫_t_n^T f(r,X^x_t_n,r,w(r,X^x_t_n,r))ṛ|] =0.In addition, note that <cit.> demonstrates that there exists compactly supported 𝔤_n∈ C(O, ), n ∈, which satisfylim sup_n →∞[sup_x ∈ O( |𝔤_n(x)-g(x) |/V(T,x)) ] = 0.This, the triangle inequality, and (<ref>) show that for all k,n∈, x∈ O it holds that[g(X^x_t_n,T)-g(x)]≤[g(X^x_t_n,T) -𝔤_k(X^x_t_n,T)] +[𝔤_k(X^x_t_n,T) -𝔤_k(x)] +[𝔤_k(x)-g(x)]= [g(X^x_t_n,T) -𝔤_k(X^x_t_n,T)/V(T,X^x_t_n,T )V(T,X^x_t_n,T)] +[𝔤_k(X^x_t_n,T) -𝔤_k(x)] +[ 𝔤_k(x) -g(x)/V(T,x)V(T,x)]≤[sup_y∈ Og(y)-𝔤_k(y)/V(T,y)] [V(T,X^x_t_n,T)] +[𝔤_k(X^x_t_n,T) -𝔤_k(x)]+[sup_y∈ O𝔤_k(y) -g(y)/V(T,y)] V(T,x)≤[sup_y∈ Og(y)-𝔤_k(y)/V(T,y)] V(t_n,x) +[𝔤_k(X^x_t_n,T)-𝔤_k(x)] +[sup_y∈ O𝔤_k(y) -g(y)/V(T,y)]V(T,x)≤ 2[sup_y∈ Og(y)-𝔤_k(y)/V(T,y)] [sup_s∈ [0,T] V(s,x)] +[𝔤_k(X^x_t_n,T) -𝔤_k(x)].This, <cit.>, the Portemonteau theorem, and (<ref>) demonstrate for allx∈ O thatlim sup_n→∞[g(X^x_t_n,T)-g(x)] =0. 
Combining this with (<ref>) proves that for all x∈ O it holds thatlim sup_n→∞v(t_n,x)-g(x) =0.The assumption thatfor all x∈^d it holds that v(T,x)=g(x)and the fact that v ∈C([0,T)× O,) hence demonstrate that v∈ C([0,T]× O, ). Next note that items <ref> and <ref> of Theorem <ref> (applied for allt∈ [0,T), r∈ (t,T] with O↶^d, f↶ g and T↶ r, f↶ (^d ∋ x ↦ f(r,X^x_t,r,w(r,X^x_t,r)) ∈) in the notation of Theorem <ref>), Leibniz integral rule, Fubini's theorem, item <ref>, and the assumption that f∈ L^2([0,T]× O××^d,) and g∈ L^2(O,) show that v∈ C^0,1([0,T)× O,) and for all t∈ [0,T), x∈^d it holds that(∇_x v)(t,x) = ∇_x ( [g(X^x_t,T)]) + ∇_x (∫_t^T [f(r,X^x_t,r, w(r,X^x_t,r))]ṛ)= [g(X^x_t,T)Z^x_t,T] + ∫_t^T ∇_x( [ f(r,X^x_t,r, w(r,X^x_t,r)) ])ṛ=[g(X^x_t,T)Z^x_t,T] + ∫_t^T [ f(r,X^x_t,r, w(r,X^x_t,r))Z^x_t,r] ṛ=[g(X^x_t,T)Z^x_t,T] +[∫_t^T f(r,X^x_t,r, w(r,X^x_t,r))Z^x_t,r ṛ]. Item <ref> therefore implies that for all t∈ [0,T), x∈^d it holds that (w_2, w_3, …, w_d+1)(t,x) = (∇_x v)(t,x). This,items <ref>-<ref>, and the fact that v∈ C([0,T]× O, ) establish item <ref>. Next we prove items <ref> and <ref>. For this leth [0,T]× O → satisfy for all t∈[0,T), x∈ O that h(t,x)=f(t,x,v(t,x), (∇_x v)(t,x)). Note thatitem <ref>, the fact that for all t ∈ [0,T], x ∈ O,a_1,a_2∈, w_1,w_2 ∈^d it holds that | f(t,x,a_1,w_1)- f(t,x,a_2,w_2) |≤ L(a_1, w_1) -(a_2,w_s), and the fact that inf_r ∈ (0, ∞) [ sup_t ∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r (| f(t,x,0,0) |/V(t,x)√(T-t))]= 0 imply that h∈ C([0,T)× O,)and lim sup_r→∞[sup_t∈ [0,T)∖ K_rsup_x∈ O∖ O_r(h(t,x)V(t,x)√(T-t))]=lim sup_r→∞[sup_t∈ [0,T)∖ K_rsup_x∈ O∖ O_r(f(t,x,v(t,x),(∇_x v)(t,x))V(t,x)√(T-t))]≤lim sup_r→∞[sup_t∈ [0,T)∖ K_rsup_x∈ O∖ O_r(f(t,x,0,0)V(t,x)√(T-t) +f(t,x,v(t,x),(∇_x v)(t,x))-f(t,x,0,0)V(t,x)√(T-t))]≤lim sup_r→∞[sup_t∈ [0,T)∖ K_rsup_x∈ O∖ O_r(f(t,x,0,0) +L (v,∇_x v)(t,x)V(t,x)√(T-t))] =0.Proposition <ref>, (<ref>), (<ref>), and (<ref>) therefore demonstrate that v is a viscosity solution of(∂ v∂ t)(t,x) +⟨μ(t,x), (∇_x v)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x v)(t,x)) +h(t,x) =0for (t,x)∈ (0,T)× O. This ensures that for allt∈(0,T), x∈ O, ϕ∈ C^1,2((0,T)× O,) with ϕ≥ v and ϕ(t,x)=v(t,x) it holds that(∂ϕ∂ t)(t,x) +⟨μ(t,x), (∇_x ϕ)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x ϕ)(t,x)) +h(t,x) ≥ 0.Moreover, observe that(<ref>) shows that for all t∈(0,T), x∈ O, ϕ∈ C^1,2((0,T)× O, ) with ϕ≤ v andϕ(t,x)=v(t,x)it holds that(∂ϕ∂ t)(t,x) +⟨μ(t,x), (∇_x ϕ)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x ϕ)(t,x)) +h(t,x) ≤ 0.Combining this with(<ref>) proves that v is a viscosity solution of (∂ v∂ t)(t,x) +⟨μ(t,x), (∇_x v)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x v)(t,x)) +f(t,x,v(t,x),(∇_x v)(t,x)) =0for (t,x)∈ (0,T)× O. Proposition <ref>, (applied with u_1↶ v in the notation ofProposition <ref>), (<ref>), and the fact thatv∈{𝐮∈ C([0,T]× O,) ∩ C^0,1([0,T)× O,) lim sup_r→∞[ sup_t∈ [0,T)∖ K_r sup_x∈ O∖ O_r ((𝐮,∇_x𝐮)(t,x)/V(t,x)√(T-t))]=0} therefore establish items <ref> and <ref>.The proof of Theorem <ref> is thus complete.The following Corollary applies the results in Theorem <ref> to a function V that is independent of the time component. The proof of the Corollary <ref> is similar to the one of <cit.>. 
Let d ∈,α, c, L, T ∈ (0, ∞), ρ∈, let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·^d+1→[0,∞) be the standard Euclidean norm on ^d+1, let ·_F^d× d→ [0,∞) be the Frobenius norm on ^d× d, let O⊆^d be an open set, for every r ∈ (0, ∞) let K_r⊆ [0,T), O_r ⊆ O satisfy K_r=[0,max{T-1/r,0}] and O_r = {x ∈ Ox≤ rand { y ∈^dy-x< 1/r}⊆ O }, let (Ω, ℱ, ℙ, (𝔽_s)_s ∈ [0,T])be a filtered probability space satisfying the usual conditions, let W[0,T] ×Ω→^dbe a standard (𝔽_s)_s ∈ [0,T]-Brownian motion, let μ∈C^0,1([0,T] × O, ^d),σ∈ C^0,1 ([0,T] × O, ^d × d) satisfy for alls∈[0,T], x, y ∈ O, v∈^d thatmax{⟨ x-y,μ(s,x)-μ(s,y)⟩,12σ(s,x)-σ(s,y)_F^2}≤c2x-y^2and v^* σ(s,x) (σ(s,x))^* v ≥αv^2, assume for all r∈ (0,∞), j∈{1,2,…, d} thatsup( {∂μ/∂ x(t,x)-∂μ/∂ x(t,y)_F +∂σ/∂ x_j(t,x) -∂σ/∂ x_j(t,y)_Fx-y t∈[0,T], x,y∈ O_r, x≠ y }∪{0}) <∞,for everyt∈ [0,T], x ∈ O let X^x_t = (X^x_t,s)_s ∈ [t,T] [t,T] ×Ω→ Obe an(𝔽_s)_s ∈ [t,T]-adaptedstochastic process with continuous sample paths satisfying that for alls ∈ [t,T] it holds a.s. thatX^x_t,s = x + ∫_t^s μ(r, X^x_t,r)ṛ+ ∫_t^s σ(r, X^x_t,r)Ẉ_r,assume for all t∈ [0,T], ω∈Ω that ([t,T] × O ∋ (s,x)↦ X^x_t,s(ω) ∈ O ) ∈ C^0,1([t,T] × O, O), for every t∈[0,T],x ∈ O let Z^x_t = (Z^x_t,s)_s ∈ (t,T] (t,T] ×Ω→^d+1be an (𝔽_s)_s ∈ (t,T]-adapted stochastic processwith continuous sample paths satisfying thatfor all s ∈ (t,T]it holds a.s. thatZ^x_t,s =[1; 1/s-t∫_t^s (σ(r, X^x_t,r))^-1 (∂/∂ x X^x_t,r)Ẉ_r ], let V ∈ C^2(O,(0, ∞)) satisfy for allt∈ [0,T], x∈ O that ⟨μ(t, x), (∇ V)(x)⟩ +12Tr(σ(t, x)[σ(t, x)]^*(HessV)(x)) +12[(∇ V)(x)]^* σ(t, x)^2V(x)≤ρ V(x)and ⟨μ(t,x), (∇ V)(x) ⟩ +12Tr(σ(t,x)[σ(t,x)]^* (HessV)(x))+L(∇ V)(x)≤ρ V(x),let f ∈ C([0,T] × O ××^d, ) ∩ L^2([0,T] × O ××^d, ), g ∈ C(O, ) ∩ L^2(O, )satisfy for all t ∈ [0,T], x_1,x_2 ∈ O,a_1,a_2∈, w_1,w_2 ∈^d that | f(t,x_1,a_1,w_1)- f(t,x_2,a_2,w_2) |≤ L (a_1, w_1) -(a_2, w_2),and assume that inf_r ∈ (0, ∞) [ sup_t ∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r ( g(x) +f(t,x,0,0)√(T-t)/V(x) )]= 0,lim inf_r→∞ [inf_x∈ O∖ O_r V(x)]=∞,andinf_x∈ O V(x) >0. Then* there exists a uniquev∈ C([0,T]× O,) ∩ C^0,1([0,T)× O,) which satisfies for allt∈[0,T), x∈ O that lim sup_r →∞[ sup_s ∈ [0,T)∖ K_rsup_y ∈ O ∖ O_r((v,∇_x v)(s,y)/V(y)√(T-s) ) ]= 0, [g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r,v(r, X^x_t,r), (∇_x v)(r,X^x_t,r)) Z^x_t,r ṛ ] <∞, and(v, ∇_x v)(t,x)=[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, v(r, X^x_t,r), (∇_x v)(r,X^x_t,r))Z^x_t,r ṛ], *there exists a unique viscosity solutionu∈{𝐮∈ C([0,T]× O,) ∩ C^0,1([0,T)× O,) lim sup_r →∞ [ sup_t ∈ [0,T)∖ K_rsup _x ∈ O ∖ O_r ((𝐮,∇_x𝐮)(t,x) /V(x) √(T-t) ) ] = 0}of(∂ u∂ t)(t,x) +⟨μ(t,x), (∇_x u)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x u)(t,x)) +f(t,x,u(t,x), (∇_x u)(t,x)) =0with u(T,x)=g(x) for (t,x)∈ (0,T)× O, and* for all t∈[0,T], x∈ O it holds that u(t,x)=v(t,x).Throughout this proof let [0,T]× O→ satisfy for all t∈ [0,T], x∈ O that (t,x)= e^-ρ t V(x). Note that the product rule andthe fact thatV∈ C^2(O,(0,∞)) ensure that∈ C^1,2([0,T]× O,(0,∞)). This and (<ref>) demonstrate that for all t∈ [0,T], x∈ O it holds that(∂∂ t)(t,x) +⟨μ(t, x), (∇_x )(t,x) ⟩ +12Tr(σ(t, x)[σ(t, x)]^*(Hess_x)(t,x)) +12(∇_x )(t, x)σ(t, x)^2_F(t,x)= e^-ρ t( -ρ V(x) +⟨μ(t, x), (∇ V)(x)⟩ +12Tr(σ(t, x)[σ(t, x)]^*(HessV)(x)) +12[(∇ V)(x)]^* σ(t, x)^2V(x)) ≤ 0. 
Moreover, note that (<ref>)shows that for all t∈ [0,T], x∈ O it holds that(∂∂ t)(t,x) +⟨μ(t, x),(∇_x )(t,x) ⟩ +12Tr(σ(t, x)[σ(t, x)]^*(Hess_x)(t,x)) +L(∇_x )(x)= e^-ρ t( -ρ V(x) +⟨μ(t, x), (∇ V)(x) ⟩ +12Tr(σ(t, x)[σ(t, x)]^*(HessV)(x)) +L(∇ V)(x)) ≤ 0.Furthermore, note that thefact that inf_r ∈ (0, ∞) [sup_t∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r(|g(x) |/V(x)+ | f(t,x,0,0) |/V(x) ·√(T-t))]= 0 ensures thatinf_r ∈ (0, ∞)[sup_t ∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r(|g(x) |/(T,x)+| f(t,x,0,0) |/(t,x)√(T-t))] = inf_r ∈ (0, ∞)[sup_t ∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r(|g(x) |/e^-ρ T V(x)+| f(t,x,0,0) |/e^-ρ t V(x)√(T-t))]≤ e^ρ Tinf_r ∈ (0, ∞)[sup_t ∈ [0,T)∖ K_rsup_x ∈ O ∖ O_r(|g(x) |/V(x)+| f(t,x,0,0) |/V(x)√(T-t))] = 0.In addition, observe thatthe assumption that lim inf_r→∞ [inf_x∈ O∖ O_r V(x)] =∞andinf_x∈ O V(x)>0 guarantee thatlim inf_r→∞ [inf_t∈ [0,T]inf_x∈ O∖ O_r(t,x)]=∞andinf_t∈[0,T]inf_x∈ O(t,x)>0.Combining this with (<ref>)-(<ref>) and Theorem <ref> (applied withb↶ 0, K↶ 0, V↶ in the notation of Theorem <ref>) establishes item <ref>-<ref>. The proof of Corollary <ref> is thus complete. In the following corollary we show that the function ^d ∋ x ↦ (1+x^2)^p/2 for p∈ (0,∞) satisfies the conditions(<ref>) and (<ref>) in Theorem <ref> and cantherefore - under the right assumptions on the coefficients μ and σ- ensure that there exists a solution to the SFPE in(<ref>) which is also a viscosity solution to thecorresponding PDE. The proof of the following corollary,Corollary <ref>,is similar to the proof of<cit.>. Let d ∈,α, c, L, T ∈ (0, ∞),let ⟨·,·⟩^d×^d→ be the standard Euclidean scalar product on ^d, let ·^d→[0,∞) be the standard Euclidean norm on ^d, let ·^d+1→[0,∞) be the standard Euclidean norm on ^d+1, let ·_F^d× d→ [0,∞) be the Frobenius norm on ^d× d, let (Ω, ℱ, ℙ, (𝔽_s)_s ∈ [0,T])be a filtered probability space satisfying the usual conditions, let W[0,T] ×Ω→^dbe a standard (𝔽_s)_s ∈ [0,T]-Brownian motion, let μ∈C^0,1([0,T] ×^d, ^d),σ∈ C^0,1 ([0,T] ×^d, ^d × d) satisfy for alls∈[0,T], x, y ∈^d, v∈^d thatmax{⟨ x-y,μ(s,x)-μ(s,y)⟩,12σ(s,x)-σ(s,y)_F^2}≤c2x-y^2,max{⟨ x, μ(t,x)⟩, σ(t,x)_F^2 }≤ c (1+x^2), and v^* σ(s,x) (σ(s,x))^* v ≥αv^2, assume for all r∈ (0,∞), j∈{1,2,…, d} thatsup({ ∂μ/∂ x(t,x)-∂μ/∂ x(t,y)_F +∂σ/∂ x_j(t,x) -∂σ/∂ x_j(t,y)_Fx-y t∈[0,T],x,y ∈{ z∈^d, z≤ r},x≠ y }∪{0}) <∞,for everyt∈ [0,T], x ∈^d let X^x_t = (X^x_t,s)_s ∈ [t,T] [t,T] ×Ω→^dbe an(𝔽_s)_s ∈ [t,T]-adaptedstochastic process with continuous sample paths satisfying that for alls ∈ [t,T] it holds a.s. thatX^x_t,s = x + ∫_t^s μ(r, X^x_t,r)ṛ+ ∫_t^s σ(r, X^x_t,r)Ẉ_r,assume for all t∈ [0,T], ω∈Ω that ([t,T] ×^d ∋ (s,x)↦ X^x_t,s(ω) ∈^d ) ∈ C^0,1([t,T]×^d, ^d), for every t∈[0,T],x ∈^d let Z^x_t = (Z^x_t,s)_s ∈ (t,T] (t,T] ×Ω→^d+1be an (𝔽_s)_s ∈ (t,T]-adapted stochastic processwith continuous sample paths satisfying thatfor all s ∈ (t,T]it holds a.s. thatZ^x_t,s =[1; 1/s-t∫_t^s (σ(r, X^x_t,r))^-1 (∂/∂ x X^x_t,r)Ẉ_r ], let f ∈ C([0,T] ×^d ××^d, ) ∩ L^2([0,T] ×^d ××^d, ), g ∈ C(^d, ) ∩ L^2(^d, ) be at most polynomially growing, and assume for all t ∈ [0,T], x_1,x_2 ∈^d,a_1,a_2∈, w_1,w_2 ∈^d that | f(t,x_1,a_1,w_1)- f(t,x_2,a_2,w_2) |≤ L (a_1, w_1) -(a_2, w_2). 
Then* there exists a uniquev∈ C([0,T]×^d,) ∩ C^0,1([0,T)×^d,) which satisfies that ((v, ∇_x v)(t,x) ·√(T-t))_t∈ [0,T), x∈^d grows at most polynomiallyand for allt∈[0,T), x∈^d it holds that [g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r,v(r, X^x_t,r), (∇_x v)(r,X^x_t,r)) Z^x_t,r ṛ ] <∞ and(v, ∇_x v)(t,x)=[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, v(r, X^x_t,r), (∇_x v)(r,X^x_t,r))Z^x_t,r ṛ], *there exists a unique viscosity solutionu∈{𝐮∈ C([0,T]×^d,) ∩ C^0,1([0,T)×^d,) ((𝐮,∇_x𝐮)(t,x)√(T-t))_t∈ [0,T), x∈^d grows at most polynomially} of(∂ u∂ t)(t,x) +⟨μ(t,x), (∇_x u)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x u)(t,x)) +f(t,x,u(t,x), (∇_x u)(t,x)) =0with u(T,x)=g(x) for (t,x)∈ (0,T)×^d, and* for all t∈[0,T], x∈^d it holds that u(t,x)=v(t,x).Throughout this proof let V_q^d→ (0,∞), q∈ (0,∞), satisfy for all q∈ (0,∞), x∈^d that V_q(x)=(1+x^2)^q/2. Firstnote that <cit.> (applied for every q∈ (0,∞) with p↶ q, 𝒪↶^d in the notation of <cit.>) and the assumption that for all t∈ [0,T], x∈^d it holds that max{⟨ x, μ(t,x)⟩, σ(t,x)_F^2 }≤ c (1+x^2) demonstrate that*for all q∈ (0,∞) it holds that V_q∈C^∞(^d, (0,∞)) and*for all q∈ (0,∞), t∈ [0,T], x∈^d it holds that⟨μ(t,x), (∇ V_q)(x) ⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess V_q)(x))≤cq2max{q+1, 3} V_q(x). Observe that item <ref> and the product rule imply thatfor allq∈ (0,∞),x∈^d it holds that(∇ V_q)(x) = qx (1+x^2)^q/2-1 = q V_q(x) x1+x^2. The fact that for all a ∈ [0,∞) it holds that a ≤ 1+ a^2 therefore shows that for all q∈ (0,∞), x∈^d it holds that(∇ V_q)(x) = q V_q(x) x1+ x^2≤ q V_q(x).Combining this with item <ref> proves that for allq∈ (0,∞), t∈ [0,T], x∈ O it holds that ⟨μ(t,x), (∇ V_q)(x)⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess V_q)(x))+L(∇ V_q)(x)≤ (cq2max{q+1, 3}+Lq) V_q(x).Moreover, note that(<ref>) and the assumption that for all t∈ [0,T], x∈^d it holds that max{⟨ x, μ(t,x)⟩, σ(t,x)_F^2 }≤ c (1+x^2) demonstrate that for all q∈ (0,∞), t∈ [0,T], x∈^d it holds that[(∇ V_q)(x)]^*σ(t,x)^2/V_q(x) = q^2 V_q(x)^2x^* σ(t,x)^2/V_q(x)(1+x^2)^2≤ q^2 V_q(x) x^2 σ(t,x)_F^2/(1+x^2)^2≤ c q^2 V_q(x) x^2(1+x^2)/(1+x^2)^2≤ c q^2 V_q(x) .Item <ref> hence imples that for all q∈ (0,∞), t∈ [0,T], x∈^d it holds that⟨μ(t, x), (∇ V_q)(x) ⟩ +12Tr(σ(t, x)[σ(t, x)]^*(Hess V_q)(x)) +12[(∇ V_q)(x)]^* σ(t, x)^2_FV_q(x)≤cq2max{q+1, 3} V_q(x) + c q^22 V_q(x) ≤ cq max{q+1, 3} V_q(x).Combining this with (<ref>) ensures that for allq∈ (0,∞) there exists ρ_q∈ [0,∞) which satisfies for all t∈ [0,T], x∈^d that⟨μ(t,x), (∇ V_q)(x)⟩ +12Tr(σ(t,x)[σ(t,x)]^* (Hess V_q)(x))+L(∇V_q)(x)≤ρ_q V_q(t,x)and⟨μ(t, x), (∇ V_q)(x) ⟩ +12Tr(σ(t, x)[σ(t, x)]^*(Hess V_q)(x)) +12[(∇ V_q)(x)]^* σ(t, x)^2_FV_q(x)≤ρ_q V_q(x).Next note that for all q∈ (0,∞) it holds thatlim inf_r→∞ [inf_x∈^d, x>r V_q(x)]=∞andinf_x∈^d V(x)>0.Furthermore, observe that the assumption that f and g are at most polynomially growing ensures that there exists p∈ (0,∞) which satisfiessup_t∈ [0,T]sup_x∈^d(g(x) +f(t,x,0,0)√(T-t)/V_p(x)) <∞.This shows that for allq∈ [p,∞) it holds thatlim sup_r→∞[sup_t∈ [max{T-1/r,0},T)sup_x∈^d, x>r( g(x)+f(t,x,0,0)√(T-t)/V_q(x)) ] =0.Combining this with (<ref>), (<ref>), (<ref>), and item <ref> of Corollary <ref> (applied with ρ↶ρ_2p, O↶^d, V↶ V_2p in the notation ofCorollary <ref>) demonstrates that there exists a unique viscosity solutionu∈{𝐮∈ C([0,T]×^d,) lim sup_r →∞ [ sup_t ∈ [0,T)∖ K_rsup _x ∈^d, x>r ((𝐮,∇_x𝐮)(t,x)/V_2p(x) √(T-t) ) ] = 0}of(∂ u∂ t)(t,x) +⟨μ(t,x), (∇_x u)(t,x) ⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x u)(t,x))+f(t,x,u(t,x), (∇_x u)(t,x)) =0with u(T,x)=g(x) for (t,x)∈ (0,T)×^d. 
Let v∈ C([0,T]×^d,) satisfy that((v,∇_x v)(t,x) ·√(T-t))_t∈ [0,T), x∈^d grows at most polynomially and be a viscosity solution of(∂ v∂ t)(t,x) +⟨μ(t,x), (∇_x v)(t,x)⟩ +12Tr(σ(t,x)[σ(t,x)]^*(Hess_x v)(t,x))+f(t,x,v(t,x), (∇_x v)(t,x)) =0with v(T,x)=g(x) for (t,x)∈ (0,T)×^d. Observe that the assumption that ((v, ∇_x v)(t,x) ·√(T-t))_t∈ [0,T), x∈^d grows at most polynomially ensures that there exists β∈ [2p,∞) which satisfies thatlim sup_r →∞[ sup_t ∈ [0,T)∖ K_rsup _x ∈^d, x>r((v, ∇_x v) (t,x)/V_β(x) √(T-t)) ]= 0.Item <ref> of Corollary <ref> (applied with ρ↶ρ_β, O↶^d, V↶ V_β in the notation ofCorollary <ref>), (<ref>), and (<ref>) therefore demonstrate that u=v. This establishes item <ref>. Next observe that item <ref> of Corollary <ref> (applied withρ↶ρ_2p, O↶^d, V ↶ V_2p in the notation ofCorollary <ref>) shows that for all t∈ [0,T], x∈^d it holds that [g(X^x_t,T) Z^x_t,T+ ∫_t^Tf(r, X^x_t,r,u(r, X^x_t,r), (∇_x u)(r,X^x_t,r)) Z^x_t,r ṛ ] <∞ and(u, ∇_x u)(t,x)=[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, u(r, X^x_t,r), (∇_x u)(r,X^x_t,r))Z^x_t,r ṛ].Let w∈ C([0,T]×^d,) satisfythat ((w, ∇_x w) (t,x)√(T-t))_t∈ [0,T), x∈^d grows at most polynomially and that for all t∈ [0,T], x∈^d it holds that[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, w(r, X^x_t,r),(∇_x w)(r,X^x_t,r)) Z^x_t,r ṛ ]<∞ and(w, ∇_x w)(t,x)=[g(X^x_t,T) Z^x_t,T + ∫_t^Tf(r, X^x_t,r, w(r, X^x_t,r), (∇_x w)(r,X^x_t,r))Z^x_t,r ṛ].Note that the fact that ((w, ∇_x w)(t,x) √(T-t))_t∈ [0,T), x∈^d grows at most polynomially implies that there exists γ∈ [β,∞) which satisfies thatlim sup_r →∞[ sup_t ∈ [0,T)∖ K_rsup _x ∈^d, x>r((w, ∇_x w) (t,x)/V_γ(t,x) √(T-t)) ] = 0.Items <ref> and <ref> in Corollary <ref> (applied with ρ↶ρ_γ, O↶^d, V↶ V_γ in the notation of Corollary <ref>), (<ref>), and (<ref>) therefore demonstrate that u=v=w. This establishes items <ref> and <ref>. The proof of Corollary <ref> is thus complete. §.§ AcknowledgementsThis work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the research grant HU1889/6-2. acm
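To complement the analytical results above, the Bismut–Elworthy–Li representation (∇ u)(x) = E[f(X^x_T) Z^x_T] translates directly into a Monte Carlo gradient estimator. The following Python sketch is not part of the paper: the Euler–Maruyama discretization, all function and parameter names (bel_gradient, dmu, dsigma, n_paths, ...) and the Ornstein–Uhlenbeck toy example are illustrative assumptions. It simulates the state X, its first variation process Y = ∂X/∂x and the weight Z along each path, and averages f(X_T) Z_T.

```python
import numpy as np

def bel_gradient(x0, mu, sigma, dmu, dsigma, f, T=1.0,
                 n_steps=100, n_paths=5000, seed=0):
    """Monte Carlo estimate of grad u(x0) = E[f(X_T) Z_T] (Bismut-Elworthy-Li),
    with Z_T = (1/T) * int_0^T (sigma(s, X_s)^{-1} Y_s)^T dW_s and Y_s = dX_s/dx0.

    mu(t, x) -> (d,) and sigma(t, x) -> (d, d) are the SDE coefficients;
    dmu(t, x)[i, k] = d mu_i / d x_k and dsigma(t, x)[i, l, k] = d sigma_il / d x_k
    are their Jacobians, needed for the first variation process Y.
    """
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    d, dt = x0.size, T / n_steps
    grad = np.zeros(d)
    for _ in range(n_paths):
        X, Y, Z = x0.copy(), np.eye(d), np.zeros(d)
        for i in range(n_steps):
            t = i * dt
            dW = rng.normal(scale=np.sqrt(dt), size=d)
            S = sigma(t, X)
            # accumulate the stochastic integral defining Z_T (using pre-update X, Y)
            Z += np.linalg.solve(S, Y).T @ dW
            # Euler-Maruyama step for the first variation process Y ...
            Y = Y + dmu(t, X) @ Y * dt + np.einsum("ilk,kj,l->ij", dsigma(t, X), Y, dW)
            # ... and for the state X, driven by the same Brownian increment
            X = X + mu(t, X) * dt + S @ dW
        grad += f(X) * Z / T
    return grad / n_paths

if __name__ == "__main__":
    # Toy check: 2-d Ornstein-Uhlenbeck process with constant, non-degenerate diffusion
    # and u(x) = E[exp(-|X_T|^2)]; the estimate can be compared with finite differences.
    d = 2
    mu = lambda t, x: -x
    sigma = lambda t, x: 0.5 * np.eye(d)
    dmu = lambda t, x: -np.eye(d)
    dsigma = lambda t, x: np.zeros((d, d, d))
    f = lambda x: np.exp(-np.sum(x ** 2))
    print(bel_gradient([1.0, -0.5], mu, sigma, dmu, dsigma, f, T=1.0))
```

Note that the estimator avoids differentiating f, which is the practical point of the Bismut–Elworthy–Li weight: the same sample paths yield both the value u(x) and its gradient.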
http://arxiv.org/abs/2310.18197v1
{ "authors": [ "Martin Hutzenthaler", "Katharina Pohl" ], "categories": [ "math.PR" ], "primary_category": "math.PR", "published": "20231027151529", "title": "On nonlinear Feynman-Kac formulas for viscosity solutions of semilinear parabolic partial differential equations with gradient-dependent nonlinearities" }
We present llmstep, a tool for integrating a language model into the Lean proof assistant. llmstep is a Lean 4 tactic that sends a user's proof state to a server hosting a language model. The language model generates suggestions, which are checked in Lean and displayed to a user in their development environment. We provide a baseline language model, along with code for fine-tuning and evaluation to support further development. We provide server implementations that run on CPU, a CUDA GPU, or a Google Colab notebook, as a step towards fast, effective language model suggestions for any user.§ INTRODUCTION Interactive proof assistants such as Lean<cit.>, Isabelle<cit.>, and Coq<cit.> enable the verification of mathematics and software using specialized programming languages <cit.>. The emerging area of neural theorem proving integrates neural language models with interactive proof assistants <cit.>. Doing so can be mutually beneficial: proof assistants provide correctness guarantees on language model outputs, while language models may help make proof assistants easier to use. A fundamental part of proof development is determining which step to take next at each state of a proof (i.e., which tactic to use). Therefore, a tool that suggests useful next steps within a user's development environment could significantly ease proof development. We present llmstep, a tool for suggesting proof steps (i.e., tactics) with a language model in the Lean proof assistant (<ref>). llmstep is a Lean 4 tactic that sends a user's proof state to a server hosting a language model. The language model generates suggestions, which are checked in Lean and displayed to a user in their development environment. llmstep is agnostic to the choice of language model, learning framework, and evaluation framework. We provide a baseline language model and example code for fine-tuning and evaluation. The baseline language model is fine-tuned for a standard tactic-prediction task <cit.>, and outperforms recent open-source tactic-prediction models <cit.>. Finally, llmstep supports several runtimes, with servers that run on CPU, a GPU, or in a Google Colab notebook, as a step towards fast, powerful language model suggestions for any user.[<https://github.com/wellecks/llmstep>] § RELATED WORK Automatically generating proof steps with language models is an active area of research (e.g., <cit.>). Closest to our work is the tactic from <cit.>, which generates suggestions in Lean 3 by calling a (now disabled) OpenAI API. llmstep is inspired by the idea of language-model based suggestions, but differs in several ways: (1) llmstep is built with open-source components, and can run on a user's own device. (2) llmstep supports prefixed and checked suggestions, detailed below.
(3)is in Lean 4, requiring an implementation using Lean 4 metaprogramming.(4) provides code for fine-tuning and evaluating future models.Recently, after the release of , LeanInfer <cit.> provides the ability to run supported language models on CPU through Lean's Foreign Function Interface (FFI).Similar to it provides tactic suggestions, and uses 's utilities to check, format, and display its suggestions.additionally supports prefixed suggestions, offers fast GPU inference, and is agnostic to the language model implementation.Proofster <cit.> is a related tool offering machine-learning based proof synthesis in Coq via a web interface, while offers Lean 4 language-model tactic suggestions in the development environment.§ APPROACH is called by writingwithin a proof, which returns suggestions that start with(for instance,or ).uses Lean to check whether each suggestion is valid and/or completes the proof. The suggestions are displayed in the Lean 4 VS Code Infoview, which is a standard interface used in proof development. A user can click a suggestion, which places it in the proof. The proof is either complete, or it transitions to the next state and the user continues writing the proof.We detail 's implementation below. §.§ Implementation consists of three parts: (1) a Lean tactic, (2) a language model, (3) a server.Lean tactic. Writing a proof can be seen as a sequential process (x_1,y_1),(x_2,y_2),… of states x_t and tactics y_t.A state contains what is left to prove (the goal), and available information (the hypotheses). A tactic transitions the proof to a new state. If the state contains no remaining goals, the proof is complete.Concretely, a user applies tactics by writing Lean code, Lean keeps track of the state, and the development environment shows the state and the written code.is itself a tactic. takes a prefix as an argument, i.e., a sequence of tokens that will start the suggested tactics. For instance,would lead to suggested tactics that start with , such as . sends the current state x_t and the prefix to a server, receives suggestions in response, and checks each suggestion using Lean. Namely, a checked suggestion is valid if applying the tactic leads to a state with no errors and at least one goal. A tactic is complete if it leads to a state with no errors and no goals. Otherwise the tactic is invalid. displays complete, valid, and invalid tactic suggestions using different colors. Language model. uses a language model to predict the next tactic given the current tactic state.While the language model can be arbitrary, one approach is to use a model that has been fine-tuned on (state, next-tactic) examples.For instance, the default language model in is fine-tuned on sequences of the form: [sharp corners=all] This format corresponds to the proofstep objective described in<cit.>.By default, uses alanguage model <cit.> fine-tuned on (state, next-tactic) examples extracted from Lean Mathlib<cit.> via the LeanDojo Benchmark 4 dataset <cit.>. Themodel is publicly available on Huggingface(https://huggingface.co/wellecks/llmstep-mathlib4-pythia2.8blink).is agnostic to the language model implementation, and includes direct support for ReProver<cit.> andother Huggingface models. Server. 
uses a server to handle requests from the Lean tactic and host the language model.The server queries the language model and relays responses back to the Lean tactic.The server is the key computational bottleneck in , as it hosts a (possibly large) language model.supports a variety of compute constraints, with server implementations that run on CPU, a CUDA GPU, or aGoogle Colab notebook with GPU, as well as a server with fast inference via vLLM <cit.>. Usage. First, the user starts a server based on their hardware. Currently, servers can run on CPU, CUDA GPU, or Google Colab notebooks. Second, the user imports as a Lean 4 package.Third,the user calls by writing, which returns suggestions that start with the prefix passed to . The suggestions are displayed in the Infoview.§ EVALUATION First, we benchmark the default language model's utility for providing suggestions via proof search–i.e., attempting to fully provetheorems using the language model and a search algorithm.Proof search. Proof search requires a search algorithm and a method for interacting with Lean. We use best-first search, and provide a self-contained implementation withLeanDojo<cit.> interaction. Best-first search is parameterized by the maximum number of generated tactics, defined as the number of attempts× expansion size per iteration ×maximum iterations, subject to a timeout. We use a 10 minute timeout as in<cit.>, and use beam search with expansion size 32 based on memory constraints. We compare the Pythia model to ReProver<cit.> without retrieval. To equal the 64 expansions used in<cit.>, we also report results with a second attempt of 32 samples (i.e., 2×32). We report mathlib4-test from<cit.> and miniF2F-test from<cit.>, since miniF2F without retrieval was not available in<cit.>.We validate the Pythia model in <ref>, finding that it can exceed the number of closed theorems by ReProver on Lean Dojo Benchmark 4 and miniF2F. Note that supports suggestions from either model.The ReProver model is particularly useful on CPU, as we discuss next. Runtime. We tested the runtime of with different compute environments and hardware. The experiments were done on a set of 17 examples, each containing a tactic state and a prefix.For each example, we measured the time for the server to return N suggestions, using llmstep with theand ReProver () language models.As shown in <ref>, with GPU-based inference can yield suggestions in 1 second or less, with vLLM inference approaching real-time (0.11s). As shown in <ref>, vLLM remains fast as the number of suggestions increases. Note that vLLM does not support the model architecture used in ReProver. However, on CPU the ReProver model is much faster due to its small parameter count. Therefore, we suggest using Pythia when a GPU is available, and Reprover on CPU.Qualitative examples. Figures <ref> and <ref> show example suggestions made by .§ CONCLUSION In this paper, we present , a tool designed to make it easy for a Lean 4 user to obtain tactic suggestions from a language model.In addition, we provide a fine-tuned language model that achieves strong performance on mathlib and miniF2F.Active areas of work include fast CPU inference, improved models<cit.>, and tasks beyond tactic prediction. We hope that 's simple, model-agnostic recipe opens up new research avenues on generative tools for formalized mathematics. § ACKNOWLEDGEMENTSWe thank Mario Carneiro, Zhangir Azerbayev, and Scott Morrisonfor valuable guidance and feedback. plainnat
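To make the tactic-to-server round trip described above concrete, here is a minimal, hypothetical server sketch in Python. It is not the actual llmstep implementation: the endpoint, the JSON fields (tactic_state, prefix, suggestions), the [GOAL]…[PROOFSTEP] prompt format and the decoding settings are assumptions made for illustration; only the Hugging Face repository name is taken from the link given earlier. The sketch shows the essential loop the Lean tactic relies on: receive a proof state and prefix, generate candidate next tactics with a causal language model, and return them so Lean can check and display them.

```python
# Hypothetical minimal tactic-suggestion server in the spirit of llmstep.
# Endpoint, request fields, prompt format and decoding settings are illustrative
# assumptions, not the actual llmstep protocol.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "wellecks/llmstep-mathlib4-pythia2.8b"  # baseline model named in the paper
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID).eval()
if torch.cuda.is_available():
    model = model.cuda()

def suggest(tactic_state: str, prefix: str, num_samples: int = 5):
    # Assumed (state, next-tactic) prompt format for the fine-tuned model.
    prompt = f"[GOAL]{tactic_state}[PROOFSTEP]{prefix}"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=64, num_beams=num_samples,
                             num_return_sequences=num_samples, do_sample=False,
                             pad_token_id=tokenizer.eos_token_id)
    completions = tokenizer.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                                         skip_special_tokens=True)
    # The Lean tactic prepends the user's prefix and checks each candidate itself.
    return [prefix + c.strip() for c in completions]

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        suggestions = suggest(body["tactic_state"], body.get("prefix", ""))
        payload = json.dumps({"suggestions": suggestions}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()
```

Keeping the model behind a plain HTTP interface like this is what allows the same Lean tactic to work unchanged whether the model runs on CPU, a local GPU, or a Colab notebook.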
http://arxiv.org/abs/2310.18457v1
{ "authors": [ "Sean Welleck", "Rahul Saha" ], "categories": [ "cs.AI", "cs.LG", "I.2.2; I.2.5; I.2.7" ], "primary_category": "cs.AI", "published": "20231027201056", "title": "LLMSTEP: LLM proofstep suggestions in Lean" }
firstpage–lastpage * Christian Bauer^1, Yoshiyuki Sakai^2 and Markus Uhlmann^3 2023-10-25 =============================================================We use deep JWST/NIRSpec R∼ 1000 slit spectra of 113 galaxies at , selected from the mass-complete Blue Jay survey, to investigate the prevalence and typical properties of neutral gas outflows at cosmic noon. We detect excessabsorption (beyond the stellar contribution) in 46% of massive galaxies , with similar incidence rates in star-forming and quenching systems. Half of the absorption profiles are blueshifted by at least 100 , providing unambiguous evidence for neutral gas outflows. Galaxies with strongabsorption are distinguished by enhanced emission line ratios consistent with AGN ionization. We conservatively measure mass outflow rates of 3 – 100 M_⊙ yr^-1; comparable to or exceeding ionized gas outflow rates measured for galaxies at similar stellar mass and redshift. The outflows from the quenching systems ) have mass loading factors of , and the energy and momentum outflow rates exceed the expected injection rates from supernova explosions, suggesting that these galaxies could possibly be caught in a rapid blowout phase powered by the AGN. Our findings suggest that AGN-driven ejection of cold gas may be a dominant mechanism for fast quenching of star formation at z∼ 2.galaxies: evolution – galaxies: star formation – galaxies: nuclei § INTRODUCTION Determining the physical mechanism(s) responsible for quenching star-formation in massive galaxies is key to our understanding of galaxy evolution. Cosmological simulations typically quench massive galaxies via feedback from active galactic nuclei (AGN) which both expels cold gas from galaxies and heats halo gas, preventing it from cooling and being re-accreted to replenish the reservoir of fuel for star-formation <cit.>. However, definitive observational evidence for a link between AGN feedback and star-formation quenching has yet to be established (seeand references therein). Over the past decade, large galaxy surveys have significantly improved our understanding of outflows during the peak epoch of star-formation and black-hole growth at z∼ 1 – 3, when feedback is expected to be most active. It is now well established that outflows are ubiquitous in massive star-forming galaxies at this epoch <cit.>. The strongest outflows are powerful enough to rapidly suppress star-formation in their host galaxies <cit.>, but these are generally associated with the most luminous AGN which are present in a small fraction of massive galaxies at any given time. It remains unclear whether outflows driven by more typical AGN are capable of quenching star-formation in their host galaxies. Measurements based on optical emission lines (tracing ionized gas) suggest that most outflows remove gas less rapidly than it is consumed by star-formation <cit.>, whilst UV absorption line measurements (tracing neutral gas) suggest that the mass outflow rates are comparable to the star-formation rates of the host galaxies <cit.>. Observations based on a single gas phase provide a very incomplete picture of outflows which contain gas at a range of temperatures and densities including hot (10^6-7 K) X-ray emitting gas, warm (10^4-5 K) ionized gas, cool (100 K) neutral gas and cold (10 K) molecular gas <cit.>. Despite significant observational advances, the vast majority of outflows have only been observed in one gas phase, and constraining the total mass of gas ejected by outflows remains very challenging. 
State-of-the-art simulations of star-formation driven outflows predict that the majority of the outflowing mass is carried in the neutral and molecular phases <cit.> (although it remains unclear whether cool clouds survive to large galactocentric radii or are shredded by the hot wind; e.g. ). Outflowing molecular gas is notoriously difficult to detect <cit.> but is often found to carry much more mass than the ionized phase <cit.>. Cool neutral gas in outflows is commonly probed using low ionization rest-frame far-UV absorption lines. However, it is difficult to detect the far-UV continuum of massive, dusty AGN host galaxies, and UV absorption line measurements at high redshift are generally restricted to the strongest transitions which are often saturated, providing only lower limits on the outflowing mass <cit.>.An alternative tracer of neutral outflows is the resonant  λλ 5891,5897Å doublet. With a first ionization potential of 5.1 eV, Na i exists primarily in neutral regions where it is shielded by significant columns of gas and dust <cit.>. Due to its location in the rest-frame optical spectrum, observations ofalready exist for many thousands of nearby galaxies. A few percent of local massive star-forming galaxies show blueshiftedabsorption indicative of neutral gas outflows <cit.>. In galaxies with both neutral and ionized outflows, the neutral outflow rates are 10 – 100 times larger <cit.>, confirming that ionized gas likely represents a small fraction of the total mass budgets of typical nearby outflows. Theabsorption originates on spatial scales ≲ 10 kpc <cit.>, indicating that it traces recently launched outflows rather than gas in the circumgalactic medium. Althoughis easily accessible in the local Universe, it has been significantly harder to detect at z∼ 1 – 3 where the line shifts into the observed near-infrared. The unprecedented infrared sensitivity of JWST enables the detection ofin distant galaxies, providing a new probe of neutral outflows in the early Universe. Initial observations have already revealedabsorption tracing neutral outflows in three AGN host galaxies at , of which one is a quasar <cit.> and two are post-starburst galaxies <cit.>. These outflows are also detected in ionized gas emission lines, enabling direct comparisons of the mass outflow rates in different gas phases. Focusing on the post-starburst galaxies, both <cit.> and <cit.> find that the neutral mass outflow rates are about two orders of magnitude larger than the ionized outflow rates and exceed the current star formation rates (SFRs) of the host galaxies. The rapid ejection of cold gas by powerful AGN-driven outflows may have led to the recent, fast quenching of star-formation in these galaxies. The detection of strong neutral gas outflows in two post-starburst AGN host galaxies provides tantalizing evidence that ejective AGN feedback may be an important mechanism for quenching massive galaxies at cosmic noon. However, it is unclear whether these objects are representative of the overall galaxy population. In this paper, we characterize the incidence and typical properties of neutral outflows across the galaxy population using 113 galaxies atfrom the mass-selected Blue Jay survey. We discuss the sample and observations in Section <ref>, present the census ofabsorption in Section <ref> and examine the neutral outflow properties in Section <ref>. 
We discuss the connection between neutral outflows, AGN activity and star-formation quenching in Section <ref> and present our conclusions in Section <ref>.§ OBSERVATIONS AND DATA REDUCTION §.§ Blue JayThis work is based on observations from the JWST Cycle 1 program Blue Jay (GO 1810; PI Belli). The NIRSpec micro-shutter assembly (MSA; ) was used to obtain R ≃ 1000 spectra of 151 galaxies spread over two masks in the COSMOS field. Four of these galaxies are filler targets at z∼ 6, and the remaining 147 galaxies form a mass-selected sample () at cosmic noon (). All galaxies were observed using the three medium resolution gratings (G140M, G235M and G395M) with exposure times of 13h, 3.2h and 1.6h respectively. A slitlet made of at least 2 MSA shutters was placed on each target and we employed a 2-point A-B nodding pattern along the slit. The data were reduced using a modified version of the JWST Science Calibration Pipeline v1.10.1, and version 1093 of the Calibration Reference Data System. Master background subtraction was performed using a spectrum measured from dedicated background slits and galaxy 1D spectra were optimally extracted <cit.>. The spectrum extraction failed for 6 galaxies which are excluded from our sample. Full details of the Blue Jay sample selection, observations and data reduction will be provided in the survey paper (Belli et al., in prep).The individual grating spectra were combined to produce wide spectra covering rest-frame wavelengths of at least 3000Å – 1.2μm for all galaxies. Figure <ref> shows the spectrum of an example galaxy at z = 1.81 over rest-frame wavelengths of . The wavelength coverage is not continuous due to gaps between the NIRSpec detectors, and we excluded 7 galaxies for whichfalls within a detector gap. Finally, we excluded 21 galaxies for which no spectroscopic redshift could be determined due to an absence of identifiable emission or absorption line features. Our final sample consists of 113 galaxies and includes COSMOS-11142, the post-starburst galaxy analysed by <cit.>. §.§ Stellar Population FittingThe goal of this paper is to search for neutral gas outflows traced by interstellarabsorption. However,absorption can also originate in stellar atmospheres and is particularly prominent in late-type stars <cit.>. Therefore, it is imperative to accurately remove the stellar absorption contribution prior to our analysis. We model the stellar continuum using Prospector, a Bayesian stellar population inference code designed to simultaneously fit photometry and spectroscopy spanning UV to mid-IR wavelengths <cit.>. We adopt the synthetic stellar population library fsps <cit.>, the mist isochrones <cit.>, and the Chabrier initial mass function. The stellar metallicity is free to vary and individual elemental abundances are assumed to be solar scaled. We note that varying the stellar [Na/Fe] ratio within a reasonable range does not significantly impact our results (see Section <ref>). We adopt a non-parametric star-formation history with 14 bins spaced logarithmically in time except for the lowest age bin which is placed at . Prospector accounts for dust absorption and re-emission which are assumed to be in energy balance. The dust absorption model consists of a primary component which applies to all stars and follows the <cit.> attenuation curve, as well as a multiplicative term representing extra attenuation towards young stars (with ages ). 
The Prospector model also includes a multiplicative `jitter' term that scales the measurement errors to better represent the statistical fluctuations in the data, as well as a polynomial distortion term that corrects for shape mismatches between the spectra and the stellar templates resulting from imperfect flux calibration and/or slit losses. We use Prospector to fit the JWST spectra along with publicly available HST/ACS+WFC3 <cit.> and Spitzer/IRAC <cit.> photometry. The NIRSpec observations cover many age-sensitive spectral features including the 4000Å break and the Balmer absorption series (see Figure <ref>), providing strong constraints on the stellar population properties. During the fitting, we mask prominent emission lines as well as theand Ca ii h + k absorption lines which can have significant contributions from interstellar gas. Full details of the Prospector fitting will be provided in Park et al. (in prep). The orange curve in Figure <ref> shows the best-fit stellar continuum model for the example galaxy. The model provides a very good fit to the well-detected Balmer absorption series and enables us to accurately quantify the stellar contribution to the observedabsorption. In this case, the observedabsorption is significantly stronger than expected from the stellar continuum alone. Unless otherwise noted, all subsequent references toabsorption refer to absorption in excess of the stellar contribution. Prospector outputs probability distribution functions which are used to calculate the best-fit values (median) and corresponding uncertainties (16th-84th percentile range) for all model parameters. These include the distortion polynomial (which we use to produce flux-calibrated spectra), the best-fit jitter term (used to scale the measurement errors), the stellar mass (M_*) and the non-parametric star-formation history. The SFRs reported in this paper refer to the SFR in the youngest age bin (averaged over the last 30 Myr), but similar results are obtained using SFRs averaged over 100 Myr or SFRs computed from theemission line luminosity. §.§ Emission and absorption line fittingWe fit the spectrum of each Blue Jay galaxy over the wavelength region between 3800 – 6700Å, including contributions from the stellar continuum (F_ *, Prospector), ionized gas emission lines (F_ gas) and excessabsorption (F_ Na D, excess):F(v) = [ F_ *, Prospector + F_ gas] × F_ Na D, excessThe emission and absorption line components must be fit simultaneously because the  λ 5876Å emission line falls in close proximity to(see Figure <ref>). The emission and absorption line models are convolved with the wavelength-dependent NIRSpec line spread function prior to fitting[We adopt the nominal resolution for uniform slit illuminationfrom JDox, but note that the true resolution is notably higher than this for compact sources <cit.>. As a consequence, the measured velocity dispersions represent lower limits on their true values.].The emission line model includes the strongest emission lines within the fitted wavelength region (,  λλ 4959,5007Å,  λλ 6548,6583Å, and ) as well as  λ 5876Å. Outflowing gas can produce redshiftedemission due to resonant scattering off gas in the receding side of the outflow <cit.>, but this has only been observed in a handful of objects, <cit.>, and we do not find clear evidence foremission in any of our spectra. We initially fit each emission line with a single Gaussian profile, constraining all lines to have the same velocity offset and dispersion. 
A single Gaussian component is sufficient to explain the vast majority of observed line profiles given the relatively low spectral resolution of the observations. Two galaxies have forbidden line profiles that are clearly too complex to be represented by a single kinematic component, and for these objects we add an extra Gaussian component to all emission lines. Two other galaxies show prominent AGN broad line region emission which is modelled as a broad Gaussian component in the , and lines. Interstellar Na D absorption is parametrized using the standard partial covering model <cit.>: F_ Na D, excess(v) = 1 - C_f + C_f exp( - τ_b(v) - τ_r(v) ) Here, C_f is the covering fraction of the absorbing gas against the background continuum source, and τ_b(v) and τ_r(v) are the optical depth profiles of the blue ( λ 5891Å) and red ( λ 5897Å) doublet lines, respectively. We assume that the optical depth has a Gaussian velocity distribution: τ(v,σ) = τ_0exp(-v^2/2σ^2) The optical depth at the centre of the blue line (τ_0, b) is fixed to be twice the optical depth at the centre of the red line (τ_0, r), reflecting the known doublet ratio.[The equivalent width ratio is not fixed and varies between 2 in the optically thin regime and 1 in the optically thick regime. This is because the curve of growth representing the relationship between optical depth and equivalent width is non-linear.] The Na D absorption kinematics are allowed to vary independently of the emission line kinematics. The absorption optical depth and covering fraction can become degenerate when the Na D doublet is blended (e.g. ; see discussion in Appendix <ref>). To obtain accurate constraints on the parameter uncertainties and degeneracies, we perform the fitting using emcee <cit.>, an Affine Invariant Markov Chain Monte Carlo (MCMC) Ensemble sampler. The walkers are initialised in small regions around the best-fit values obtained from preliminary least squares fitting. Similarly to the Prospector parameters, the best-fit emission and absorption line parameters represent the medians of the emcee posterior distributions and the error bars reflect the 16th – 84th percentile ranges. The best-fit emission line and Na D absorption line profiles for the example galaxy are shown by the magenta and blue curves in Figure <ref>, respectively. § A CENSUS OF NA D ABSORPTION AT Z∼ 2 §.§ Incidence We visually inspect the fits for all Blue Jay galaxies and identify sources with significant interstellar Na D absorption. For the first time, we report evidence for widespread Na D absorption in z∼ 2 galaxies, as shown in Figure <ref>. The absorption profiles are grouped into four categories based on the velocity shift of the Na D absorption feature (see Section <ref>). Within each category, each pair of panels represents an individual galaxy and shows the observed spectrum over the region covering ± 100Å around the Na D doublet (top, black), the best-fit continuum-only (orange) and continuum + line (magenta) models, and the residual spectrum after removing the continuum and line emission (bottom, grey). From our initial sample of 113 galaxies, 30 galaxies (27%) have Na D absorption much stronger than expected from the stellar populations alone. The MCMC posterior distributions confirm that the excess absorption is detected at ≥ 3σ significance in all cases. The main panel of Figure <ref> shows how the Na D detections (colored squares) are distributed as a function of stellar mass (M_*) and specific SFR (sSFR). Galaxies without Na D absorption are shown with filled grey circles.
It is clear that interstellarabsorption is detected almost exclusively in massive galaxies. 27/59 (46%) of the log(M_*/M_⊙) > 10 galaxies with spectroscopic redshift measurements showabsorption, whereas the detection fraction in lower mass galaxies is 6%. Interestingly, the detections are spread almost uniformly over four orders of magnitude in sSFR, from highly star-forming galaxies to quenching systems. The prevalence of interstellarabsorption in the mass-selected Blue Jay sample indicates that large neutral gas reservoirs are prevalent in massive z∼ 2 galaxies.The incidence of interstellarabsorption is not strongly dependent on the assumed sodium abundance. We generate stellar absorption profiles for a range of sodium abundances using the alf code <cit.>, and find that extremely sodium-enhanced stellar populations () can produce excess absorption with a rest-frame equivalent width of up to 1.2Å. This is weaker than all our observed excess absorption profiles which have measured equivalent widths (after removing the stellar contribution) of 1.5 – 11.4 Å (median 4.4Å; see Table <ref>).§.§.§ Stacking to search for weaker NaD absorptionThe detection of interstellarabsorption requires a robust measurement of the stellar continuum, so oursample may be biased towards the brightest continuum sources. We search for interstellar absorption in galaxies lacking individualdetections by stacking them in two bins of stellar mass (above and below log(M_*/M_⊙) = 10). The spectra are continuum-normalized prior to stacking and weighted by the average continuum signal-to-noise ratio within 150Å of theline. Unweighted stacks are noisier but lead to the same overall conclusions.The bottom panels of Figure <ref> show stacked spectra of galaxies with and without individualdetections (black and grey, respectively). Even after stacking 51 galaxies, we do not find any evidence of excessabsorption in low mass galaxies lacking individualdetections. There are at least two possible reasons for this. Firstly, Na I has an ionization potential of 5.1 eV and therefore cannot exist in large quantities without a significant amount of dust shielding. In the local Universe there is a well-known relationship betweenabsorption strength and dust attenuation (quantified by e.g. the V band line-of-sight attenuation A_V, the color excess , or the Balmer decrement f()/f(); e.g. , and this correlation is also seen in our sample (see left-hand panel of Figure <ref>). Most low mass galaxies likely do not have enough dust and metals to retain detectable amounts of neutral sodium. A second possibility is that we do not see strong excessabsorption because it primarily traces AGN-driven outflows (see Sections <ref> and <ref>), and these are rare in low mass galaxies <cit.>.There is some evidence for weak excessabsorption in the stack of 32 massive galaxies lacking individual detections. This excess absorption falls at approximately zero velocity and could plausibly be explained by additional stellar absorption (see Section <ref>). Regardless, it is significantly weaker than the absorption seen in individually detected galaxies. The fact that we see strong interstellarabsorption in 46% of massive galaxies whilst the remaining 54% show weak or no absorption suggests that the distribution of absorption strengths is not continuous, and may be more bimodal. 
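A minimal sketch of the stacking procedure described above is shown below. The common rest-frame grid, its sampling, the interpolation scheme, and the use of 1/error as a proxy for the continuum signal-to-noise ratio of a continuum-normalised spectrum are simplifying assumptions of ours.

```python
import numpy as np

def stack_residual_spectra(rest_waves, residuals, errors, line_wave=5895.0, window=150.0):
    """Signal-to-noise-weighted stack of continuum-normalised residual spectra.

    Each spectrum is weighted by its average continuum S/N within `window` Angstrom of
    the Na D feature; for a continuum-normalised spectrum, S/N ~ 1/error. `line_wave`
    is an assumed reference wavelength near the Na D doublet.
    """
    grid = np.arange(line_wave - window, line_wave + window, 0.5)  # assumed sampling
    stacked, total_weight = np.zeros_like(grid), 0.0
    for wave, res, err in zip(rest_waves, residuals, errors):
        near_line = np.abs(wave - line_wave) < window
        snr = np.nanmean(1.0 / err[near_line])
        stacked += snr * np.interp(grid, wave, res)
        total_weight += snr
    return grid, stacked / total_weight
```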
§.§.§ Link between strongabsorption and galaxy propertiesThe presence of strongabsorption in massive galaxies does not appear to be governed by dust properties: when restricting the sample to galaxies with , there is no significant difference between the A_V or E(B-V) distributions galaxies with and withoutabsorption (see histograms in the upper left panel of Figure <ref>). We investigate whether theabsorption strength may instead be related to galaxy inclination. Detections of strong excessabsorption in the local Universe seem to be preferentially associated with outflows <cit.>, and stacking analyses indicate that blueshifted wind material is primarily observed in face-on galaxies where minor axis outflows are observed down-the-barrel <cit.>. In the Blue Jay sample, we do not find any clear relationship between the presence ofabsorption and the galaxy axis ratio measured from CANDELS HST imaging <cit.>. However, this imaging covers rest-frame UV and blue-optical wavelengths where young stellar populations dominate. Ongoing JWST NIRCam and MIRI imaging of the COSMOS field will enable more complete mapping of the galaxy stellar mass distributions and thus more accurate axis ratio measurements in the future. Intriguingly, the most striking difference between the galaxies with and withoutabsorption is their / ratios, which are discussed further in Section <ref>.§.§ Origin of absorbing gasThe origin of the interstellarabsorption can be determined by examining the velocity of the absorption relative to the galaxy systemic velocity (shown by the vertical dotted grey lines in Figure <ref>). In many cases, the excessabsorption is clearly offset from the galaxy systemic redshift, with measured velocity shifts ranging fromto +400 . We use the absorption velocity posterior probability distributions to classify the galaxies into three categories: blueshifted absorption (84th percentile velocity less than zero), redshifted absorption (16th percentile velocity greater than zero), and systemic absorption (consistent with zero). These classifications are shown in Figure <ref> and determine the colors of the markers in Figure <ref>. The absorption velocities for the two broad line AGN are very sensitive to the fitting of the broademission, so we cannot reliably classify them. Of the remaining 28 galaxies, 14 (50%) show blueshifted absorption (discussed in Section <ref>), 11 (39%) are consistent with the systemic velocity (Section <ref>), and 3 (11%) show redshifted absorption (Section <ref>).§.§.§ Outflowing gasHalf of the classifiable absorption profiles (14/28 or 50%) are blueshifted by at least 100 , which is an unambiguous sign of neutral gas outflows <cit.>. The high fraction of outflows among the -detected galaxies is consistent with low redshift results showing that excessabsorption is preferentially blueshifted and associated with winds <cit.>. The overall incidence ofoutflows across the full mass range of our sampleis 12% (14/113). This is notably higher than the ∼ 1% incidence of neutral outflows among local galaxies <cit.>, but lower than theincidence of ionized outflows traced by optical emission lines among massive galaxies at cosmic noon <cit.>, and significantly lower than the ∼ 100% incidence of neutral outflows traced by rest-frame UV absorption lines in UV-bright galaxies at the same epoch <cit.>. We note that theoutflow fraction may be underestimated by up to a factor of 2 at high stellar masses due to the low spectral resolution of the observations (R ∼ 1000). 
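The velocity-based classification introduced in the previous subsection reduces to a simple percentile test on the posterior samples of the absorption centroid velocity; a short sketch is given below (the function name is ours).

```python
import numpy as np

def classify_absorption(velocity_samples):
    """Classify a Na D profile from posterior samples of its centroid velocity (km/s).

    Blueshifted if the 84th percentile is below zero, redshifted if the 16th percentile
    is above zero, otherwise consistent with the systemic velocity.
    """
    v16, v84 = np.percentile(velocity_samples, [16, 84])
    if v84 < 0.0:
        return "blueshifted"
    if v16 > 0.0:
        return "redshifted"
    return "systemic"
```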
Galaxies with strong systemic absorption could have weaker outflow components that would not be detectable in our data (see Section <ref>). The low incidence ofoutflows in low mass galaxies is likely driven to a large degree by the requirement for dust shielding which prevents the detection ofabsorption in UV-bright galaxies <cit.>. Rest-frame UV andabsorption lines may probe outflows in almost entirely separate populations of galaxies. The incidence of neutral outflows in our sample appears to be independent of star-formation activity. The outflow sources, indicated by the blue squares in Figure <ref>, are distributed over a wide range in sSFRs, extending all the way to the quenching galaxy regime. This is somewhat surprising given that in the local Universe, blueshiftedabsorption is preferentially found in highly star-forming systems <cit.>. Excessabsorption has been found in many nearby massive quiescent galaxies <cit.>, but this absorption is typically consistent with the systemic velocity. Furthermore, theabsorption strength in local early type galaxies is found to correlate with that of thetriplet, which only arises in stellar atmospheres <cit.>. Therefore, the excessabsorption is generally attributed to enhanced stellar absorption, perhaps due to elevated [Na/Fe] <cit.> and/or a bottom-heavy initial mass function <cit.>. The excessabsorption we detect in low sSFR galaxies at z∼ 2 has distinctly different properties from the excess stellar absorption seen at z∼ 0. Firstly, all sources classified as outflows haveabsorption blueshifted by at least 100 , indicating that a non-systemic component is required to explain the observed absorption profile. Secondly, the observedabsorption is too strong to explain with excess stellar absorption (see Section <ref>). <cit.> similarly detected excessabsorption in a lensed quiescent galaxy at z∼ 2 and showed that it was too strong to be explained by enhanced Na abundance. Thirdly, the excessabsorption is not associated with excessabsorption. The right-hand panel of Figure <ref> shows stacked residual spectra of galaxies with neutral gas outflows, zoomed in on the region around theline. There is no evidence for significant excess absorption, suggesting that the stellar absorption contribution has been fully accounted for in the Prospector fitting. Furthermore, the outflow host galaxies in our sample fall above the  –  correlation observed in local early-type galaxies <cit.>, indicating that theabsorption is much stronger than expected based on theabsorption. Finally, theabsorption strength (quantified by the rest-frame equivalent width W( Na D)) is positively correlated with A_V (Figure <ref>, left), indicating that the excessabsorption is interstellar in origin. The combination of these factors provides strong evidence that the blueshiftedabsorption observed in massive, low sSFR galaxies at z∼ 2 traces neutral gas outflows.§.§.§ Systemic absorptionA further 39% (11/28) of the classifiedabsorption profiles have centroid velocities consistent with the galaxy systemic velocity. This means that most of the neutral gas follows the bulk motion of the galaxy i.e. it is likely to be located primarily in the interstellar medium (ISM). Massive, high redshift galaxies are known to harbour large cold gas reservoirs <cit.>, and these could be responsible for producing the strong systemic absorption we observe. However, it is also possible that a non-negligible fraction of the absorption classified as systemic could arise from outflows. 
Firstly, we are only able to robustly identify outflows with velocity offsets exceeding ∼ 100  due to the relatively low spectral resolution of the observations and the close relative proximity of thedoublet lines. Secondly, our classification based on the average absorption velocity would not identify outflow components that are hidden underneath strong ISM absorption. Observations at higher spectral resolution (i.e. R = 2700 with JWST/NIRSpec) would enable us to perform 2-component fitting and separate neutral gas in the ISM from outflowing material (see Section <ref>).§.§.§ Infalling gasThe remaining 11% (3/28) of the classifiedabsorption profiles are redshifted with velocity offsets of 25 – 400 , indicating that there is neutral gas flowing towards these galaxies. The infalling material could originate in bulk flows within interacting systems or could be directly accreting onto the galaxies, providing cold gas that may sustain (or rejuvenate) star-formation. Redshiftedabsorption has been observed in local galaxies. <cit.> find that redshifted absorption is prevalent in massive edge-on star-forming galaxies, consistent with a picture where the absorption traces gas accreting along the disk plane, potentially originating from galactic fountains. <cit.> find evidence for neutral gas inflows in a substantial fraction of passive `red geyser' galaxies, again likely originating from internal recycling and/or minor mergers (see also ).We investigate the likely origin of redshifted absorption in the Blue Jay galaxies by examining their star formation histories and morphologies (using HST/ACS+WFC3 imaging from the 3D-HST survey and JWST/NIRCam imaging from the PRIMER survey; GO 1837, PI Dunlop). One of the galaxies shows evidence for a nearby companion and could be an interacting system. This galaxy also shows strong N ii and S ii line emission which could plausibly trace shocks induced by tidal forces. High spatial resolution maps of emission line fluxes and kinematics would help to determine whether interactions are significantly impacting the dynamical state of this system. The remaining two galaxies do not show any evidence for multi-component structure, suggesting they may host neutral gas inflows. Interestingly, both galaxies are at the peak of their star-formation histories, suggesting that the high SFRs may be fuelled by ongoing cold gas accretion.§ OUTFLOW PROPERTIESWe have reported the first evidence for widespreadabsorption in massive (log(M_*/M_⊙ > 10) galaxies at cosmic noon, revealing that these galaxies have large neutral gas reservoirs. Approximately half of the detected absorption profiles are blueshifted, providing unambiguous evidence of neutral gas outflows. Other galaxies may have weaker outflows which are undetected at R ∼ 1000 due to the presence of strong ISM absorption. In this section, we investigate the properties and principal driving mechanisms of the detected neutral outflows. The first clue regarding the driving mechanism comes from the outflow demographics: the Blue Jay neutral gas outflows are spread almost uniformly over more than four orders of magnitude in sSFR (see Figure <ref>). This is in tension with expectations for star-formation driven outflows, for which the incidence should increase with SFR. However, the incidence of AGN-driven ionized gas outflows at cosmic noon is observed to be independent of sSFR <cit.>, suggesting that the neutral gas outflows we detect may be AGN-driven. 
We further explore the link between neutral outflows and AGN activity by examining galaxy emission line ratios (Section <ref>) and the outflow velocities, mass outflow rates and energetics (Sections <ref>, <ref> and <ref>, respectively).§.§ Emission line ratiosOptical emission line ratios are valuable diagnostics of the principal power sources within galaxies <cit.>. Figure <ref> shows that massive galaxies with strong interstellarabsorption (black) have distinctly different emission line ratios from those without strongabsorption (brown). The line ratios of individual galaxies are shown in small markers and the histograms show the line ratio distributions for the two populations. Large triangles show the ratios measured from median stacked profiles. We have verified that similar values are obtained from mean stacked profiles and by averaging the individual line ratio measurements. Galaxies lacking significantabsorption typically lie in the star-forming and composite regions of the / vs. / diagnostic diagram <cit.> and fall close to the locus of z∼ 2.3 star-forming galaxies from the MOSDEF survey <cit.>. In contrast, galaxies with detectedabsorption have significantly larger / ratios consistent with AGN host galaxies at similar redshifts <cit.>. In the local Universe, elevated / ratios can alternatively trace shock-excitation in star-formation driven outflows <cit.>. However, star-formation driven outflows at z∼ 2 typically do not show very elevated / ratios <cit.>, perhaps because the detected line emission primarily originates from regions close to the galaxy disk where ionizing radiation from young stars dominates. Therefore, we hypothesize that strongabsorption is preferentially associated with AGN activity.§.§ Outflow VelocityObservations of outflows in ionized, molecular and neutral gas have shown that AGN-driven outflows typically have more extreme velocities than star-formation-driven outflows, where velocities ≳ 1000  are primarily associated with AGN activity <cit.>. We estimate the outflow velocities for the Blue Jay targets using the velocity offset v and dispersion σ of the absorption profiles: . The measured outflow velocities range from , with a median value of ∼ 500 . The fastest of the Blue Jay outflows are more likely to be AGN-driven than star-formation-driven, but the velocity information is insufficient to determine the driving mechanisms of the more moderate velocity outflows. §.§ Mass Outflow Rates§.§.§ CalculationsWe estimate the neutral gas outflow rates using the time-averaged shell model presented in <cit.> and updated in <cit.>:Ṁ_ out ( M_⊙ yr^-1) = 11.45( C_Ω C_f/0.4) ( N(HI)/ 10^21 cm^-2)× (r_ out/ 1 kpc)( v_ out/ 200 km s^-1)where C_Ω is the large-scale covering factor related to the opening angle of the wind,is the hydrogen column density, r_ out is the outflow radius and v_ out is the outflow velocity. The small-scale covering fraction C_f is obtained directly from the line fitting (see Equation <ref>). The measured values range from 0.1 – 0.9 (median 0.3); similar to what has been found in the local Universe <cit.>. We assume that the outflows cover 50% of the solid sphere (i.e. ), consistent with the incidence of neutral outflows in local infrared galaxies <cit.>. The geometry of neutral outflows at z∼ 2 is very uncertain, but the fact that we detect outflows in ≳ 25% of massive galaxies suggests that the covering fraction cannot be much smaller than 0.25. We therefore consider the systematic uncertainty on C_Ω to be a factor of 2 . 
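The mass outflow rate equation above is straightforward to evaluate; a sketch with the fiducial assumptions quoted in the text (C_Ω = 0.5 and a conservative 1 kpc extent) is given below. The numbers in the usage comment are purely illustrative.

```python
def mass_outflow_rate(c_omega, c_f, n_h_cm2, r_out_kpc, v_out_kms):
    """Time-averaged shell-model mass outflow rate (the equation above), in Msun/yr.

    `n_h_cm2` is the hydrogen column density in cm^-2, `r_out_kpc` the outflow radius
    in kpc and `v_out_kms` the outflow velocity in km/s.
    """
    return (11.45 * (c_omega * c_f / 0.4) * (n_h_cm2 / 1e21)
            * (r_out_kpc / 1.0) * (v_out_kms / 200.0))

# Fiducial assumptions from the text: C_Omega = 0.5 and a 1 kpc extent, e.g.
# mass_outflow_rate(0.5, 0.3, 5e21, 1.0, 500.0) -> ~54 Msun/yr (illustrative numbers only).
```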
We calculate N(Na i) from the optical depth at the centre of the redline, τ_0,r, using the relationship from <cit.>:N(Na i) = 10^13cm^-2(τ_0,r/0.7580) (0.4164/f_ lu) × ( 1215Å/λ_ lu) ( b/10 km s^-1)where f_ lu = 0.32 and λ_ lu = 5897Å are the oscillator strength and rest-frame wavelength of the transition, respectively, and b is the Doppler parameter, equivalent to √(2)σ. By directly converting τ_0,r to N(Na i), we are assuming that the observed absorption comes primarily from outflowing gas, with no significant contribution from gas in the ISM. The low spectral resolution of our observations means that we are unable to constrain multi-component fits allowing for contributions from both the ISM and outflows. <cit.> found that ISM gas could account for up to 44% of the Ca ii k (and by extension ) absorption in COSMOS-11142. It is unlikely that the ISM component contributes more than half of the observed absorption for sources classified as outflows, because the outflow component must dominate to produce the observed negative velocity shift.We convert N(Na i) toassuming Milky-Way-like Na abundance and dust depletion factors, and a 10% neutral fraction (see ). This neutral fraction is based on values measured towards Milky Way stars <cit.> and a cold extragalactic H i cloud <cit.>, and is likely to underestimate the ionization fraction in more extreme outflow environments. <cit.> measured a 5% ionization fraction in a local AGN-driven outflow, which would increase the mass outflow rates by a factor of 2 compared to our calculations. The radial extent of the outflowing neutral gas cannot be measured from our observations. We estimate the likely radial extent using size measurements of 1) neutral outflows in the local Universe and 2) ionized outflows at cosmic noon. Resolved studies ofoutflows in the local Universe suggest that they typically extend a few kiloparsecs, with measured sizes ranging from<cit.>. Similarly,absorption in quasar spectra is only observed within 15 kpc of galaxies <cit.>. The twooutflows to have been spatially analyzed at cosmic noon have sizes ≤ 1 kpc <cit.> and 2.7 kpc <cit.>. In comparison, ionized gas outflows at cosmic noon typically extend to at least the galaxy effective radius, on the order of a few kpc <cit.>. We conservatively adopt a 1 kpc extent and note that the mass outflow rates could be up to 10 times higher if the outflows are significantly larger than this.When calculating the mass outflow rates we use the full Monte Carlo posterior probability distributions for v, σ, C_f and τ_0,r. As mentioned in Section <ref>, C_f and τ_0,r are degenerate because thedoublet lines are blended in our observations. However, the mass outflow rate scales with the product of these two parameters (Equation <ref>), and the posterior probability distributions for the mass outflow rate are well constrained (see Appendix <ref> for more details).§.§.§ ResultsThe measured properties of the detectedabsorption profiles and the derived neutral gas masses and outflow rates are listed in Table <ref>. We measure outflow masses ranging from(median 7.6) and mass outflow rates spanning(median ). These are consistent with neutral outflow properties measured for star-forming and AGN host galaxies in the local Universe <cit.> as well as at z∼ 2 <cit.>. The neutral gas outflow rates are also comparable to the ionized gas outflow rates measured for galaxies at similar stellar mass and redshift <cit.>. 
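Below is a sketch of the N(Na i) relation above and of the subsequent conversion to a hydrogen column. The oscillator strength, rest wavelength and 10% neutral fraction are the values quoted in the text; the sodium abundance and dust depletion values in the second function are not given in the text and are inserted here only as illustrative, Milky-Way-like placeholders.

```python
import numpy as np

def n_naI(tau0_r, sigma_kms, f_lu=0.32, lam_lu=5897.0):
    """Na I column density in cm^-2 from the central optical depth of the red Na D line."""
    b = np.sqrt(2.0) * sigma_kms  # Doppler parameter, km/s
    return 1e13 * (tau0_r / 0.7580) * (0.4164 / f_lu) * (1215.0 / lam_lu) * (b / 10.0)

def n_h_from_naI(n_na_i, log_na_h=-5.69, depletion_dex=0.95, neutral_fraction=0.1):
    """Hydrogen column from N(Na I), correcting for ionisation, abundance and depletion.

    log_na_h and depletion_dex are *placeholder* Milky-Way-like values (not quoted in the
    text); neutral_fraction = 0.1 is the 10% Na neutral fraction adopted in the text.
    """
    n_na_total = n_na_i / neutral_fraction              # correct for ionised sodium
    na_h_gas_phase = 10.0 ** (log_na_h - depletion_dex) # gas-phase Na/H after depletion
    return n_na_total / na_h_gas_phase
```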
We emphasize that the estimates presented here are based on conservative assumptions for the outflow extent and Na ionization fraction, and the true outflow rates could plausibly be an order of magnitude larger. In the twogalaxies for which neutral and ionized mass outflow rates have been directly compared, the neutral outflow rates exceed the ionized outflow rates by approximately a factor of 100 <cit.>. Our results emphasize that it is important to account for the neutral phase in order to paint a complete picture of ejective feedback. Interestingly, we do not find any correlation between neutral gas outflow rate and galaxy SFR, as shown in the left-hand panel of Figure <ref>. As a consequence, the outflow mass loading factors (η, defined as mass outflow rate divided by SFR) differ strongly between the star-forming and quenching populations (Figure <ref>, right). The outflows launched from the most actively star-forming galaxieshave mass-loading factors of η≲ 1, consistent with expectations for star-formation-driven outflows <cit.>. In contrast, the mass loading factors for the lower SFR galaxies range from 4 – 360. It is unlikely that energy injection by young stars could remove gas so much faster than the stars themselves are forming. However, many of the low SFR galaxies in our sample show strong Balmer absorption lines indicative of a recent rapid decline in SFR. We therefore investigate whether it is possible that the outflows were launched during a recent starburst phase, in which case the high mass loading factors could be an artefact of the time delay between the launch of the outflows and the SFR measurement (which is averaged over the last 30 Myr). The slowest outflow in our sample has a velocity of 210 , and absorption line measurements have found thatabsorption is only observed within 15 kpc of galaxies <cit.>. This corresponds to a maximum reasonable outflow travel time of 70 Myr. We compute mass-loading factors using SFRs in different age bins from the Prospector fitting. Mass-loading factors of order unity are only found when using SFRs more than 100 Myr in the past. This is at least 50% longer than the maximum reasonable outflow travel time, suggesting that past star-formation could not reasonably have powered these outflows and providing further evidence that they are driven by AGN activity. §.§ Energy and momentum ratesNext, we investigate whether the current levels of star-formation and AGN activity in the Blue Jay galaxies are sufficient to explain the energetics of the neutral gas outflows. Figure <ref> compares the kinetic energy and momentum rates of the outflows (Ė_ out and ṗ_ out, respectively) with the rates of energy and momentum injection by supernovae and AGN. For the supernovae, we adopt the mechanical energy and momentum rate scalings from <cit.> based on solar metallicity Starburst99 models <cit.>: , . The AGN bolometric luminosity is estimated from theluminosity applying a bolometric correction factor of 600 <cit.>. Theluminosity is corrected for extinction using the median A_V from the Prospector posterior probability distribution (including extra attenuation towards towards young stars), and the uncertainty on A_V is propagated through to the uncertainty on theluminosity. 
Theemission in most of the outflow host galaxies is dominated by AGN activity (see Figure <ref>), and <cit.> show that even for composite galaxies where more than half of the Balmer line emission is due to star-formation, the totalluminosity predicts the bolometric luminosity to within a factor of 2. Energy conserving AGN-driven outflows are expected to have kinetic energy rates equivalent to 5% of the AGN bolometric luminosity <cit.>. The AGN momentum flux output is , but in energy-conserving outflows, the momentum outflow rate can be boosted by a factor of ∼ 5-20 due to entrainment of ISM gas in the wind <cit.>. To account for this, we plot lines for outflow momentum rates equivalent to L_ AGN/c and 20 × L_ AGN/c.The top row of Figure <ref> compares the outflow energetics with predictions for star-formation-driven outflows. In the most actively star-forming galaxies , the energy injected by star-formation is likely sufficient to power the observed outflows. However, in the lower SFR systems, the energy (momentum) injection rates are up to 60 (280) times larger than the predicted inputs. It is unlikely that this discrepancy can be explained by systematic uncertainties on the mass outflow rates. The shaded grey regions show the range of plausible values accounting for uncertainties on the ISM absorption contribution, ionization fraction and wind opening angle (see Section <ref>). We adopt a very conservative outflow size of , and assuming a larger extent would only increase the outflow energy further above the energy and momentum injection by supernovae. This, together with the implausibly large mass-loading factors (see Section <ref>), provides strong evidence to suggest that the neutral gas outflows from the low SFR galaxies are unlikely to be powered by star-formation. The bottom row of Figure <ref> compares the outflow energetics with predictions for AGN-driven outflows. We see that in all cases, the AGN are powerful enough to drive the observed outflows.§ DISCUSSIONOur investigation of the outflow driving mechanisms indicates that AGN activity likely plays a major role in powering the observed neutral gas outflows in massive z∼ 2 galaxies. The incidence of neutral outflows is independent of (s)SFR (Figure <ref>). Galaxies with strongabsorption show high / ratios consistent with AGN ionization, whereas galaxies withoutabsorption show lower / ratios consistent with photoionization by young stars (Figure <ref>). Some outflows have velocities exceeding 500  (Section <ref>). The case for AGN-driven outflows is particularly strong for low SFR galaxies where the mass loading factors range from(Figure <ref>) and the outflows are removing energy and momentum tens to hundreds of times faster than they can be injected by young stars (Section <ref>).In summary, the neutral gas outflows in the Blue Jay sample are evenly distributed across star-forming and quenching galaxies, and AGN accretion appears to play a major role in driving these outflows. This is in contrast to the local Universe where the majority of neutral outflows are found in star-forming galaxies and are consistent with being star-formation-driven <cit.>. There is some evidence that outflows in local low SFR galaxies are preferentially associated with AGN activity <cit.>, consistent with our findings. 
The role of AGN in driving outflows may be enhanced in our sample because we are probing significantly brighter AGN: the median AGN luminosity of the Blue Jay outflow hosts (6 × 10^44 erg/s) is about two orders of magnitude higher than that of optically selected samples at z∼ 0 <cit.>; consistent with the known redshift evolution in AGN luminosity <cit.>. In the local Universe, neutral outflows from quasar host galaxies are faster and have higher mass outflow rates than outflows from star-forming galaxies <cit.>, suggesting that luminous AGN play a significant role in driving outflows at all redshifts.Neutral outflows from low sSFR galaxies are much more prevalent at cosmic noon than in the local Universe. This may be because low sSFR galaxies at z∼ 2 have had a lot less time to grow and quench than their z∼ 0 counterparts, and as a result they have much younger stellar populations <cit.>. 85% of the massive, low sSFR galaxies in the Blue Jay sample have light-weighted ages less than 1 Gyr (Park et al., in prep) and could therefore be post-starburst galaxies. The molecular gas reservoirs of post-starburst galaxies have been observed to decline rapidly as a function of time since quenching <cit.> and the most evolved galaxies have very little cold gas, making it unlikely to observe neutral gas outflows from these sources.The rapid depletion of the molecular gas reservoirs is thought to be driven by powerful outflows, which have been observed in many post-starburst galaxies (both at z∼ 0 and z∼ 1; e.g. ). The low sSFR outflow host galaxies in our z∼ 2 sample may similarly trace a `blowout' phase where strong AGN-driven outflows are ejecting large amounts of cold gas, leading to rapid quenching of star-formation. This picture is supported by detailed analyses of two post-starburst galaxies at z∼ 2 – 3 <cit.>. Both galaxies experienced a burst of star-formationago followed by a rapid decline in star-formation activity. These galaxies host powerful, AGN-driven neutral gas outflows that are ejecting cold gas 10 – 100 times faster than it can be converted into stars. The rapid quenching of these systems may therefore be fully explained by ejective AGN feedback.We have shown that similarly powerful neutral gas outflows are prevalent across the massive galaxy population at cosmic noon. The outflows from the quenching galaxies in our sample have mass-loading factors of(see Figure <ref>), consistent with the case studies above. Our work indicates that AGN-driven neutral gas outflows may represent a dominant avenue for fast quenching at z∼ 2. Chemical evolution modelling of massive quiescent galaxies at z∼ 1 has shown that mass-loading factors of order 10 are required to explain the stellar magnesium abundances, suggesting that these galaxies experienced powerful outflowsprior to quenching <cit.>. Many high redshift quiescent galaxies show emission line ratios consistent with AGN ionization (e.g. , Bugiani et al., in prep), providing additional evidence that AGN feedback plays a crucial role in quenching star-formation. It is important to note that only a small fraction of the outflowing gas we detect may be able to escape the galaxy halos. The escape velocity is expected to betimes the galaxy circular velocity v_ circ <cit.>. We are not able to measure circular velocities directly because we only have slit spectra, so we adopt v_ circ∼ 300  which is typical of AGN host galaxies at this redshift <cit.>. This corresponds to escape velocities of . 
Only 4/14 (29%) of the outflows in our sample exceed these velocities, suggesting that the majority of gas ejected in the outflows will remain in the halo and may eventually be re-accreted onto the galaxies. <cit.> found that a large fraction of radio-detected quiescent galaxies show infalling neutral gas probed by redshiftedabsorption, and the authors suggest that radio jets may be responsible for heating the accreted gas and preventing it from forming new stars. Quenching may therefore involve a combination of processes: the powerful ejection of gas through outflows is crucial to explain the observed rapid quenching of galaxies in the early Universe <cit.>, whilst maintenance mode feedback is required to prevent rejuvination and keep galaxies quiescent over long timescales.§ SUMMARY AND CONCLUSIONSWe have used JWST/NIRSpec observations of 113 galaxies atselected from the mass-complete Blue Jay survey to investigate the demographics and properties of neutral gas outflows, traced byabsorption, at cosmic noon. Our observations have revealed for the first time that interstellarabsorption is widespread in massive (log(M_*/M_⊙) > 10) galaxies at z∼ 2. Our main findings are as follows: * We detect interstellarabsorption in 30/113 galaxies. The detections are almost exclusively associated with massive (log(M_*/M_⊙) > 10) galaxies, for which the detection fraction is 46%. Lower mass galaxies likely have insufficient columns of gas and dust to shieldagainst ionization.* 50% of theabsorption profiles are blueshifted by at least 100 , providing unambiguous evidence for neutral gas outflows. These neutral outflows are observed across the entire massive galaxy population, with similar incidence rates in star-forming and quenching galaxies. * 39% of theprofiles are consistent with the galaxy systemic velocity. These primarily trace cool gas in the ISM, but may also have weaker underlying outflow components that are hidden at R ∼ 1000.* 3 galaxies (11%) show redshifted absorption profiles indicative of infalling gas. Of these, one galaxy shows a complex morphology and emission-line ratios consistent with shock excitation, suggesting that the redshifted absorption may trace bulk flows of gas within an interacting system. The other two galaxies appear isolated and are at the peaks of their star-formation histories, suggesting that their star-formation may be fuelled by ongoing accretion of cool gas.* Assuming a conservative outflow extent of 1 kpc, we compute neutral mass outflow rates of . These are comparable to or greater than ionized gas outflow rates previously reported for other galaxies with similar stellar masses and redshifts. Existing measurements of neutral gas outflow sizes range from , so the true outflow extents and mass outflow rates from the Blue Jay galaxies could plausibly be up to an order of magnitude larger than we report.* Multiple lines of evidence indicate that the outflows are likely to be AGN-driven. Galaxies with strong interstellarabsorption have enhanced / ratios indicative of AGN activity. The outflow incidence does not depend on the level of star-formation activity. Star-formation cannot power the outflows from the low SFR galaxies, where the outflow mass loading factors range fromand the energy and momentum outflow rates exceed the injection rates from supernovae by at least an order of magnitude. 
The presence of strong neutral outflows in quenching systems could indicate that they are undergoing a post-starburst `blowout' phase powered by the AGN.* The outflow velocities range from and are generally lower than the estimated halo escape velocities, suggesting that most of the outflowing material will remain in the galaxy halos. Nevertheless, the strong AGN-driven ejection of cold gas provides a mechanism to explain the rapid quenching of star-formation in massive quiescent galaxies in the early Universe. Maintenance mode feedback (e.g. through radio jets) may also be required to prevent the re-accretion of cold gas and keep the galaxies quiescent. Our results indicate that powerful, AGN-driven neutral gas outflows are prevalent across the massive galaxy population at z∼ 2 and are likely to be a dominant channel for fast quenching at this epoch.§ ACKNOWLEDGEMENTS We thank Karl Glazebrook for thought-provoking discussions. RD is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. SB is supported by the ERC Starting Grant “Red Cardinal”, GA 101076080. RE acknowledges support from grant numbers 21-atp21-0077, NSF AST-1816420, and HST-GO-16173.001-A as well as the Institute for Theory and Computation at the Center for Astrophysics. RW acknowledges funding of a Leibniz Junior Research Group (project number J131/2022). This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program GO 1810. The Blue Jay Survey is funded in part by STScI Grant JWST-GO-01810. This work also makes use of observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.§ DATA AVAILABILITY The JWST/NIRSpec MSA spectra used in this paper were obtained through the Cycle 1 program Blue Jay (GO 1810, PI Belli). The data are currently proprietary and will become public between November and December 2023.§ OUTFLOW PARAMETER CONSTRAINTS We fit the Na D absorption profiles using a partial covering model parametrized by the gas covering fraction C_f, optical depth τ, velocity v and dispersion σ (Equation <ref>). The optical depth modulates the shape and depth of the absorption profile as well as the relative strength of the red and blue doublet lines. The covering fraction also impacts the depth of the observed absorption. At low spectral resolution, these parameters become degenerate <cit.>, raising the question of how well the total mass outflow rate can be constrained. From Equation <ref>, we see that the absorption depth can be at most C_f. In other words, if the absorption depth is 80% (with absorption reaching down to 20% of the continuum level), it implies C_f ≥ 0.8. The left-hand panel of Figure <ref> shows C_f as a function of the maximum absorption depth, with the dotted line indicating a 1:1 relation. All points lie on or above the 1:1 line, as expected from Equation <ref>.
The right-hand panel of Figure <ref> shows single and joint posterior probability distributions for the four outflow model parameters and the derived mass outflow rate for an example galaxy. Focusing on C_f, we see that the lower boundary is well constrained by the maximum absorption depth, with a long tail towards larger values. The optical depth τ_0,r is very poorly constrained but varies inversely with C_f because both parameters impact the absorption depth. The mass outflow rate scales with the product of C_f and τ_0,r (Equation <ref>), and because these parameters are inversely dependent, the mass outflow rate is well constrained.
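As a compact illustration of the fitting setup described in Section <ref> and this appendix, the sketch below evaluates the partial covering model of Equations <ref>–<ref> for the Na D doublet and samples its posterior with emcee. The doublet rest wavelengths, the flat prior bounds and the simple Gaussian likelihood are assumptions of this sketch rather than the exact choices used in our analysis.

```python
import numpy as np
import emcee

C_KMS = 299792.458
NAD_BLUE, NAD_RED = 5891.58, 5897.56  # assumed vacuum rest wavelengths of the Na D doublet (Angstrom)

def nad_excess_profile(wave, z_sys, c_f, tau0_r, voff_kms, sigma_kms):
    """Partial covering model for the excess Na D absorption.

    The blue-line central optical depth is fixed to twice the red-line value,
    reflecting the doublet ratio.
    """
    tau_total = np.zeros_like(wave)
    for rest_wave, tau0 in ((NAD_BLUE, 2.0 * tau0_r), (NAD_RED, tau0_r)):
        centre = rest_wave * (1.0 + z_sys) * (1.0 + voff_kms / C_KMS)
        v = C_KMS * (wave - centre) / centre
        tau_total += tau0 * np.exp(-v ** 2 / (2.0 * sigma_kms ** 2))
    return 1.0 - c_f + c_f * np.exp(-tau_total)

def log_prob(theta, wave, flux_norm, err_norm, z_sys):
    c_f, tau0_r, voff, sigma = theta
    # Flat priors within broad, assumed bounds.
    if not (0.0 < c_f <= 1.0 and 0.0 < tau0_r < 50.0 and -2000.0 < voff < 2000.0 and 10.0 < sigma < 1000.0):
        return -np.inf
    model = nad_excess_profile(wave, z_sys, c_f, tau0_r, voff, sigma)
    return -0.5 * np.sum(((flux_norm - model) / err_norm) ** 2)

def run_fit(wave, flux_norm, err_norm, z_sys, p0_best, nwalkers=32, nsteps=5000):
    """Walkers start in a small ball around a preliminary least-squares solution."""
    ndim = len(p0_best)
    p0 = np.asarray(p0_best) + 1e-3 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(wave, flux_norm, err_norm, z_sys))
    sampler.run_mcmc(p0, nsteps, progress=True)
    return sampler
```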
{ "authors": [ "Rebecca L. Davies", "Sirio Belli", "Minjung Park", "J. Trevor Mendel", "Benjamin D. Johnson", "Charlie Conroy", "Chloë Benton", "Letizia Bugiani", "Razieh Emami", "Joel Leja", "Yijia Li", "Gabriel Maheson", "Elijah P. Mathews", "Rohan P. Naidu", "Erica J. Nelson", "Sandro Tacchella", "Bryan A. Terrazas", "Rainer Weinberger" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231027073332", "title": "JWST Reveals Widespread AGN-Driven Neutral Gas Outflows in Massive z ~ 2 Galaxies" }
In recent years, there has been growing interest in text-to-SQL translation, which is the task of converting natural language questions into executable SQL queries. This technology is important for its potential to democratize data extraction from databases. However, some of its key hurdles include domain generalisation, which is the ability to adapt to previously unseen databases, and alignment of natural language questions with the corresponding SQL queries. To overcome these challenges, we introduce SQLformer, a novel Transformer architecture specifically crafted to perform text-to-SQL translation tasks. Our model predicts SQL queries as abstract syntax trees (ASTs) in an autoregressive way, incorporating structural inductive bias in the encoder and decoder layers. This bias, guided by database table and column selection, aids the decoder in generating SQL query ASTs represented as graphs in a Breadth-First Search canonical order. Comprehensive experiments illustrate the state-of-the-art performance of SQLformer in the challenging text-to-SQL Spider benchmark. Our implementation is available at https://github.com/AdrianBZG/SQLformer. § INTRODUCTION Relational databases are essential tools within various critical sectors like healthcare and industry among others. For those with technical expertise, accessing data from these databases using some form of structured query language, such as SQL, can be efficient. However, the intricate nature of SQL can make it daunting for non-technical users to learn, creating significant barriers to use. Consequently, there has been a surge in interest in the field of text-to-SQL <cit.>, which aims to convert natural language questions (NLQs) directly into SQL queries. This has the potential to dramatically reduce the obstacles faced by non-expert users when interacting with relational databases (DBs). Early work in the field primarily focused on developing and evaluating semantic parsers for individual databases <cit.>. However, given the widespread use of DBs, an approach based on creating a separate semantic parser for each database does not scale. One of the key hurdles in achieving domain generalisation <cit.> is the need for complex reasoning to generate SQL queries rich in structure. This involves the ability to accurately contextualise a user query against a specific DB by considering both explicit relations (like the table-column relations defined by the DB schema) and implicit relations (like determining if a phrase corresponds or applies to a specific column or table). Recently, there has been a release of large-scale datasets <cit.> comprising hundreds of DBs and their associated question-SQL pairs. This has opened up the possibility of developing semantic parsers capable of functioning effectively across different DBs <cit.>. However, this requires the model to interpret queries in the context of relational DBs unseen during training, and precisely convey the query intent through SQL logic. As a result, cross-DB text-to-SQL semantic parsers cannot simply rely on memorising observed SQL patterns.
Instead, they must accurately model the natural language query, the underlying DB structures, and the context of both.Current strategies for cross-DB text-to-SQL semantic parsers generally follow a set of design principles to navigate these challenges. First, the question and schema representation are contextualised mutually by learning an embedding function conditioned on the schema <cit.>. Second, pre-trained language models (LMs), such as BERT <cit.> or RoBERTa <cit.>, have been shown to greatly improve parsing accuracy by enhancing generalisation over language variations and capturing long-range dependencies. Related approaches <cit.> have adopted pre-training on a BERT architecture with the inclusion of grammar-augmented synthetic examples, which when combined with robust base semantic parsers, have achieved state-of-the-art results.In this paper, we present SQLformer, which integrates the above design principles into a novel Transformer variant for text-to-SQL translation. We conceptualize each NLQ as a graph with multiple relationships, including syntactic dependencies and part-of-speech. The database schema is depicted as a graph, described by the metadata for the tables, columns, and their relations. Drawing inspiration from the image domain <cit.>, we incorporate two learnable token embeddings for table and column representations into the encoder. These are used to select a set of k_1 and k_2 tables and columns over the target database. Our model learns embeddings for the suggested tables and columns, enriching the decoder input with database information. This guides the decoder by contextualizing the input with the most relevant tables and columns from the given NLQ. Finally, we propose an autoregressive decoder, that predicts the SQL query as an AST. Experimental results on the Spider benchmark show that SQLformer achieves 78.2% exact match (EM) accuracy, surpassing multiple state-of-the-art baselines.§ RELATED WORK In earlier research, a sketch-based slot filling approach was commonly used, which employs different modules to predict distinct parts of the generated SQL query. This approach breaks down the task of SQL generation into several independent sketches and utilises different classifiers to predict the separate parts, as shown in methods such as SQLNet <cit.>, TypeSQL <cit.>, SQLOVA <cit.>, X-SQL <cit.> or RYANSQL <cit.>. However, most of these methods only address simple queries and struggle to generate accurate queries in the more complex scenarios found in the Spider dataset <cit.>. The main challenge lies in the multi-table relations in the Spider dataset queries.There have been multiple approaches to address the challenges brought by these complex SQL tasks. A common approach has been the use of attention-based architectures for question-schema encoding, and rule-based structural architectures for query decoding. For instance, IRNet <cit.> separately encodes the question and schema using a LSTM and a self-attention mechanism respectively. Schema linking is accomplished by enhancing the question-schema encoding with custom type embeddings. The rule-based decoder from <cit.> was then used in order to decode a query into an intermediate representation, attaining a high-level abstraction for SQL.On the other hand, multiple works make use of graph structures to encapsulate a range of complex relationships. 
For instance, Global-GNN <cit.> models the database as a graph, while RAT-SQL <cit.> introduces schema encoding and linking, attributing a relation to every pair of input items. Further developments include LGESQL <cit.>, which distinguishes between local and non-local relations using a line graph enhanced hidden module; SADGA <cit.> which utilises contextual and dependency structure to jointly encode the question graph with the database schema graph; S^2SQL <cit.> which incorporates syntactic dependency information in a relational graph attention network architecture <cit.>, and RASAT <cit.> which integrates a relation-aware self-attention module into a T5 model <cit.>.Recent work has demonstrated the effectiveness of fine-tuning pre-trained models. For instance, <cit.> showed that fine-tuning a pre-trained T5-3B model could yield competitive results. Building on this, <cit.> introduced PICARD, a technique that constrains the auto-regressive decoder by applying incremental parsing during inference time. This approach filters out grammatically incorrect sequences in real time during beam search, improving the quality of the generated SQL.§ PRELIMINARIES§.§ Problem Formulation Given a natural language question Q and a schema S = < T, C > for a relational database, our objective is to generate a corresponding SQL query Y. Here, the sequence Q = q1 … q|Q| is a sequence of natural language tokens or words, where |Q| is the length of the question. The database schema is comprised of tables T = {t1, …, t|T|} and columns C = {c1, …, c|C|}, where |T| and |C| are the number of tables and columns in the database, respectively. Each column name ci ∈ C, is comprised of tokens ci,1, …, ci,| C_i| , where |Ci| is the number of tokens in the column name, and similarly table names are also comprised of tokens ti,1, …, ti,| t_i| , where | t_i| is the number of tokens in the table name. §.§ Query Construction In contrast to previous work, we model the output SQL query Y as a graph, representing the AST of the query in the context-free grammar of SQL, which our model learns to generate in an autoregressive fashion. The query is an undirected graph G = (V, E), of vertices V and edges E. Its nodes V = P ∪ T ∪ C are the possible SQL context-free grammar rules, P, such as UNION, SELECT, FROM, INTERSECTION, etc, as well as the tables (T) and the columns (C) of the database schema. P are used to represent non-terminal nodes, depicting rules of the grammar, whereas T and C are used for terminal nodes, such as when selecting table or column names to be applied within a specific rule. The edge set E = {(vi,vj) | vi, vj ∈ V} defines the connectivity between the different nodes in the graph.We represent the graph using an adjacency matrix, under a Breadth-First-Search (BFS) node ordering scheme π that maps nodes to rows of the adjacency matrix as a sequence <cit.>. This approach permits the modelling of graphs of varying size, such as the ones representing the ASTs of complex SQL queries. 
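A minimal sketch of this BFS-ordered, adjacency-vector representation of a query graph is shown below (using networkx for the traversal). The truncation of each adjacency vector to the most recent `max_prev` nodes mirrors the maximum BFS frontier used in our implementation; the exact indexing convention here is an illustrative choice.

```python
import networkx as nx
import numpy as np

def graph_to_bfs_sequence(g: nx.Graph, root=0, max_prev=30):
    """Map a SQL query AST graph to a sequence of adjacency vectors under a BFS ordering.

    Each vector records which of the (up to `max_prev`) most recently visited nodes the
    current node connects to; this is the sequential representation that the decoder
    learns to emit. The convention "most recent previous node first" is ours.
    """
    order = [root] + [child for _, child in nx.bfs_edges(g, root)]  # BFS node ordering pi
    position = {node: i for i, node in enumerate(order)}
    sequence = []
    for i, node in enumerate(order[1:], start=1):
        adj = np.zeros(max_prev, dtype=np.int64)
        for neighbour in g.neighbors(node):
            j = position[neighbour]
            if j < i and (i - 1 - j) < max_prev:
                adj[i - 1 - j] = 1
        sequence.append(adj)
    return order, np.stack(sequence)
```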
Formally, given a mapping f_S from graphs (G) to sequences (S), and a graph G with n nodes under BFS node ordering π, we can formulate S^π = f_S(G, π) = (S^π_1, … , S^π_n) where S^π_i ∈ {0, 1}i-1, i ∈ {1, …, n} depicts an adjacency vector between node π(vi) and the previous nodes π(vj), j ∈ {1, …, i - 1} already existing in the graph, so that: S^π_i = A(^π_1,i, …, A^π_i-1,i)^T, ∀i∈{2, …, n} Then, using S^π, we can determine uniquely the SQL graph G in a sequential form and learn to predict it autoregressively.§ SQLFORMER§.§ Model OverviewIn light of recent advancements in the field <cit.>, we approach the text-to-SQL problem as a translation task by using an encoder-decoder architecture. We extend the original Transformer encoder (see Subsection 4.3) by incorporating learnable table and column tokens in the encoder, used to select the most relevant tables and columns in the database schema given the NLQ. This information is injected as input to the decoder, so that it can be enriched with the representation of the schema-aware question encoding and the most relevant tables and columns in the database schema selected by the model. The SQLformer decoder extends the original Transformer decoder (see Subsection 4.4) in a way that integrates both node adjacency and type embeddings for generating a SQL query autoregressively. The overall architecture of our SQLformer model is described in Fig. <ref>. §.§ Model Inputs In this section, we detail how the inputs to our model are constructed, in particular, the construction of both the NLQ and schema graphs are explained. Question Graph Construction. The natural language question can be formulated as a graph GQ = <Q, R> where the node set Q are the natural language tokens, and R = {r1, …, r|R|}, refers to one-hop relations between words. In this work, we employ two groups of relations for the question graph. First, we use syntactic dependencies between the words in the question. Second, we use part-of-speech tagging to incorporate grammatical meaning across the words in the question. We create a joint question graph using both types of relations. This graph is then linearized as a Levi graph. Fig. <ref> shows an example question graph with some illustrative relationships. To encode the question graph we use a GAT <cit.>, obtaining an embedding for each of the question tokens, Zi ∈ ℝ^d, with i ∈ {1, …, |Q|}, where d is the hidden size. Database Schema Graph Construction. Similarly, a database schema graph can be represented by GS = <S, R> where the node set S = <T, C> represents the tables, T, and the columns, C, in the schema. The edge set R = {r1, …, r|R|} depicts the structural relationships among tables and columns in the schema. Similarly to previous works, we use the common relational database-specific relations, such as primary/foreign key for column pairs, column types, and whether a column belongs to a specific table. Fig. <ref> shows an example database schema graph. We encode the schema graph using a GAT <cit.> and use global average pooling to obtain a single embedding to represent each database schema. §.§ Table and Column Selection Encoder The SQLformer encoder receives as input the previously described 1-D sequence of natural language token embeddings, Z, and we prepend two learnable tokens to the sequence of embeddings: Z_tables and Z_cols. 
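Before detailing the encoder further, the sketch below illustrates how a question graph can be built from stanza dependency parses and encoded with a GAT in PyTorch Geometric, as just described. The single-sentence assumption, the omission of the part-of-speech (Levi-graph) nodes and the two-layer GAT configuration are simplifications of ours.

```python
import torch
import stanza
from torch_geometric.data import Data
from torch_geometric.nn import GATConv

nlp = stanza.Pipeline(lang="en", processors="tokenize,pos,lemma,depparse", verbose=False)

def build_question_graph(question: str, token_embeddings: torch.Tensor) -> Data:
    """Question graph whose edges follow syntactic dependency arcs (single-sentence sketch).

    `token_embeddings` holds one embedding per word; part-of-speech relations could be
    added analogously as extra Levi-graph nodes, which is omitted here.
    """
    words = nlp(question).sentences[0].words
    arcs = [[w.head - 1, w.id - 1] for w in words if w.head > 0]  # 0-based dependency arcs
    arcs += [[b, a] for a, b in arcs]                             # symmetrise for an undirected graph
    edge_index = torch.tensor(arcs, dtype=torch.long).t().contiguous()
    return Data(x=token_embeddings, edge_index=edge_index)

class QuestionGraphEncoder(torch.nn.Module):
    """Two-layer GAT producing one contextualised embedding per question token."""
    def __init__(self, dim=512, heads=4):
        super().__init__()
        self.gat1 = GATConv(dim, dim // heads, heads=heads)
        self.gat2 = GATConv(dim, dim // heads, heads=heads)

    def forward(self, data: Data) -> torch.Tensor:
        z = torch.relu(self.gat1(data.x, data.edge_index))
        return self.gat2(z, data.edge_index)
```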
The state of these tokens at the output of the Transformer encoder, depicted here as X̂tables and X̂columns for tables and columns, respectively, serves as input to two Multi Layer Perceptron (MLP) blocks, that are responsible for, given the NLQ, selecting k_1 and k_2 tables and columns, respectively. k_1 and k_2 are both hyperparameters to the model. Sinusoidal vectors are added to the sequence embeddings to retain the original positional information of the question. The Transformer encoder <cit.> consists of alternating layers of multi-head self-attention (MHA) and Fully-connected Forward Network (FFN) blocks. Before every block, Layer Normalisation (LN) is applied, and after every block, a residual connection is added. More formally, in the ℓ^th encoder layer, the hidden states are represented as X^ℓ_S = {x^ℓ_1, …, x^ℓ_N}, where N is the maximum length of the inputs. First, a MHA block maps X into a query matrix Q ∈ ℝ^n× d_k, key matrix K ∈ ℝ^n× d_k and value matrix V ∈ ℝ^n× d_v, where m is the number of query vectors, and n the number of key or value vectors. Then, an attention vector is calculated as follows Attention(Q, K, V) = softmax(A) VA = Q KT/√(dk) In practice, the MHA block calculates the self-attention over h heads, where each head i is independently parametrized by W^Q_i ∈ ℝ^d_m× d_k, W^K_i ∈ ℝ^d_m× d_k and W^V_i ∈ ℝ^d_m× d_v, mapping the input embeddings X into queries and key-value pairs. Then, the attention for each head is calculated and concatenated, as follows H_i = Attention(Q W^Q_i, K W^K_i, V W^V_i) MHA(X^ℓ_S) = Concat(H_1, …, H_h) W^O X^ℓ_S = MHA(X^ℓ_S) where WO ∈ ℝ^d^h_m× d_m is a trainable parameter matrix. Next, to acquire the semantic hidden states of the input, a FFN block is applied, as follows FFN(X^ℓ_S) = max(0, X^ℓ_S W1 + b1) W2 + b2 where W1 ∈ ℝ^d_m× d_ff and W2 ∈ ℝ^d_ff× d_m are linear weight matrices. Finally, layer normalisation and residual connection are applied as follows X̂^ℓ_S = LayerNorm(X^ℓ_S + FFN(X^ℓ_S))Therefore, after L encoder layers, we obtain the input question embedding as X̂ . Where the first and second tokens, X̂_̂0̂ and X̂_̂1̂, correspond to X̂_̂t̂âb̂l̂êŝ and X̂_̂ĉôl̂ûm̂n̂ŝ, and the remaining tokens correspond to the natural language question tokens embeddings, depicted as X̂_̂Q̂ ∈ ℝ^d× Q. X̂_̂t̂âb̂l̂êŝ and X̂_̂ĉôl̂ûm̂n̂ŝ are the input of two MLP blocks, MLPtables ∈ ℝ^d× T and MLPcolumns ∈ ℝ^d× C, where d is the hidden size of the token embeddings, and T and C are the sizes of the tables and columns vocabularies, respectively. Both MLP blocks project the embeddings for the additional tokens into two separate vectors of probabilities, as follows P_tables = softmax(MLP^tables(X̂^tables)) P_columns = softmax(MLP^columns(X̂^columns))Then, the top k_1 and k_2 tables and columns, respectively, are selected according to P_tables and P_columns. Next, two embedding lookup tables, ET ∈ ℝ^T× d_t and EC ∈ ℝ^C× d_c, are used for mapping the k top tables and columns, respectively, into embeddings, as X^k_tables ∈ ℝ^k_1× d and X^k_columns ∈ ℝ^k_2× d, where d is the size of the learnable embeddings. These are aggregated and concatenated, giving the final representation for the schema, depicted as X̂_̂ŝĉĥêm̂âFinally, X̂_̂Q̂ and X̂_̂ŝĉĥêm̂â are aggregated to effectively contextualize the natural language question embedding by the embedding of the most likely tables and columns in the schema being mentioned. The result of this aggregation is given as input to the decoder module. 
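The sketch below illustrates the encoder just described: two learnable tokens are prepended to the question token embeddings, their output states are scored against the table and column vocabularies, and the top-k_1/k_2 entries are embedded and aggregated into a schema representation. The vocabulary sizes, the mean pooling of the selected embeddings and the final concatenation are illustrative choices; positional encodings and the pre-trained token embeddings are omitted for brevity.

```python
import torch
import torch.nn as nn

class TableColumnSelectionEncoder(nn.Module):
    """Sketch of the SQLformer encoder with learnable table/column selection tokens."""
    def __init__(self, d_model=512, n_layers=6, n_heads=8,
                 n_tables=1000, n_columns=5000, k1=20, k2=20):
        super().__init__()
        self.table_token = nn.Parameter(torch.randn(1, 1, d_model))
        self.column_token = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2048, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.table_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, n_tables))
        self.column_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, n_columns))
        self.table_emb = nn.Embedding(n_tables, d_model)
        self.column_emb = nn.Embedding(n_columns, d_model)
        self.k1, self.k2 = k1, k2

    def forward(self, question_tokens):  # (batch, seq_len, d_model); positional encodings omitted
        b = question_tokens.size(0)
        x = torch.cat([self.table_token.expand(b, -1, -1),
                       self.column_token.expand(b, -1, -1),
                       question_tokens], dim=1)
        x = self.encoder(x)
        p_tables = self.table_head(x[:, 0]).softmax(dim=-1)    # P_tables
        p_columns = self.column_head(x[:, 1]).softmax(dim=-1)  # P_columns
        top_tables = self.table_emb(p_tables.topk(self.k1, dim=-1).indices).mean(dim=1)
        top_columns = self.column_emb(p_columns.topk(self.k2, dim=-1).indices).mean(dim=1)
        schema = torch.cat([top_tables, top_columns], dim=-1)  # aggregation choice is assumed
        return x[:, 2:], schema                                # question encoding and schema summary
```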
§.§ Autoregressive Graph Generation Decoder During the decoding phase, previous works (e.g. <cit.>) widely adopt the LSTM-based tree decoder in <cit.> to generate SQL grammar rules. In contrast, the SQLformer decoder (see Fig. <ref>) extends the original Transformer decoder to predict the SQL AST autoregressively. This approach has two main advantages. First, it is able to maintain the context of previously generated parts of the query for longer sequences than LSTM-based decoders. This is especially important for long SQL queries, such as those containing sub-queries. Second, it encourages the generation of valid SQL queries by constraining the decoder to directly generate SQL ASTs. Also, the Transformer permutation invariance is desirable for processing the node embeddings of the SQL graph, as the graph is invariant under any permutation of the nodes. In the SQLformer decoder, the node embeddings are represented as a linear transformation of the node adjacency vectors, here called node adjacency channels. Formally, given a query graph G, we represent the node adjacencies A = {A_0, A_1, …, A_N}, where N is the number of nodes and A_i ∈ {0, 1}^M. M is the maximum frontier of the BFS ordering. The node adjacency channels of A, represented as H_A, are calculated as follows: H_A = A W_A, where W_A ∈ ℝ^|A|× d_A is a learnable weight matrix with a hidden size of d_A. In addition, we introduce the node types, represented as V = {V_0, V_1, …, V_N}, where N is the number of nodes and V_i is a one-hot representation of the node type for node i. The objective of V is to include the information about the query graph node types into the decoding process. Similarly to A, we transform V by using a linear transformation into the node type channels, H_V, as follows: H_V = V W_V, where W_V ∈ ℝ^|V|× d_V is a learnable weight matrix with a hidden size of d_V, and |V| is the number of possible node types. However, the basic Transformer does not have a direct way to incorporate both channels. Consequently, in order to alleviate this issue and incorporate both node adjacency channels and node type channels into the SQLformer decoder, we extend the original Transformer decoder architecture. In particular, inspired by <cit.>, we include the node type channels in the multi-head self-attention aggregation process as a bias term (see Fig. <ref>). Formally, we modify Eq. <ref> so that H_V acts as a bias term in the attention calculation, such that A = Q K^T/√(d_k) + U, with U = W_U× H_V, where W_U ∈ ℝ^d× d_U is a learnable weight matrix that updates the input node type embeddings H_V into U, and d_U is the node type embedding dimensionality. In addition to the original Transformer output projection layer, depicted here as O_A ∈ ℝ^h× h, we add an additional output projection layer for updating the residuals of the node type embeddings, defined here as O_V ∈ ℝ^d_U× h.
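The attention bias U built from the node type channels can be sketched as follows. This is a single-head, batch-first illustration written by us; the exact shape and broadcasting of U, and the layer interface, are assumptions rather than the authors' code.

```python
import math
import torch
import torch.nn as nn

class BiasedSelfAttention(nn.Module):
    """Single-head sketch of self-attention with a node-type bias term U = W_U H_V."""
    def __init__(self, d_model=512, d_type=512):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.w_u = nn.Linear(d_type, 1)            # projects node-type channels into a per-node bias (assumed form)

    def forward(self, h_a, h_v):                   # h_a: (B, N, d_model) adjacency channels, h_v: (B, N, d_type)
        d_k = h_a.size(-1)
        scores = self.q(h_a) @ self.k(h_a).transpose(-2, -1) / math.sqrt(d_k)
        u = self.w_u(h_v).transpose(-2, -1)        # (B, 1, N): bias broadcast over every query row
        att = torch.softmax(scores + u, dim=-1)    # A = QK^T / sqrt(d_k) + U, then G = softmax(A)
        return att @ self.v(h_a), att
```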
Specifically, the update of the embeddings H^ℓ_A and H^ℓ_V, for the node adjacency and node type channels, respectively, at layer ℓ, can be formalised as H^ℓ_A = H^ℓ-1_A + O^ℓ_A ∥_k=1^K ∑_j=1^N G^k,ℓ (V^k,ℓ h^ℓ_A), H^ℓ_V = H^ℓ-1_V + O^ℓ_V ∥_k=1^K A^k,ℓ, with G^k,ℓ = softmax(A^k,ℓ), where ∥ means concatenation, and K is the number of attention heads. Finally, the representation of both H_A and H_V after the last decoder layer is then fed to two distinct MLP heads, which emit the predicted (soft) node adjacencies a^t+1 and node types v^t+1 for timestep t+1 as follows: a^t+1 = σ(W_a_2 ReLU(W_a_1 H^t_A)), v^t+1 = softmax(W_v_2 ReLU(W_v_1 H^t_V)), where W_a_1, W_v_1 ∈ ℝ^512× d, W_a_2 ∈ ℝ^M× 512, W_v_2 ∈ ℝ^|V|× 512, M is the maximum size of the BFS-ordering frontier, |V| the size of the node type vocabulary, and σ represents the sigmoid operation. § EXPERIMENTS In this section, we show our model performance on the Spider text-to-SQL dataset <cit.>. Also, we present ablation studies to analyse the importance of the different components of the SQLformer architecture. §.§ Experimental Setup Dataset. Our experiments use the Spider dataset, a large-scale cross-domain text-to-SQL benchmark. This dataset also incorporates multiple text-to-SQL datasets. The Spider dataset contains 8,659 training examples of question and SQL query pairs (along with the corresponding database schemas) and 1,034 development (dev) examples, spanning 200 complex databases across 138 different domains. The test set is not available for examination. Evaluation Metrics. Following <cit.>, we report results using the same metrics. In particular, we compute Exact Match (EM) accuracy on all examples, as well as grouped by difficulty levels. EM evaluates how closely a predicted SQL query matches the ground truth query. Similarly to previous work <cit.> on Spider, these metrics do not take into account the model's performance on generating the constant values in the SQL query. In our ablation study experiments, we also use the EM accuracy metric over the development set. Implementation Details. We implemented SQLformer in PyTorch <cit.>. For the graph neural network components, we use PyTorch Geometric <cit.>. The questions, column and table names are tokenized and lemmatized using stanza <cit.>. For dependency parsing and part-of-speech tagging, stanza <cit.> is used. To transform the SQL queries into their corresponding ASTs, we use sqlglot. We find the best set of hyperparameters on a randomly sampled subset of 10% of the samples from the dev dataset. For training, we set the maximum input length to 1024, the maximum number of generated AST nodes to 200, the maximum number of previous AST nodes in the BFS ordering to 30, the batch size to 16, and the number of training steps to 20,000. The numbers of layers for the encoder and decoder are both set to 6, and the number of heads is 8. The dimensionalities of the encoder and the decoder are set to 512. k_1 and k_2 are set to 20. The embedding sizes for tables and columns are set to 512. The node adjacency and type embedding sizes are 512. The output MLPs for generating the node adjacencies and types have 2 layers and a dimensionality of 512. Token embeddings are initialized with ELECTRA <cit.> using the official weights from the HuggingFace library <cit.>. We use teacher forcing in the decoder. Results are on the dev set unless stated otherwise. §.§ Overall Performance The EM accuracy results on the Spider benchmark are presented in Table <ref>. As shown in the table, our proposed model SQLformer achieves competitive performance in EM accuracy.
On the development set, compared with RAT-SQL <cit.>, our model's EM increases from 73.7% to 75.6%, a 1.9% absolute improvement. When compared to approaches that fine-tune a Language Model (LM) with a much larger number of parameters, such as T5-3B (71.5%), we achieve a 4.1% absolute improvement. This effectively shows the benefit of our proposed architecture for solving text-to-SQL tasks. Furthermore, we provide a breakdown of accuracy by query difficulty level, i.e. easy, medium, hard, and extra hard, as defined by <cit.>. In Table <ref> we provide a comparison between our approach and state-of-the-art baselines on the EM accuracy metric, for the four query difficulty subsets. As expected, performance drops significantly with increasing query difficulty, falling from 92.7% accuracy on easy queries to 51.2% on extra hard queries. Focusing on the most complex types of queries, when compared with RAT-SQL, SQLformer achieves an absolute improvement of 9.7% and 8.3% on hard and extra hard queries, respectively. This consolidates our motivation to employ a Transformer-based SQL decoder, allowing the model to capture longer dependencies. Therefore, SQLformer surpasses the baseline methods across all four subsets by a significant margin, giving supporting evidence for the effectiveness of our approach. §.§ Ablation Study In order to better validate the importance of each component in our architecture, we perform a series of ablation studies on the best performing SQLformer model. In Table <ref>, we compare 4 different design choices that we believe are critical in our architecture. In particular, we assess the impact of removing the table and column selection component from the encoder, the part-of-speech question encoding, and the dependency graph question encoding. As shown in Table <ref>, the component that has the biggest impact on the architecture is the table and column selection. Upon removing this component, the EM accuracy drops from 78.2% to 72.3%, a 5.9% absolute performance drop. We hypothesise that such a mechanism injects the notion of schema-question linking, which has been demonstrated to be critical. Therefore, without schema linking, the joint contextualisation of question and schema is missing, significantly increasing the difficulty of the task. On the other hand, removing the dependency graph and part-of-speech question encodings has less impact on performance, leading to absolute performance decreases of 0.7% and 0.9%, respectively. When swapping our decoder with the one in <cit.>, performance decreases by 4%. § CONCLUSION In this work, we introduced SQLformer, a new model for text-to-SQL generation, unique compared to previous models due to its autoregressive prediction of the SQL AST. With a specially designed encoder, SQLformer links questions and schema, utilizing pre-trained models for effective representation. A novel decoder layer integrates node adjacency and type information during learning, and is conditioned on the top-selected tables, columns, and schema-aware question encoding to generate SQL queries. We anticipate that this architecture can generate queries in other languages modelled as graphs, such as SPARQL. Notably, SQLformer outperformed other competitive text-to-SQL baselines, showcasing its state-of-the-art performance. § LIMITATIONS One of the main limitations of our work is its focus on the English language, as it is the language used by most publicly available datasets.
A potential way to alleviate this is by using multi-language PLMs for processing the questions. Another relevant drawback is the requirement to be able to transform queries into ASTs, such that model training is possible. However, most popular modern query languages have libraries available for performing such transformations. Finally, it is worth noting the significant GPU resource requirements for training the architecture.
http://arxiv.org/abs/2310.18376v1
{ "authors": [ "Adrián Bazaga", "Pietro Liò", "Gos Micklem" ], "categories": [ "cs.CL", "cs.LG" ], "primary_category": "cs.CL", "published": "20231027001359", "title": "SQLformer: Deep Auto-Regressive Query Graph Generation for Text-to-SQL Translation" }
The impact of convective criteria on the properties of massive stars Sibony et al. Observatoire de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland [email protected] Grids of stellar models computed with either the Ledoux or the Schwarzschild criterion to determine the sizes of convective regions are available in the literature. It is still not clear, however, which of these two criteria should be used, although many works have been devoted to that question in the past. In the framework of the evolution of single rotating stars, we study the differences between models computed with Ledoux and Schwarzschild criteria on the internal structure, evolutionary track in the Hertzsprung-Russell diagram (HRD), lifetimes, evolution of the surface abundances and velocities, and masses of the He and CO cores. We investigate the consequences on the nature of the supernova (SN) progenitors and the type of SN events, as well as on the stellar yields of light elements. We also study the impact on the outputs of population synthesis models. Models with initial masses between 7 and 120 M_⊙ at solar metallicity (Z=0.014) and with an initial rotation equal to 0 or 0.4 times the critical velocity at the zero-age main sequence were computed with either the Schwarzschild or the Ledoux criterion until the end of the C-burning phase. Models with initial masses between 15 and 32 M_⊙ computed with the Schwarzschild criterion show larger intermediate convective zones attached to the H-burning shell than models computed with the Ledoux criterion. Their CO cores and outer convective zones in the red supergiant (RSG) phase are also smaller. This impacts many outputs of stars during the core He-burning phase: the blue-to-red supergiant ratio of the Schwarzschild models is much higher than for Ledoux models. They also produce longer crossings of the Hertzsprung gap and favour blue loops. The upper luminosity of RSGs is little affected by the change in the convective criterion. The maximum luminosity of RSG progenitors for type II-P SN events is lowered from log(L [L_⊙])=5.2 to 4.95 when the Ledoux criterion is used instead of the Schwarzschild criterion in non-rotating models. The Schwarzschild criterion predicts longer-lasting, less nitrogen-enriched, and faster-rotating Cepheids. Rotational mixing tends to decrease the differences between Schwarzschild and Ledoux models. The results of this paper can be used as first guidelines to set up observational programs that may help to distinguish between these two model families. The impact of convective criteria on the properties of massive stars Y. Sibony, C. Georgy, S. Ekström, G. Meynet § INTRODUCTION Numerous difficulties arise when stars are modeled. The treatment of convection has long been one of these difficulties. The classical way of dealing with convection in one-dimensional (1D) stellar evolution codes is the following: 1) find the boundaries of the convective zone by applying an instability criterion, 2) empirically increase the size of (some of) the convective zones (a process often called “overshooting”), and 3) compute a thermal gradient to be applied inside the convective zone.
For the first point, two criteria are widely used in the literature to determine the stability of a thermally stratified medium with gravity as the restoring force: the so-called Schwarzschild criterion, which writes (for stability against convection) ∇_rad < ∇_ad, where ∇_rad = (dln T/dln P)_rad is the radiative thermal gradient, and ∇_ad = (dln T/dln P)_ad is the adiabatic thermal gradient; and the Ledoux criterion, which writes (also for stability) ∇_rad < ∇_ad + φ/δ∇_μ, where φ = (∂lnρ/∂lnμ)_T,P, δ = -(∂lnρ/∂ln T)_P,μ, and ∇_μ = (dlnμ/dln P). It is possible that some layers inside the star are Ledoux-stable but Schwarzschild-unstable. In case of a thermally dissipative medium, however, <cit.> showed that Eq. (<ref>) is reduced to Eq. (<ref>) due to the onset of oscillatory convection. For this reason, the Schwarzschild criterion is most of the time preferred to the Ledoux criterion in stellar evolution computations. Another option is to apply a partial mixing of the layers encompassed by the two criteria. This is called “semiconvection” <cit.>. In this framework, models computed with the Ledoux criterion correspond to totally inefficient semiconvection, while models computed with the Schwarzschild criterion correspond to infinitely efficient semiconvection. Point 2 was introduced in stellar evolution codes as a necessity to reproduce some observational features, such as the width of the main sequence (MS) of open clusters <cit.>. Different implementations for this additional mixing (co)exist in stellar evolution codes: penetrative overshoot <cit.>, diffusive overshoot <cit.>, entrainment <cit.>, or, more recently, prescriptions deduced from 2D fully compressible time-implicit simulations <cit.>. A thorough discussion of these various implementations and their link to hydrodynamics simulations can be found in <cit.>. Finally, point 3 requires the computation of a thermal gradient to be applied inside the convective regions. For deep convection, the timescale for radiative exchanges between the convective cells and the environment is longer than the convective instability timescale, and the process is very close to being adiabatic, thus the adiabatic gradient ∇_ad can be applied. In convective regions closer to the surface, more complex theories need to be used. A usual choice in this case is the mixing-length theory <cit.>. To obtain a complete picture of convection in stellar interiors, 3D hydrodynamics simulations are required. Convection in stars is very turbulent: the estimated Reynolds number is as high as 10^9 <cit.>. The mass flux is therefore highly asymmetric, requiring a multi-dimensional treatment. Efforts to do this have been made during the past decade for various physical environments <cit.>. However, all of these simulations were limited to a small region and/or a short time compared to the size and lifetime of a star. These results can nevertheless be used to build new algorithms that can obtain a better agreement between the 1D stellar evolution codes and the 3D convection simulations. Some attempts have been made <cit.>, but these procedures are not yet fully mature and still need to be improved for routine use in stellar evolution calculations. It is not clearly established which of the criteria for convection should be used in stellar evolution computations. Moreover, when the Ledoux criterion is used, the efficiency of semiconvection is not known either. Since the pioneering works of <cit.>, very little progress has been made.
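Schematically, the two criteria differ only in the threshold against which the radiative gradient is compared. The following sketch (our own illustration, not the stellar-evolution code used in this work) evaluates the stability of a single shell under either criterion, given the local gradients.

```python
def is_convective(nabla_rad, nabla_ad, nabla_mu=0.0, phi=1.0, delta=1.0,
                  criterion="schwarzschild"):
    """Return True if a shell is convectively unstable.

    Schwarzschild: unstable if nabla_rad > nabla_ad
    Ledoux:        unstable if nabla_rad > nabla_ad + (phi/delta) * nabla_mu
    For an ideal gas (phi = delta = 1) and nabla_mu = 0 the two criteria coincide.
    """
    threshold = nabla_ad
    if criterion == "ledoux":
        threshold += (phi / delta) * nabla_mu
    return nabla_rad > threshold

# a shell with a stabilising mu-gradient: Schwarzschild-unstable but Ledoux-stable
print(is_convective(0.42, 0.40, nabla_mu=0.05, criterion="schwarzschild"))  # True
print(is_convective(0.42, 0.40, nabla_mu=0.05, criterion="ledoux"))         # False
```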
<cit.> simulated and compared the evolution and supernova (SN) light curves of Population III stars computed with either criterion and found Schwarzschild stars to be much larger and cooler than their Ledoux counterparts. <cit.> calibrated the mixing-length parameter with red supergiants (RSGs) of different metallicities and predicted a decrease in the hydrogen content of stellar envelopes with increasing metallicity when using the Schwarzschild criterion, but not the Ledoux criterion. <cit.> computed grids of Ledoux models at the metallicity of the Small Magellanic Cloud (SMC; Z=0.002) and found that more efficient semiconvection increases the ratio of blue to red supergiants. <cit.> computed non-rotating models of 15, 20, and 25 M_⊙ at solar metallicity to study the relative impact of various convective parameters. They found the strength of convective boundary mixing to have a larger impact than the choice of criterion for convection, except for determining the initial location of intermediate convective zones. <cit.> performed a 3D hydrodynamical simulation of a convective zone adjacent to a semiconvective region. They reported that both criteria gave the same results when the evolutionary timescale exceeds the convective-overturn timescale (e.g. during the main sequence), but that differences can emerge when the evolution occurs on a rapid timescale (e.g. when the Hertzsprung gap is crossed). We pursue our efforts to determine which criterion better reproduces the observed features of massive stars <cit.>. To do this, we provide the first detailed comparison of Schwarzschild and Ledoux criterion models over a wide mass range at solar metallicity, including the effects of rotation. In Sect. <ref>, we present the main physical ingredients of the stellar models. We discuss the impact of changing the convective criterion on the cores and convective structures of stars in Sect. <ref>. Section <ref> presents the impact on various aspects of post-MS stellar evolution. We compare the properties of stars at the end of their evolution in Sect. <ref>. We present models for population synthesis in Sect. <ref>. We compare our results to previous works on the subject and to an observed population of evolved massive stars in Sect. <ref>. The discussion and conclusions are given in Sect. <ref>. § INGREDIENTS OF THE MODELS The models computed with the Schwarzschild criterion were presented in <cit.>. The convective regions are determined starting from the centre of the star and going outwards. In each shell of the model, the adiabatic and the radiative gradient are computed and compared, which determines whether the shell is to be radiative or convective. During central H- and He-burning, the size of the core is artificially increased by a length corresponding to 10% of the local pressure scale height at the edge of the formal core (overshooting). This is not done for subsequent burning phases or for convective shells. In the internal convective zones, the thermal gradient is imposed to be adiabatic, and the chemical species are instantaneously mixed (so that the chemical composition of the convective zones is always homogeneous). In the envelope, the classical mixing-length theory <cit.> is used to obtain the thermal gradient, with a mixing-length parameter α = 1.6. For models more massive than 40 M_⊙, the mixing-length parameter is set to 1, and the mixing length is computed according to the density scale height instead of the pressure scale height.
Moreover, the turbulent pressure is accounted for by adding an acoustic flux term in the mixing-length formulation <cit.>. The Ledoux models are new models that are computed with the same physical ingredients as the Schwarzschild models, except that the μ-gradient is computed and accounted for when a shell is to be determined as radiative or convective. All the other ingredients are kept the same. In particular, the treatment of overshooting and of mixing is the same, and the mass-loss prescriptions are the same as well. Rotation is also treated in the same way in both sets of models. The full advection-diffusion equation for the transport of angular momentum is solved during the main sequence <cit.>. The horizontal diffusion coefficient that intervenes in this formalism was proposed by <cit.>, and the diffusion coefficient associated with the shear mixing was proposed by <cit.>. All the rotating models shown in this paper have an initial rotation rate υ_ini/υ_crit = 0.4, where the critical velocity υ_crit is computed as in <cit.>. Mass loss is applied to all of our models according to different recipes, which are summarised here <cit.>. For non-Wolf-Rayet (WR) stars hotter than log(T_eff) = 4.0, the mass-loss rates by <cit.> are applied. In this case, a correction factor accounting for the effects of rotation on the mass loss is also applied <cit.>. For models up to 9 M_⊙ and when log(T_eff) < 3.8, the <cit.> rates are used, with the parameter η = 0.6. For more massive models, the mass-loss rate is a linear fit of observational data from <cit.> and <cit.>. When the surface abundance of H drops below X_s < 0.3 and the effective temperature is above log(T_eff) = 4.0, we switch to WR-kind mass-loss rates <cit.>, unless they are lower than the <cit.> rates. Finally, we apply the <cit.> rates when the above recipes are not applicable. For massive RSGs, some external layers very close to the surface reach luminosities far above the Eddington luminosity. To facilitate the computation in this case, we increase the mass-loss rate described above by a factor of 3 when the Eddington luminosity is exceeded by a factor of 5, for models more massive than 20 M_⊙. § CONVECTIVE CRITERION AND INTERNAL STRUCTURE OF THE MODELS Changing the convective criterion impacts the sizes of the convective zones and thus the chemical structure of a star. Before discussing the impacts on the observable outputs of the stellar models using different criteria, we discuss here the direct impact on various internal properties of our stellar models. In Table <ref>, we present for each model the masses of the He and CO cores at the end of the central H- and He-burning phases, respectively, the maximum extent (in Lagrangian mass coordinates) of the intermediate convective zone (ICZ) before the beginning of core He-burning, and the deepest reach of the outer convective zone (OCZ) at different evolutionary stages. We define the cores in the following way: the He core is the region inside the mass coordinate where, scanning the stellar layers from the surface towards the interior, the mass fraction of helium first exceeds Y>0.9. The boundary of the CO core is where, for a similar scanning, Y first drops below 10^-2. The masses of the He cores at the end of the MS phase obtained with the two different criteria show very small differences. For non-rotating models, the differences are always smaller than 3%[We deduce this number by computing the Schwarzschild mass minus the Ledoux mass divided by the Ledoux mass.].
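The core definitions quoted above amount to a simple scan of the helium profile Y(m) from the surface inwards. The sketch below is our own illustration of that bookkeeping (the array layout and the toy profile are assumptions), not the routine used in the models.

```python
import numpy as np

def core_masses(m, Y):
    """He- and CO-core masses from a helium mass-fraction profile Y(m).

    m and Y are arrays ordered from the centre to the surface.
    The He core is bounded where, scanning from the surface inwards, Y first exceeds 0.9;
    the CO core is bounded where Y first drops below 1e-2.
    """
    m_he = m_co = 0.0
    for mi, yi in zip(m[::-1], Y[::-1]):   # scan surface -> centre
        if m_he == 0.0 and yi > 0.9:
            m_he = mi
        if yi < 1e-2:
            m_co = mi
            break
    return m_he, m_co

# toy profile: CO core below 2 Msun, He-rich layer up to 4 Msun, H-rich envelope above
m = np.linspace(0.0, 15.0, 151)
Y = np.where(m < 2.0, 0.005, np.where(m < 4.0, 0.98, 0.27))
print(core_masses(m, Y))   # approximately (3.9, 1.9) on this grid
```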
The same difference is obtained by comparing rotating models. These differences are blurred by other uncertainties pertaining to the models and are consistent with no effect due to the change in the convective criterion. This is consistent with the fact that in phases during which the convective core size decreases in mass, no differences are expected when the Schwarzschild or the Ledoux criterion is applied because there is no mean molecular weight gradient just above the core. The only exception is the case of the rotating 120 M_⊙ model. The core mass obtained with the Schwarzschild criterion (33 M_⊙) is much lower than the core mass obtained with the Ledoux model (55.6 M_⊙). The structure evolution of the two rotating 120 M_⊙ models shows intermediate convective shells that appear just above the core in the Schwarzschild model and quickly merge with the core. This grants more fuel to the core, prolongs the main sequence by ∼100 kyr, and causes greater mass loss from the star. Rotation increases the mass of the He core at the end of the MS phase. This increase is very similar for the Schwarzschild and Ledoux criteria. It is 5-6% for the 9 M_⊙ model, reaches a maximum at about 40 M_⊙, where it amounts to 26-27%, and then decreases because mass loss by stellar winds becomes important and thus blurs the effects of rotation. The rise in the He-core mass with initial mass is linked to the higher efficiency of rotational mixing when the initial mass increases <cit.>. Figure <ref> shows how the mass of the convective core evolves during the core He-burning phase in the non-rotating stellar models for initial masses between 9 and 15 M_⊙. The rotating cases (not shown here) exhibit very similar qualitative behaviours to those shown in Fig. <ref>: Their core masses increase during helium burning, and there are frequent breathing pulses[These breathing pulses are commonly agreed to be numerical artefacts and not of a physical nature.] that inject helium from the H-burning shell into the core. Interestingly, the convective cores are more massive in the 12 and 15 M_⊙ models when the Ledoux criterion is applied. At first sight, this appears to be counter-intuitive. A stricter criterion for instability (Ledoux) should produce smaller convective cores than a less strict criterion (Schwarzschild). However, changing the criterion also affects the sizes of other convective zones in the models, and the changes in these other convective zones in turn affect the size of the convective core (see below). The CO-core masses (see Table <ref>) present modest differences for masses below and including 15 M_⊙. The largest differences in the higher-mass range are due to differences in the mass lost by stellar winds. As expected, when the Schwarzschild criterion is used, larger intermediate convective zones associated with the H-burning shell appear. This has important consequences for the evolution of the He core, the occurrence of a blue loop <cit.>, and for the surface abundances of stars, which evolve back to the blue after having lost a large amount of mass during the red supergiant stage <cit.>. The ICZ tends to increase the contribution of the H-burning shell to the total luminosity (as it transports energy more efficiently), and as a result, the star reacts to this by decreasing the energy generated in the He-burning core. The Schwarzschild core has a lower central temperature than the Ledoux core and is therefore less dense.
The Schwarzschild core has the same extent in radius as the Ledoux core, but its mass is lower, as shown in the bottom panels of Fig. <ref>. A convective envelope does not appear in the high-mass range (above 60 M_⊙) as a result of mass loss, which prevents the stars from evolving in the red part of the Hertzsprung-Russell diagram (HRD). The outer convective zones for the 40 M_⊙ models are very small. The 32 M_⊙ initial mass is a transition case, without a convective envelope for the rotating models and with a significant envelope, reaching down to 25.5 M_⊙ for Schwarzschild and 16 M_⊙ for Ledoux, in the non-rotating case. For masses below and including 25 M_⊙, all the models have significant convective envelopes during the core He-burning phase. Models between 12 and 25 M_⊙ computed with the Ledoux criterion have deeper outer convective zones, but this is not the case for the 7 and 9 M_⊙ models (see Sect. <ref>). It might appear surprising that adopting a criterion that hinders convection can increase the extent of the outer convective zone. This extent also strongly depends on the position of the star in the HRD during the core He-burning phase, however: A hotter position during that phase decreases the extent of the outer convective zone. The position in the HRD at which core He-burning occurs is very sensitive to the variation in the abundances above the core as well as to the presence or absence of an intermediate convective zone. The intermediate convective zone typically tends to make the star more compact (i.e. blue), which then reduces the size of the convective envelope. In this manner, a criterion favouring convection (Schwarzschild) can produce a larger ICZ and consequently a less extended OCZ. § EVOLUTIONARY TRACKS AND LIFETIMES We show the evolutionary tracks for all models in Fig. <ref>. Main-sequence tracks are, as expected, similar in the Schwarzschild and Ledoux models. The main difference occurs in the rotating 120 M_⊙ models, as mentioned in Sect. <ref>. After the MS phase, the largest differences between the tracks (whether rotating or not) computed with the Schwarzschild and Ledoux criteria occur for the mass range between 7 and 40 M_⊙. In the upper mass range (between 60 and 120 M_⊙), stellar winds are the main factor governing the evolution of the stars. In the next paragraphs, we discuss in more detail how changing the convection criterion affects the first crossing of the Hertzsprung gap (Sect. <ref>), the blue loops (Sect. <ref>), the properties of the Cepheids (Sect. <ref>), the properties of stars ending their evolution as red supergiants (Sect. <ref>), and the properties of stars ending their evolution as yellow (YSG) or blue supergiants (BSG), or as Wolf-Rayet stars (Sect. <ref>). §.§ First crossing of the Hertzsprung gap Figure <ref> shows the evolution of the effective temperature as a function of the mass fraction of helium at the centre Y_c for all the models with initial masses between 7 and 40 M_⊙. Models with masses below and including 12 M_⊙ all begin their core He-burning phase when the star is a red supergiant, with log(T_ eff [K])∼ 3.6. For the 15-25 M_⊙ mass range, the Ledoux criterion favours a quick crossing of the Hertzsprung gap, whether the models are computed with or without rotation. This is likely to be related to the fact that the use of the Ledoux criterion suppresses the appearance and/or limits the extent of the intermediate convective zone associated with the H-burning shell, as shown in Fig.
<ref>, which shows Kippenhahn diagrams for the post-MS phases of the four 15 M_⊙ models. This tends to produce steeper gradients of H and He in that zone and favours helium ignition in the core in the red supergiant stage<cit.>. The models of 15 and 20 M_⊙ with the Schwarzschild criterion (as well as the non-rotating 25 M_⊙ model) show a different behaviour. These models ignite helium in their cores when the star still has a high effective temperature (logT_ eff [K] > 4.0), whether it rotates or not. This is linked to the larger intermediate convective zone in these models when the Schwarzschild criterion is used. A tentative explanation is that convection is more effective at transporting energy than radiation, and therefore, the presence of this ICZ allows the increased luminosity generated by the contraction of the core to reach the surface more easily. In the Ledoux models, however, energy is transported by radiation, and a larger fraction of it is deposited into the envelope itself, causing its rapid expansion and a drop in its luminosity. In both cases, the core contraction timescales are similar; the ICZ only affects the timescale of the envelope expansion. This means that at the onset of core helium burning, the Schwarzschild star remains in a blue part of the HRD.The cases of the more massive models (equal to or higher than 25 M_⊙) result from intricate interactions between convection, rotation, and mass loss. In general, helium ignition in the core of these models always occurs at an effective temperature below logT_ eff [K]=4.0. The only exception is the non-rotating 25 M_⊙ Schwarzschild model. The evolution of the luminosity during the first crossing varies depending on the convective criterion. In general, the Ledoux models for masses between 15 and 20 M_⊙ experience a stronger decrease in luminosity before going up the Hayashi line than Schwarzschild models because these models are in a stronger radiative disequilibrium than the Schwarzschild models. Because they expand more rapidly, a larger amount of energy per unit time is tapped from the gravitational contraction of the core to expand the envelope. This tends to decrease the amount of energy that is radiated away. This decrease is to some extent a consequence of the time taken to cross the Hertzsprung gap. If it occurs on a sufficiently long timescale, the decrease in luminosity is modest. When it occurs on a rapid timescale, the luminosity drop is marked. To summarise, the Schwarzschild criterion favours the apparition of an ICZ above the H-burning shell after the main sequence, and for the reasons already described just above, this favours He ignition in the core when the star is still in the blue part of the HRD. The crossing of the Hertzsprung gap occurs on a longer timescale (nuclear, about 500 kyr) than with the Ledoux criterion (thermal, about 30 kyr). With the Schwarzschild criterion, the envelope has more time to radiate the excess energy produced by the contraction of the core away, and the Hertzsprung gap crossing occurs at roughly constant luminosity. The Ledoux criterion suppresses the ICZ, implying a shorter timescale for the Hertzsprung gap crossing. A large part of the energy radiated by the core contraction is absorbed by the envelope. As a result, the luminosity at the surface decreases significantly during the crossing. §.§ Blue loops Fig. <ref> shows that for initial masses above 20 M_⊙ the effective temperature can increase during core He-burning. 
These phases are not considered as blue loops for two reasons, however. First, these stars end their lifetimes at high effective temperatures as BSG or WR stars. Second, this evolution is mainly driven by mass loss and not by some internal evolution due to changes in the hydrostatic structure. In the rest of this section, we thus focus on the mass domain between 7 and 15 M_⊙. For the lower-mass models, blue loops, characterised by an increase in effective temperature (blueward evolution) around Y_c∼0.4-0.3, appear and are followed by a decrease (redward evolution) at the end of core He-burning. The physics of the blue loops was discussed in detail in previous papers <cit.>. Many aspects of the stellar models influence the blue loops: for instance the rate of the ^14N(p,γ)^15O reaction <cit.>, of the ^12C(α,γ)^16O reaction <cit.>, the undershooting below the outer convective zone <cit.>, the helium and metal content <cit.>, and rotation <cit.>. Recently, the occurrence of blue loops was used to constrain the magnetic moment of massive neutrinos <cit.> or the impact of axions <cit.>. Stellar models accounting for these effects undergo an additional loss of energy, and this tends to suppress the blue loops. This demonstrates that if the evacuation of energy from the core is facilitated, then this favours the suppression of the blue loops. Here, we focus on only one aspect, namely the impact of the convective criterion.It has been shown that the blue loops may be linked to subtle differences in the abundance distributions near the H-burning shell, and that excess helium above the shell will suppress the blue loop <cit.>. <cit.> have proposed a criterion based on the gravitational potential of the core in order to decide whether a model produces a blue loop. Stars whose core has a gravitational potential higher than a given critical value remain along the Hayashi line, while those whose core has a gravitational potential lower than this critical value develop a blue loop. Interestingly, the quantity to be compared to the critical value is the core potential only if there is a steep chemical gradient just above the He core or near the H-burning-shell. Otherwise, the gravitational potential of the core has to be multiplied by a factor that is larger for a milder gradient, thus inducing the model to stay along the Hayashi line. The non-rotating 7 and 9 M_⊙ models computed with the Ledoux criterion show no blue loops, while the 7 and 9 M_⊙ models with the Schwarzschild criterion have well-developed loops. Because these two masses behave very similarly in the features of their overall evolution, we focus on the 7 M_⊙ models, and the results qualitatively apply to the 9 M_⊙ models. Figure <ref> shows the ^4He profiles at different times before and during core He-burning for the two non-rotating 7 M_⊙ models. In the Schwarzschild model, the outer convective zone has equalised the abundances from the surface down to a mass near 1.6 M_⊙, reducing the extent of the zone with a chemical composition gradient (see the line corresponding to ^4He_c where the gradient is between the mass coordinates1.3 and 1.6 M_⊙). This occurs at the very beginning of the core He-burning phase. In the Ledoux model, the base of the outer convective zone always remains above the mass coordinate of 2.1 M_⊙ and the zone of the chemical composition gradient is larger (from 1.3 to 2.1 M_⊙), thus milder, than in the Schwarzschild model. This acts to prevent the formation of a blue loop. 
The Ledoux model also has an excess of helium above the hydrogen-burning shell (at 1.7-2.1 M_⊙), which suppresses the blue loop <cit.>. The Ledoux models have this excess helium but the Schwarzschild models lack it because the mean molecular weight gradient above the hydrogen-burning shell stabilises this region against convection when the Ledoux criterion is used. As a result, the convective envelope does not reach down far enough to equalise the helium abundance to the surface value, leaving a helium excess above the shell.Models of 7 and 9 M_⊙ with rotation present less marked differences when the convective criterion is changed from the Schwarzschild to the Ledoux criterion. Rotational mixing blurs the differences that arise from different choices of the criterion for convection, and chemical composition gradients as well as helium abundances above the H-burning shell are similar for those models.For the non-rotating 12 M_⊙, we do not observe any blue loop or any difference between the Schwarzschild and the Ledoux models. The only 12 M_⊙ model showing a blue loop is the rotating Ledoux model. This model would thus predict the most luminous Cepheids of those discussed in this paper (excluding the 15 M_⊙ models that cross the Cepheid instability strip very briefly).§.§ Cepheid propertiesHere, we discuss results concerning stars that cross the Cepheid instability strip (shown as the grey shaded region in Fig. <ref> and in the small panels of Figs. <ref> and <ref>). The initial masses of these stars are between 7 and 15 M_⊙.Figure <ref> shows the durations of the different Cepheid phases. As is already well known, the first crossing is very short in all cases: It is much shorter than the blue loop when one occurs. As a result, most observed Cepheids should be undergoing a blue loop rather than crossing the Hertzsprung gap for the first time. For non-rotating models, the Schwarzschild criterion in general favours a longer duration of the Cepheid phase. The 7 and 9 M_⊙ Schwarzschild models undergo blue loops and the Ledoux models do not. Moreover, the 15 M_⊙ Schwarzschild models start core helium-burning at an effective temperature of log(T_ eff [K])∼4 and thus subsequently evolve on a helium nuclear timescale, whereas the Ledoux stars begin core helium-burning as red supergiants and thus cross on a much shorter Kelvin-Helmholtz timescale.For rotating models, the situation is less different between the Schwarzschild and Ledoux models, although the case of the 9 M_⊙ shows very significant differences: the Schwarzschild model spends 110 kyr as a Cepheid, while the Ledoux model spends only 20 kyr. We find no obvious correlation between the overall duration of the Cepheid phase and the stellar mass.Figure <ref> shows the surface abundances of ^14N before and during the blue loop. In non-rotating models, there is no nitrogen surface enrichment during the MS phase, and as a result, no model shows nitrogen enrichment during the first crossing. Non-rotating models show an increase in surface nitrogen when they go through the RSG stage, where some nitrogen is dredged up to the surface through convection. This dredge-up occurs before the blue loop and is thus visible during the blue loops for the 7 and 9 M_⊙ models. For rotating models, the impact of rotational mixing during the MS is apparent in the surface ^14N during the first crossing as it is larger than the initial value for all models. 
Interestingly, in the nitrogen surface abundance of the 9 M_⊙ Schwarzschild models, the blue-loop value obtained for the non-rotating model is equivalent to that given by the rotating model before the blue loop. This shows that the surface nitrogen abundance by itself is a poor indicator of the stellar phase (before or after the blue loop), unless its surface velocity is measured (see more on this point below). The nitrogen surface abundance increases with the initial mass, and it is slightly higher for Schwarzschild than for Ledoux models. It is higher in rotating models than in non-rotating models during the blue loop. Figure <ref> shows the surface velocities of rotating stars at the three moments described above, as well as at the hottest point of the blue loop. In the 9 M_⊙ Schwarzschild model, the surface rotation speed along the blue loop strongly increases during blueward evolution (from 10 km/s up to nearly 100 km/s or nearly 70% of the critical velocity). Interestingly, it decreases very fast during the redward evolution after reaching the hottest point of the blue loop. The strong surface acceleration arises because during the RSG phase, an extended convective envelope is present. This convective zone rotates as a solid body, which means that the angular momentum is highest at the outer border of the convective zone. When the star contracts, the angular momentum accumulated by convection in the outer layers produces a rapid acceleration. The star loses angular momentum by winds during the blue loop, and it has a larger radius after than before (the effective temperature is similar, but the luminosity has increased). As a result, when it evolves back to the red, its situation is not the same as before the blue loop: Its surface velocity is lower. The Ledoux model shows a qualitatively similar behaviour, but the different evolution of the outer convective zone makes the evolution of the surface rotation slightly different. §.§ Red supergiantsIn this and the next section, we examine the impact of the convective criterion on different subclasses of post-MS stars. These subclasses are defined in Table <ref>.Here, we discuss whether the convective criterion affects the maximum luminosity of red supergiants. Fig. <ref> shows that the maximum luminosity reached for red supergiants is not much affected by the convective criterion, but it is affected by rotation. Rotating models produce an upper limit for the RSG luminosity of about log(L [L_⊙])=5.4, while the non-rotating models extend this upper limit to values of about log(L [L_⊙])=5.7.When we now compare the upper luminosity of red supergiants that remain red until the end of their evolution and thus will be progenitors of a type II SN, then the upper limits are lower than those in the previous paragraph because mass loss by stellar winds induces a blueward evolution and thus produces yellow or blue progenitors in the higher-luminosity range (see Sect. <ref>). This upper limit in luminosity for red supergiant core-collapse progenitors is about log(L [L_⊙])=5.2 for the non-rotating Schwarzschild models, log(L [L_⊙])=4.95 for the non-rotating Ledoux models, log(L [L_⊙])=5.1 for rotating Schwarzschild models, and log(L [L_⊙])=5.05 for the rotating Ledoux models.We note that the models that spend a larger fraction of their core He-burning phase in the red supergiant phase have slightly lower upper limits. This is expected because their mass is reduced by strong RSG winds for a longer duration. 
This tends to lower the minimum mass above which stars later evolve blueward. For instance, in the non-rotating case (with the largest difference in maximum RSG luminosity), the most luminous Schwarzschild RSG has an initial mass of 20 M_⊙ while for the Ledoux RSG, the initial mass is 15 M_⊙ because the 20 M_⊙ model becomes blue after its mass-loss episode.§.§ Yellow and blue supergiants, luminous blue variables, and Wolf-Rayet stars The mass domain between 20 and 40 M_ is a transition domain between the stars that evolve into the red supergiant stage and remain a red supergiant until the end of their lifetimes and the stars, for instance the 60 M_ models, that never evolve into a RSG, but become a luminous blue variable before evolving into a WR phase. In this transition mass domain, the star can end its evolution as a red, yellow, or blue supergiant, or become a Wolf-Rayet star (in the latter case after having been a red supergiant for a while). The tracks show a complex behaviour resulting from intricate effects involving convection, rotational mixing, and mass loss. Below an initial mass M_ RSG∼15M_⊙, stars end their lives as red supergiants (see Sect. <ref>). Stars initially between M_ RSG and M_ RSG-WR∼20-25M_⊙ end their lives as a yellow or blue supergiants. For initial masses between M_ RSG-WR and M_ WR∼40M_⊙, stars end their lives as Wolf-Rayet stars after a red supergiant stage. We note that this mass range of stars that experience both an RSG and a WR phase is expected to be narrow because at the moment, apart from Westerlund-1 <cit.>, there is no single-age stellar population in which both red supergiants and Wolf-Rayet stars are observed at the same time. Finally, stars with initial masses above M_ WR enter the WR phase without previously becoming a RSG.These limits appear to be more sensitive to rotation than to the convective criterion (at least for the initial rotation speed considered here). In general, rotation tends to decrease the limits M_ RSG-WR and M_ WR (see more below). Because M_ WR is shifted to lower values than M_ RSG-WR in rotating models, this implies that rotation suppresses or at least disfavours the production of WR stars with RSG progenitors (at least for a single-star evolution).Mass loss by stellar winds (which increases when the luminosity increases and the effective temperature decreases) begins to become a dominant feature in this mass domain (it becomes an even more important feature for higher initial masses). As a numerical example, the 20 M_⊙ models lose significant amounts of mass during the post-MS phase (see Table <ref>), ending core He-burning with a total mass of about 7-9 M_⊙ (for comparison, the 15 M_⊙ models reach the end of He-burning at around 11-13 M_⊙). The masses of the 25 M_⊙ models at the end of He-burning are about 9-10 M_⊙ (this is higher than for the 20 M_⊙ models, but more mass has been lost by the 25 M_⊙ models). The 32 and 40 M_⊙ models experience even stronger mass loss (losing up to 22 and 28 M_⊙ for the 32 and 40 M_⊙ models, respectively). They all end core He-burning at high (logT_ eff [K]>4.4) effective temperatures, and some of them lose so much mass that they become Wolf-Rayet stars. While all the stars including and above 20 M_⊙ lose large amounts of mass, the mass of the core remains an increasing function of the initial mass. In other words, the 20 M_⊙ models reach the end of He-burning with a lower total mass than the 15 M_⊙ models, but their CO cores are more massive (3.65-4.44 M_⊙ compared with 2.11-2.66 M_⊙). 
Furthermore, the models between 20 and 40 M_⊙ experience much stronger surface nitrogen enrichment than those below 20 M_⊙. The mass domain above this transition mass range is dominated by mass loss. Not many differences are therefore visible between Schwarzschild and Ledoux models. The most noticeable difference occurs for the 120 M_⊙ models, as discussed in Sect. <ref>. Figure <ref> shows the durations of the different Wolf-Rayet phases (stacked in chronological order) for the masses where at least one of the four models reaches a WR phase. A first striking difference appears between the rotating and non-rotating models. Rotation produces longer WR phases. This is in line with previous studies <cit.>. The time spent as WR stars is longer for rotating than for non-rotating models because rotation drives additional mass loss and mixing. More mixing implies that less mass loss is required to uncover hydrogen-poor layers. Rotation also favours the WNL phase, decreases the WNE and WC durations, and prevents the WO phase. Interestingly, rotation increases the WNC-phase duration. Above and including 60 M_⊙, the Schwarzschild and Ledoux models spend similar times (within 6%) as Wolf-Rayet stars because mass loss is the dominating effect here. This agrees with the fact that the differences between Ledoux and Schwarzschild models are mostly due to the ICZ. They do not appear in the most massive stars due to the strong mass loss. As a result, the choice of criterion for convection intervenes as a second-order effect. For the mass range between 25 and 40 M_⊙, the differences induced by changing from the Ledoux to the Schwarzschild criterion are more marked. Without rotation, the Ledoux criterion produces Wolf-Rayet stars for lower initial masses than the Schwarzschild criterion. These Ledoux models spend a larger fraction of their core He-burning phase as an RSG than the Schwarzschild models. As a result, mass loss is stronger for the former, and they reach the WR phase earlier (so that the WR phase lasts longer). § FINAL PROPERTIES OF THE MODELS §.§ Core masses Figure <ref> shows the CO-core masses and the mass ratios of the He and CO cores as functions of the initial mass for the four sets of models at the end of their evolution. The masses of the cores are defined as in Table <ref> (here, we discuss their values at the last computed stage, however). The ratio of the mass of the He core to that of the CO core decreases from 2 to roughly 1.3 between 7 and 32 M_⊙ and then remains constant around 1.2-1.3, indicating that the He-burning shell sits relatively farther out from the C-burning core in lower-mass models than in higher-mass models. The differences among models of the same mass in the left panel of Fig. <ref> show that rotating models have a 10% larger core on average (except for the rotating 32, 40, and 120 M_⊙ Schwarzschild models) than non-rotating models. There is no clear trend in the differences between Schwarzschild and Ledoux models, however. The impact of rotation is stronger than that of changing the convective criterion. The strongest impact of changing from the Ledoux to the Schwarzschild criterion appears for the 7 and 9 M_⊙ stellar models. The ratio of He- to CO-core mass can vary from 1 to 2, depending on the criterion. This mass domain covers the transitions between stars that would produce white dwarfs and those that would produce neutron stars at the end of their lifetimes. This transition has been studied for instance by <cit.>.
This final fate depends on whether ignition of carbon occurs in degenerate, mildly degenerate, or non-degenerate conditions. A star whose central region is for a large part in mildly degenerate conditions can still succeed in igniting (off-centre) carbon, but this requires a more massive CO core than in non-degenerate conditions. It is therefore expected that a larger part of the central regions for models with the lowest ratio of the He- to the CO-core mass lies in the degenerate domain. Only models that ignite carbon in non-degenerate or very mildly degenerate conditions (i.e. the rotating 9 M_⊙ models) show high ratios of the He to CO core. §.§ Properties of the compact object progenitorsWe list integrated chemical abundances (in mass fractions) of ^1H, ^4He, ^12C, and ^16O in the models at the end of core C-burning (core He-burning for stars that do not reach the end of C-burning) in Table <ref>. This moment is close enough to the end of evolution that the abundances of the isotopes shown in the ejecta will not change dramatically <cit.>. Furthermore, explosive nucleosynthesis during the supernova is not expected to significantly affect the yields of these four elements. We computed the remnant types and masses using the models from <cit.>, whose predictions concerning the remnant type (neutron star or black hole) were based on the `Ertl criterion' <cit.>. For each star, we took the mass of the CO core M_ CO (defined, like in Table <ref>, as the region within which the mass fraction of helium Y<10^-2) and the mass fraction of ^12C in the core X_ ^12C at the end of helium-burning. When M_ CO<1.4 M_⊙, then we considered the remnant to be a white dwarf with a mass of M_ WD=M_ CO. When 1.4 M_⊙<M_ CO<2.5 M_⊙, the remnant is a neutron star with a mass of M_ NS=1.4 M_⊙. When 2.5 M_⊙<M_ CO<10 M_⊙, we consulted the top panel of Fig. 3 of <cit.> to determine whether the star will explode or implode (without ejecting anything). If it explodes, then we estimated the baryonic[We do not provide the gravitational masses of the neutron stars. One can compute them using the equation of state of their choice. One example for a relation between baryonic and gravitational mass can be found in Eq. (39) of <cit.>. Typically the gravitational mass of a neutron star will be 10-20% smaller than its baryonic mass.] mass of the resulting neutron star to be M_ NS = M_4, where M_4 is defined in <cit.>. If it implodes, then the mass of the resulting black hole is M_ BH =M_ fin. Finally, if 10 M_⊙<M_ CO<30 M_⊙, the outcome is a black hole, with M_ BH = M_ fin. This equality between the black hole mass and the final mass of the star may be an overestimation, but the underlying assumption is that the entire star collapses into the black hole and that no matter is ejected. Any value between M_ CO and M_ fin would be a reasonable estimate for M_ BH. The case M_ CO>30 M_⊙ could yield a pulsational or regular pair-instability supernova, but this applies to none of our stars. We then integrated the abundances of ^1H, ^4He, ^12C, and ^16O above the remnant (between its mass coordinate and the surface). If the remnant is a black hole, we list the integrated chemical abundances above the mass coordinate of the CO core. We grouped all other elements into the quantity 𝒵 (usually, the metallicity Z includes ^12C and ^16O, and therefore, we used the calligraphic 𝒵= 1-X-Y - X(^12C) - X(^16O)). 
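The procedure described in this paragraph can be summarised as a small decision tree. The sketch below is our own paraphrase; the explodability flag and the M_4 mass stand in for the outcome of the 'Ertl criterion' lookup in the cited CO-core models and are placeholders, not an implementation of that criterion.

```python
def remnant(m_co, m_fin, explodes=None, m4=None):
    """Classify the compact remnant from the CO-core mass (all masses in solar units).

    explodes and m4 are placeholders for the explodability flag and the M_4 mass
    coordinate taken from the cited CO-core models; they are not computed here.
    """
    if m_co < 1.4:
        return "white dwarf", m_co
    if m_co < 2.5:
        return "neutron star", 1.4
    if m_co < 10.0:
        if explodes:
            return "neutron star", m4      # baryonic NS mass set to M_4
        return "black hole", m_fin         # implosion: whole final mass collapses
    if m_co < 30.0:
        return "black hole", m_fin
    return "pair-instability regime", None  # not reached by any model in this grid

print(remnant(1.2, 5.0))                   # ('white dwarf', 1.2)
print(remnant(4.1, 12.3, explodes=False))  # ('black hole', 12.3)
```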
We also provide the mass, effective temperature, and luminosity of the final models.Using the criteria defined above on M_ CO, as well as the grid of CO cores with varying ^12C core mass fractions computed by <cit.>, we can predict the type of compact object that would remain after the end of stellar evolution. All of the 7 and 9 M_⊙ models would yield white dwarfs (WD in Table <ref>). Of the 12, 15, and 20 M_⊙ models, the rotating 15 M_⊙ Schwarzschild model would become a black hole, while all the others would become neutron stars. In the 25 M_⊙ stars, the Ledoux models become black holes and the Schwarzschild models become neutron stars. All stars 32 M_⊙ and above become black holes. The composition of the envelopes shows that, as expected for stars that reach the WR phase, the models above 20 M_⊙ do not have any ^1H at the end of their evolution. More generally, the integrated abundances of both ^1H and ^4He decrease with increasing initial mass.Conversely, the quantity of metals (including ^12C and ^16O) in the envelope is an increasing function of the initial mass. We find no striking consistent difference between the Schwarzschild and Ledoux models. The differences mostly concern the integrated ^12C and ^16O abundances: in some cases, one model produces more ^12C and less ^16O than its counterpart, but there is no clear-cut effect.The case of the 25 M_⊙ models is interesting because the two Ledoux models are predicted to become black holes and the Schwarzschild models to become neutron stars. There is a striking difference in the compositions of their envelopes: the former are much richer in ^4He (Y_ env of 0.58 and 0.74 against 0.20 and 0.33 for the Schwarzschild models). The latter, conversely, are much richer in ^12C and ^16O (with the most marked differences being in the ^16O abundance: X(^16O)_ env of 0.525 and 0.417 against 0.188 and 0.105 for the Ledoux models).In the 32 and 40 M_⊙ models, the rotating 32 M_⊙ and non-rotating 40 M_⊙ Schwarzschild models present very different ^12C and ^16O abundances from the other models of the same mass. The former is much richer in ^12C and ^16O (and thus poorer in ^4He) than the other 32 M_⊙ models; the latter is much poorer in ^12C and ^16O (and thus richer in ^4He) than the other 40 M_⊙ models.From the ^1H and ^4He content of the models at the last computed stage, we can infer the type of core-collapse supernova that is expected to occur for all the exploding models. All the models of 12 and 15 M_⊙ contain significant amounts of ^1H (about 50% of the envelope mass), and therefore, we expect type II SNe from them (except for the rotating Schwarzschild 15 M_⊙ model, which we expect to directly collapse into a black hole). Of the 20 M_⊙ stars, only the non-rotating Schwarzschild model is expected to lead to a type II SN because it still contains a non-negligible amount of ^1H. We expect the other 20 M_⊙ models to die in type Ib SNe because they have no ^1H left, but still contain ^4He. The same holds for the 25 M_⊙ Schwarzschild models, for which we expect an explosion.Finally, we theoretically predict the remnant masses. The most massive neutron star comes from the non-rotating 20 M_⊙ Schwarzschild model, with a baryonic mass M_ NS=1.77M_⊙. The least massive black hole results from the non-rotating 25 M_⊙ Ledoux model, with M_ BH=9.3M_⊙. Acknowledging that taking M_ BH = M_ fin may be an overestimation of the black hole mass, we can estimate the absolute minimum predicted black hole mass from that of the CO cores (taking M_ BH = M_ CO). 
In this case, the 15 M_⊙ rotating Schwarzschild model yields the lowest-mass black hole, with M_ BH=2.68M_⊙. § STELLAR POPULATION SYNTHESIS §.§ The population synthesis models The population synthesis results we present in this work were computed using the Syclist code <cit.>, using the stellar models from <cit.>, and from <cit.> and <cit.> as input. We added all the other Ledoux models that we computed for the current paper in order to cover masses from 7 to 120 M_⊙. Syclist generates a stellar population with initial masses sampled according to the Salpeter IMF, and it saves the state of the population at each requested time step. To mimic continuous star formation rates, we summed the populations at each time step, and we then counted stars and classified them into subtypes according to some of their properties. The subtypes are defined in Table <ref>. We generated two types of clusters. One type consisted of a burst of star formation at t=0, and the other type had continuous star formation for 60 Myr. The second type allowed us to simulate a stationary state and derive equilibrium relative abundances of stellar subtypes. We also generated isochrones of our populations to compare them to the observed populations of evolved massive stars in Westerlund-1. §.§ Case of a burst of star formation Figure <ref> shows the number of blue and red supergiants (top panels) and Wolf-Rayet stars (bottom panels) alive at each time in the case of a burst of star formation, normalised by the initial number of stars. For non-rotating stars, the Ledoux clusters produce far fewer blue and many more red supergiants than the Schwarzschild clusters. This is attributable to the different crossing durations of the Hertzsprung gap above 15 M_⊙ (the Schwarzschild models start core He-burning as BSGs, the Ledoux models start as RSGs), and to a larger extent (because of the IMF slope) to the blue loop behaviour, which is present in Schwarzschild stars between 7 and 12 M_⊙ but is absent in Ledoux models. This is not so much the case for rotating stars because, as mentioned previously, rotation tends to mitigate the differences between the two sets of models, and the blue loop behaviour is much more similar in rotating models. The difference in these models occurs just before 20 Myr, which is when the Ledoux 12 M_⊙ model undergoes its blue loop while the Schwarzschild model does not. Finally, we note that the non-rotating Wolf-Rayet populations (especially the WNL stars) are quite different, with Ledoux clusters clearly hosting more WR stars than Schwarzschild clusters. This is expected because the WR stage is reached for non-rotating Ledoux stars of lower masses (down to 25 M_⊙), and it lasts longer for these lower masses (see the left panel of Fig. <ref>) than in the Schwarzschild models. WNE, WNC, and WO type stars are very rare in both types of clusters, and the differences in WC number densities are not significant. §.§ Stationary regime populations We now discuss populations in the stationary regime. Table <ref> shows the number ratios of different classes of stars in this equilibrium state, where we simulated populations with a constant continuous star formation episode lasting 60 Myr. The ratios of post-MS to MS stars are very similar in Schwarzschild and Ledoux models: we find differences of 5% (non-rotating) and 9% (rotating), and the Schwarzschild post-MS to MS ratio is higher than the Ledoux ratio because Ledoux models have a slightly longer main sequence and shorter helium-burning phases.
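To make the construction of these equilibrium ratios concrete, the following schematic sketch shows how a stationary population can be approximated by summing burst populations of all ages and then counting subtypes. The Salpeter sampling follows the standard slope, while the classify function is a placeholder for the Syclist machinery; none of this is taken from the actual Syclist code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_salpeter(n, m_min=7.0, m_max=120.0, alpha=2.35):
    """Draw n initial masses (Msun) from a Salpeter IMF via inverse-transform sampling."""
    u = rng.random(n)
    a, b = m_min**(1.0 - alpha), m_max**(1.0 - alpha)
    return (a + u * (b - a))**(1.0 / (1.0 - alpha))

def stationary_counts(classify, ages_myr, n_per_burst=100_000):
    """Approximate continuous star formation by summing burst snapshots of all ages.

    classify(m, t) should return the current subtype of a star of initial mass m
    at age t (e.g. 'MS', 'BSG', 'RSG', 'WR', ...) or None if the star is dead;
    in practice this information comes from the evolutionary tracks.
    """
    counts = {}
    for t in ages_myr:                          # e.g. fine time steps up to 60 Myr
        for m in sample_salpeter(n_per_burst):
            subtype = classify(m, t)
            if subtype is not None:
                counts[subtype] = counts.get(subtype, 0) + 1
    return counts

# counts["BSG"] / counts["RSG"] would then give an equilibrium number ratio.
```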
The ratios of BSGs toRSGs are higher for the Schwarzschild populations regardless of the rotation scheme, although the difference is more marked for populations of non-rotating stars (the BSG to RSG ratio is almost five times higher for Schwarzschild than for Ledoux populations); this echoes what we mentioned in Sect. <ref>, and we attribute this difference to the different blue loop behaviour and to the durations of the Hertzsprung gap crossing.The WR to RSG ratio is quite interesting in the non-rotating case. Ledoux populations produce more RSGs, but also more WR stars than Schwarzschild populations. The overproduction is higher for the WR stars, as WR/RSG_ Ledoux >WR/RSG_ Schwarzschild. Finally, while the populations produce similar numbers of WC stars (see the lower panels of Fig. <ref>), the ratio of WC to WN stars is three times higher for the non-rotating Schwarzschild population because the Ledoux population contains more WN(L) stars.Figures <ref> (non-rotating) and <ref> (rotating) show density maps in the HRD of populations generated with continuous star formation. The densities are higher in the lower parts of the diagram because more lower-mass stars are generated due to the Salpeter IMF we used. The comparison of populations both with and without rotation shows that the MS regions are the same for the Schwarzschild and Ledoux models. We note for the non-rotating populations that they contain a region of higher density that corresponds to the blue loops of the Schwarzschild models, but this region is absent from the Ledoux models. We also see differences in the supergiant stars above log(L [L_⊙])∼4.5 because the 20 to 40 M_⊙ models experience a blue and/or redward evolution in diverse parts of the HRD. For the rotating populations, the blue loop stars are present in both sets of models, but their high-density regions extend to lower surface temperatures for the Ledoux models (logT_ eff [K]∼3.85) than for the Schwarzschild models (logT_ eff [K]∼3.95). This would not be noticeable from the HRD tracks alone (the loops even extend farther bluewards for the rotating Ledoux stars), which means that the Ledoux models spend less time at the blue edge of the loop than the Schwarzschild models do. Fig. <ref> shows that the higher-density region of the blue loops corresponds to the hottest point along the loop of 7 M_⊙ Schwarzschild stars, but the equivalent point of the Ledoux-star loop extends to a higher temperature. The high-luminosity part of the blue-loop region reaches higher luminosities for the Ledoux models (log(L [L_⊙])∼4.6) than for the Schwarzschild models (log(L [L_⊙])∼4.3) because the threshold for blue loops is between 12 and 15 M_⊙ for the former and between 9 and 12 M_⊙ for the latter. We also see differences in the higher luminosity supergiants, which we explain in a similar way to those of the non-rotating models.§ COMPARISON WITH PREVIOUS WORKS AND OBSERVATIONS§.§ Comparison with previous theoretical works While this paper presents the first grid that spans such a wide mass range in the comparison of Schwarzschild and Ledoux models at solar metallicity, with and without rotation, it is not the first to study the impact of various convective parameters on stellar evolution. In this section, we compare our conclusions with a few such previous theoretical works. <cit.> used MESA <cit.> to compute models between 9 and 100 M_⊙, with and without rotation, at SMC metallicity (Z=0.002). 
They used the Ledoux criterion and varied parameters related to the strength of overshooting and efficiency of semiconvective mixing. The models relevant to our comparison are those with overshooting parameter α_ ov=0.11, and the most extreme values of the semiconvection parameter α_ sc = 0.01 and 300 (or 100 because they discussed more results related to that value). They found that very efficient semiconvection (large α_ sc, Schwarzschild in the current paper) leads to more time being spent by stars as blue supergiants. For instance, these models start to burn helium in the core at a higher effective temperature than those with inefficient semiconvection. Fig. 2 (top panel for α_ ov=0.11) of <cit.> shows that blue loops occur in the lower-mass range for efficient semiconvection, but not when it is inefficient. All of these observations lead to a higher ratio of blue to red supergiants when semiconvective mixing is efficient. Qualitatively, this agrees very well with what we find. The specific values for the mass range of the blue loops, the effective temperatures at the beginning of core helium burning, and the ratio of blue to red supergiants differ from what we predict, but that is to be expected because their models were computed at SMC metallicity (which is almost an order of magnitude lower than ours). <cit.> also used MESA to compute non-rotating models of 15, 20, and 25 M_⊙ at solar metallicity. They computed models with both the Schwarzschild and Ledoux criteria and used a free parameter f_ CBM, where CBM stands for convective boundary mixing, to vary the amount of additional mixing at the convective boundary (larger f_ CBM corresponds to more mixing). This parameter intervenes in a diffusive model rather than the penetrative overshoot that we used in this study. They found that the main-sequence evolution is not affected by the choice of criterion for convective stability as long as CBM is included (in our case, as long as overshooting is included). They also discussed the intermediate convective zones and found that ICZs that are larger and remain for longer mean that stars continue to be blue until the ICZ recedes. Their stars computed with the Schwarzschild criterion have stronger (meaning larger and longer-living) ICZs than those computed with the Ledoux criterion. As a result, their Schwarzschild stars start core helium-burning as blue supergiants and the Ledoux stars start as red supergiants. Their 15 M_⊙ Ledoux models also cross the HRD much faster after the main sequence, and this is correlated with a drop in surface luminosity. Overall however, their results are more affected by the value of f_ CBM than by the choice of Schwarzschild and Ledoux criterion. <cit.> performed a 3D hydrodynamical simulation of a convective zone adjacent to a semiconvective (Ledoux-stable but Schwarzschild-unstable) region. They found that overshooting mixes the semiconvective region, increasing the size of the convective zone by a process they called entrainment. After a few thousand convective-overturn times, the Ledoux and Schwarzschild criteria predict the same convective boundary. The consequence is that as the title of <cit.> states, `Schwarzschild and Ledoux are equivalent on evolutionary timescales'. This is the case during the main sequence (when the evolutionary timescale is much longer than the convective-overturn timescale). 
However, when the two timescales are of similar order, Ledoux predicts the instantaneous location of the boundary, whereas Schwarzschild provides its location in a stationary state. This agrees with what we find: that the largest differences are introduced during the post-MS expansion, which occurs on a rapid timescale. §.§ Comparison with observationsWe briefly present here a comparison of our four sets of models with the evolved massive stars in the Westerlund-1 (Wd1) cluster. Wd1, at a distance of ∼4 kpc <cit.>, is the best-studied young cluster in the galactic disk,and it contains both RSG, YSG, and WR stars. Its metallicity is unknown because the gas associated with its formation has been dispersed. Based on its age, however, we may expect that its metallicity could be higher than that in our study (which is why <cit.> used it as a comparison for their super-solar metallicity (Z=0.020) models). We nevertheless chose to also compare our models to Wd1 because it is the best-studied cluster containing both cool supergiants and Wolf-Rayet stars. We again used SYCLIST <cit.>, this time, to generate isochrones for our four grids of models. We used the data from <cit.> for the observed evolved stars in Wd1.Figure <ref> shows isochrones for our four sets of grids and the observed data of evolved massive stars in Wd1. While no isochrone perfectly fits the observed distribution of stars, we find that the non-rotating Ledoux isochrone at an age of 6.3 Myr best incorporates the presence of RSG and WR stars, as well as the spread in luminosity of the RSGs. The 10 Myr Ledoux isochrones (both rotating and non-rotating) fit the observed YSGs. We recall the possibility that a cluster formation event lasted a few million years, typically between 10 and 6 Myr ago, and that real stellar populations have varied rotation rates. Moreover, multiple star evolution and cluster dynamics will impact the observable properties of stars and induce a scatter in their luminosities, and this can explain some of the observed data.§ CONCLUSIONS AND DISCUSSIONIn the above sections, we discussed the possible effect of changing the convective criterion on stellar model outputs with and without rotation.In many aspects, the effects appear to be relatively minor compared to those of rotation or mass-loss. We summarise the main results from this work below.* The main-sequence phase is not much affected by the choice of the convection criterion. * The Schwarzschild criterion does not necessarily lead to more extended convective zones. It does facilitate the formation of an extended intermediate convective zone at the end of the main sequence (especially for stars with initial masses between 15 and 32 M_⊙), however, which gives rise to significant differences in the post-MS evolution of these stars. * One such difference occurs during the first crossing of the HRD, where stars computed with the Schwarzschild criterion can start core He-burning as BSG, whereas the Ledoux stars start as RSG. This has important implications in the subsequent mass loss of the models. This in turn affects the ratio of blue to red supergiants as well as the way in which case B mass transfer occurs in close binary systems. Because the Schwarzschild criterion causes longer crossing times, it predicts more nitrogen-poor Cepheids (those undergoing their first crossing of the HRD) than the Ledoux criterion. Observations of nitrogen-poor Cepheids would definitely tip the scale in favour of the Schwarzschild criterion (or efficient semiconvection). 
* The occurrence of blue loops is affected by the choice of a Schwarzschild and Ledoux criterion: non-rotating 7-9 M_⊙ Ledoux stars do not have them, but Schwarzschild stars do. This is probably due to different chemical profiles and helium abundances near and above the H-burning shell, and it also affects the ratio of blue to red supergiants. * The duration of and the surface velocities (for rotating stars) during the Cepheid phase are influenced by the choice of the convection criterion. The Schwarzschild criterion predicts longer-lasting and faster-rotating Cepheids than the Ledoux criterion. The Ledoux criterion predicts the most massive Cepheids. * Red supergiants computed with the Ledoux criterion reach a lower maximum luminosity than those computed with the Schwarzschild criterion. * The impact of the criterion for convection on stars with initial masses between 20 and 40 M_⊙ is complex because of the interplay with rotation and mass loss. For instance, the Ledoux criterion produces non-rotating WR stars for lower initial masses than the Schwarzschild criterion, but the inverse occurs for rotating stars. * The impact of changing the convective criterion on stars whose initial masses are above 60 M_⊙ is modest because their evolution is dominated by mass loss. * The main differences relative to the endpoint of stellar evolution occur for stars with initial masses of 25 to 40 M_⊙. For the 25 M_⊙ the Ledoux criterion predicts the final compact objects to be black holes, while the Schwarzschild criterion predicts neutron stars. If these stars still undergo a supernova, the Ledoux criterion predicts much stronger helium signatures than the Schwarzschild criterion because the amount of helium in their envelope is larger. * Our population synthesis models imply different number ratios of evolved stars depending on the convection criterion, especially for non-rotating stars: The Schwarzschild criterion predicts much higher ratios of BSGs to RSGs and WCs to WNs than the Ledoux criterion. Conversely, the Ledoux criterion predicts a higher ratio of WRs to RSGs.We recall, however, that the comparison presented in this work was made for a given choice of mass-loss rates, step overshoot, physics of rotation, and metallicity, and only for single-star evolution. Without performing detailed evolutionary calculations, it is not possible to know how the picture we presented in this work would change when one, a few, or all of the physical ingredients would change. The authors would like to thank the anonymous referee for their very insightful comments which greatly improved the quality of this manuscript. The authors have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 833925, project STAREX). aa
http://arxiv.org/abs/2310.18139v1
{ "authors": [ "Yves Sibony", "Cyril Georgy", "Sylvia Ekström", "Georges Meynet" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20231027134100", "title": "The impact of convective criteria on the properties of massive stars" }
GREMAN UMR 7347, Université de Tours, CNRS, INSA-CVL, 16 rue Pierre et Marie Curie, 37071 Tours, France Electronic Ceramics Department, Jožef Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia Jožef Stefan International Postgraduate School, Jamova cesta 39, 1000 Ljubljana, Slovenia In this work, the effects of thermal annealing at 500 on aerosol-deposited 0.65Pb(Mg_1/3Nb_2/3)O_3-0.35PbTiO3 thick films on stainless-steel substrates are investigated using two complementary methods at high and low applied external electric fields. The first one is the Positive Up Negative Down method, which allows us to obtain information about the switching and non-switching contributions to the polarization. It shows that the as-deposited film is ferroelectric before annealing, since it has a switching contribution to the polarization. After annealing, the switching and non-switching contributions to polarization increased by factors of 1.6 and 2.33, respectively, indicating stronger ferroelectric behavior. The second method is based on impedance spectroscopy coupled with Rayleigh analysis. The results show that post-deposition thermal annealing increases the reversible domain wall contribution to the dielectric permittivity by a factor of 11 while keeping the threshold field similar. This indicates that, after annealing, the domain wall density is larger while the domain wall mobility remains similar. These two complementary characterization methods show that annealing strengthens the ferroelectric behavior of the thick film by increasing the domain wall density, and its influence is visible both in the polarization versus electric field loop and in the dielectric permittivity. a) These authors contributed equally to this work. b) Author to whom correspondence should be addressed: mailto:[email protected]@univ-tours.fr c) The following article has been accepted by Applied Physics Letters. It can be found at https://doi.org/10.1063/5.0087389 Effect of thermal annealing on dielectric and ferroelectric properties of aerosol-deposited 0.65Pb(Mg_1/3Nb_2/3)O_3-0.35PbTiO3 thick films Hana Ursic (ORCID: 0000-0003-4525-404X) January 14, 2024 Thanks to their large relative dielectric permittivity and polarization, relaxor-ferroelectrics are promising materials for energy storage devices. Even though ferroelectric capacitors have lower energy density than batteries and supercapacitors, they can charge/discharge under large currents and can be integrated into compact pulsed-power and power-conditioning electronic devices using thin or thick films <cit.>.
One of the most promising relaxor-ferroelectrics is 0.65Pb(Mg_1/3Nb_2/3)O3-0.35PbTiO3 (PMN–35PT) ceramics with a morphotropic phase boundary composition that exhibits excellent dielectric, piezoelectric and ferroelectric properties.<cit.> The recently developed aerosol deposition method has opened new opportunities in device engineering of rapidly advancing thick film technologies. The advantage of aerosol deposition is the deposition of highly dense and crack-free thick films at room temperature.<cit.> Since no high temperature processing is required, material compatibility is superior and allows integration of ceramics with new substrates that are morphologically unstable at high temperatures, such as polymers and metals.<cit.> In the aerosol deposition method, the powder containing micrometre-sized particles is accelerated to velocities between 150 and 500 under vacuum conditions before hitting the substrate <cit.>. Therefore, film deposition and growth occur as a result of sufficiently high kinetic energy of the impacting particles, which is converted into fracture energy, consolidation and plastic deformation <cit.>. The aerosol deposition produces ceramic thick films with properties inherently different from conventional ceramics prepared by sintering. After aerosol deposition, the thick films have high density (over 95 of the theoretical density) <cit.>, nano-sized pores <cit.>, and good adhesion to the substrate. The typical microstructure of aerosol-deposited ceramic films shows grains reaching only a few tens of nanometers in diameter <cit.>. The grain size of the ceramics is known to strongly influence the dielectric and piezoelectric properties of ferroelectric material<cit.>. Moreover, Damjanovic et al. also show that the irreversible contribution of domain walls is smaller for small grains than for coarse grains <cit.>. In addition, the impacts generate internal compressive stresses that can be on the order of several hundred MPa to several GPa <cit.>. Both characteristics, reduced grain/crystallite size and internal stresses, can be especially detrimental for as-deposited ferroelectric thick films and their functional properties (e.g. dielectric, piezoelectric and ferroelectric properties). To improve the functional properties, ferroelectric thick films are often subjected to a moderate thermal annealing. Already at temperatures of 500, a significant stress-relaxation occurs, which is believed to be responsible for the enhancement of the ferroelectric response <cit.>. Electrical characterization methods of ferroelectric materials can generally be divided into high (supercoercive) field and low (subcoercive) field measurements. A high-field range is considered when the applied electric field is three times the coercive field, and a low-field range when the applied electric field is less than half the coercive field<cit.>. The Positive Up Negative Down (PUND) method <cit.> complements conventional polarization versus electric field (P(E)) loop measurements by deconvolving the switching contribution. It consists of applying successive voltage pulses (pre-polarization then P, U, N and D for positive, up, negative, and down, respectively) to a ferroelectric capacitor, and recording the current flow during these pulses. Positive and up voltage pulses have the same polarity, positive with respect to the bottom electrode which is grounded, whereas the negative and down voltage pulses both have negative polarity.
For P and N pulses, the current is the sum of the different contributions: (i) leakage, (ii) capacitive and (iii) switching, since the previous pulse had the opposite polarity. For U and D pulses, the current is the sum of only two contributions: (i) leakage and (ii) capacitive, since the previous pulse had the same polarity. By subtracting the currents i_P - i_U and i_N - i_D, it is possible to extract only the switching contributions for both polarities. Then numerical integration over time is used to obtain the P(E) loop for the switching contribution only. Most often, low-field measurements are based on impedance spectroscopy, which can be performed, for example, as a function of frequency and temperature to reveal relaxor phenomena<cit.> or DC bias field to determine the tunability of the material<cit.>. In ferroelectric materials, the irreversible domain wall process contributes significantly to the relative permittivity at sub-coercive driving fields<cit.>. Thus, measurement as a function of driving field enables characterization of domain wall motions that depend on the structure of the material<cit.>. In this paper, the effect of post-deposition thermal annealing on aerosol-deposited PMN–35PT thick films is investigated using methods at high and low fields. PUND analysis is used to characterize the magnitude of the ferroelectric switching contribution to the measured polarization before and after thermal treatment. The low field method is based on impedance spectroscopy as a function of driving field and as a function of frequency to find a dielectric relaxation. This is coupled with the Rayleigh analysis to further investigate domain wall mobility in the as-deposited and annealed films. PbO (99.9, Aldrich), MgO (99.95, Alfa Aesar), TiO2 (99.8, Alfa Aesar) and Nb2O5 (99.9, Aldrich) were used for the synthesis of 0.65Pb(Mg_1/3Nb_2/3)O_3-0.35PbTiO3 powder. First, two powder mixtures corresponding to PMN and PT were homogenized separately. Then, the powder mixture corresponding to PT was calcined at 750 for 2h to facilitate faster reaction kinetics.<cit.> Both powder mixtures were homogenized together and reacted for 24h in the mechanochemical-activation-assisted synthesis. The powder was then milled for 2h, annealed at 900 for 1h and finally milled for 0.5h. The powder was deposited onto 15x15x0.8 stainless-steel substrates (no. 304, American Iron and Steel Institute) at room temperature using the aerosol deposition method. Full details of powder processing and aerosol deposition conditions are described elsewhere <cit.>. A schematic of the aerosol deposition apparatus is represented in ref. SadlJMECM2019. The as-deposited thick film samples were annealed at 500 for 1h using 2 heating and cooling rates under an air atmosphere. The as-deposited and annealed films are two distinct samples with film thicknesses of 3.6 and 4.1, respectively. For electrical characterization, gold was sputtered through a shadow mask to form circular top electrodes with a diameter of 0.5. The stainless-steel substrate acted as a bottom electrode. Polarization vs. electric field (P(E)) and PUND measurements were performed using the AixACCT TF2000 ferroelectric analyzer. The current is measured as a function of time while a triangular waveform is applied with a slew rate of 4 and the delay between pulses is 1. The impedance measurement was performed using an Agilent 4294A impedance analyzer.
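To illustrate the PUND post-processing described above (subtraction of the non-switching current followed by numerical integration over time), a minimal sketch is given below; the variable names and the electrode-area argument are ours, and in practice this step is handled by the analyzer software.

```python
import numpy as np

def switching_polarization(t, i_p, i_u, electrode_area):
    """Illustrative PUND post-processing for one polarity.

    t               : common time axis for the P and U pulses (s)
    i_p, i_u        : currents measured during the P and U pulses (A)
    electrode_area  : top-electrode area (m^2)
    """
    di = i_p - i_u                                      # leakage and capacitive terms cancel
    dq = 0.5 * (di[1:] + di[:-1]) * np.diff(t)          # trapezoidal integration of the current
    q_sw = np.sum(dq)                                   # switched charge (C)
    return q_sw / electrode_area                        # switching polarization (C/m^2)

# The same subtraction with the N and D pulses gives the switching polarization of the
# opposite polarity; together they reconstruct the PUND-corrected P(E) loop.
```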
The relative dielectric permittivity ε_r' of the material inside a metal-insulator-metal topology was calculated from the measured capacitance C, based on the parallel-plate formula: ε_r' = tC/(Sε_0), with ε_0 = 8.85e-12 F/m the vacuum dielectric permittivity, S the area of the top electrode (0.284) and t the thickness of the material (3.6 for the as-deposited sample and 4.1 for the annealed sample, respectively). The imaginary part of the permittivity is given by: ε_r” = ε_r' tanδ, with tanδ the dielectric losses. All electrical characterizations were performed at room temperature, i.e. T=25. For the impedance measurement as a function of frequency, the driving field is 3.4 and for the Rayleigh analysis, the driving field goes from 0.034 to 3.4. The structural and microstructural analyses of the as-deposited and annealed PMN–35PT thick films were done with X-ray diffraction (XRD) and scanning electron microscopy (SEM) and are presented in the supplementary material. In summary, the results reveal no microstructural changes after annealing of the thick films. According to SEM, the density and porosity remain the same. In addition, XRD analysis shows that annealing has a minor influence on crystallite size, but significantly (by 41%) reduces the microstrain. Similar observations were made for other aerosol-deposited thick films. Annealing at moderate temperatures usually reduces the internal stresses, but the microstructure is preserved <cit.>. Fig. <ref> shows the P(E) loops for the annealed and the as-deposited samples. The maximum polarization is larger for the annealed sample (36 vs. 17). The polarization value at maximum electric field (480) for as-deposited and annealed samples is similar to that reported by Park et al. <cit.>. To determine the origin of this larger maximum polarization, a PUND measurement was performed and the results are shown in Figs. <ref> and <ref>. For both samples, the difference between the switching and non-switching polarization yields the switching contribution, which is represented by the PUND-corrected loops (dotted black curves). Such loops have a typical shape: they have straight horizontal lines when the field returns to zero<cit.>. The difference between maximum and minimum polarization, Δ P_m, was calculated and the values are 15.6 and 9.8 for the annealed and as-deposited samples, respectively. The higher value for the annealed sample is consistent with the larger value of polarization related to stress relaxation of compressive in-plane stress <cit.>. Nevertheless, the as-deposited sample is also ferroelectric, even though higher stresses in the film decrease the maximum polarization and increase the coercive field. Similar observations of the decrease in coercive field after thermal annealing were also reported in other aerosol-deposited PMN–35PT thick films on Si substrates, while the actual origin was not precisely determined<cit.>. Fig. <ref> shows the real and imaginary parts of the relative permittivity as a function of frequency for a driving field of 3.4. The real part of the permittivity of the annealed sample is about three times higher than for the as-deposited sample (540 vs. 219 at 100), showing the strong effect of annealing on the dielectric properties. The higher value of relative permittivity when the stress is reduced is similar to what has been reported for PMN-30PT <cit.> or for PZT <cit.> thin films. This large difference between annealed and as-deposited films is also visible in the imaginary part of the relative permittivity, 31.9 vs.
2.57 at 100, corresponding to dielectric losses of 0.057 and 0.012, respectively. The much larger permittivity and dielectric losses are consistent with a higher ferroelectric contribution in the annealed sample found in the PUND measurements in the previous part. Another notable difference in relative permittivity concerns its frequency dependence. The imaginary part of the relative permittivity of the annealed sample exhibits a clearly visible maximum at about 20, corresponding to dielectric relaxation<cit.>, while for the as-deposited sample the imaginary part decreases with frequency and has no local maximum. For the real part, the decrease with frequency is more pronounced for the annealed sample (652 to 502 from 100 to 1) than for the as-deposited sample (233 to 216), which is also due to the dielectric relaxation around 20. The measurement of relative permittivity as a function of the driving electric field, which makes it possible to deconvolve the different contributions to permittivity<cit.> (lattice, reversible and irreversible domain wall contributions), was performed for a fixed frequency of 100 and the results are shown in Fig. <ref>. The real and imaginary parts of the relative permittivity follow the generalized Rayleigh law, known as the hyperbolic law:<cit.> ε_r = ε_r-l + √(ε_r-rev^2+(E_AC α_r)^2), where ε_r-l is the lattice contribution to the permittivity, and ε_r-rev and α_r correspond to the reversible domain wall contribution, also called vibrations, and the irreversible domain wall contribution, also called domain wall pinning/unpinning, respectively. E_AC corresponds to the magnitude of the applied driving electric field. The Rayleigh parameter α_r corresponds to the slope of the asymptote at a high electric field. The higher the value, the higher is the irreversible domain wall contribution. When the electric field increases from 0.034 to 3.4, in the annealed sample, the real part increases from 538.2 to 541.6 and the imaginary part from 29.8 to 31.3, which is due to the irreversible domain wall contribution. This increase is barely visible in the as-deposited sample, indicating a very small irreversible domain wall contribution, since the increases of the real and imaginary parts are less than 0.2. The real and imaginary parts of the permittivity were fitted (using the Levenberg-Marquardt method<cit.>) to equation (<ref>) to extract the coefficients, which are listed in Table <ref>. The real and imaginary parts of the lattice contribution ε_r-l', ε_r-l”, which is the main contribution to the permittivity, follow the same trend as the total permittivity shown in Fig. <ref>, i.e. larger values for the annealed sample. The irreversible domain wall contribution, represented by the Rayleigh parameter α_r, is strongly affected by annealing. In the annealed film, the value of the real part is 10 times higher (2.38(3) vs. 0.245(9)) and the value of the imaginary part is 18 times higher (0.86(2) vs. 0.048(2)) than the values of the as-deposited thick film. This large difference corresponds to the slope difference observed in Fig. <ref>. The higher value of the irreversible domain wall contribution when the stress is reduced is similar to what has been reported for PMN–30PT<cit.> or for PZT <cit.> thin films. The reversible domain wall contribution, which is proportional to the domain wall density<cit.>, is also affected by annealing.
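The coefficients quoted in this discussion come from least-squares fits of the hyperbolic law to permittivity-versus-driving-field data of this kind. A minimal sketch of such a fit is given below; the numerical values are placeholders rather than measured data, and scipy's curve_fit (which defaults to a Levenberg-Marquardt solver for unbounded problems) is used for the regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_law(e_ac, eps_lattice, eps_rev, alpha_r):
    """Generalized Rayleigh (hyperbolic) law for the field dependence of permittivity."""
    return eps_lattice + np.sqrt(eps_rev**2 + (e_ac * alpha_r)**2)

# e_ac  : driving-field amplitudes; eps_r : real (or imaginary) part of the relative
# permittivity at each amplitude, obtained from the measured capacitance via
# eps_r' = t*C/(S*eps0). The values below are synthetic placeholders.
e_ac = np.linspace(0.034, 3.4, 20)
eps_r = hyperbolic_law(e_ac, 538.0, 2.0, 2.4) + np.random.default_rng(1).normal(0, 0.05, 20)

popt, pcov = curve_fit(hyperbolic_law, e_ac, eps_r, p0=[500.0, 1.0, 1.0])
eps_lattice, eps_rev, alpha_r = popt
e_th = eps_rev / alpha_r   # threshold field, as discussed further below
```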
For the annealed sample, the real part of the reversible domain wall contribution ε_r-rev' is 11 times higher than for the as-deposited sample, indicating a much larger domain wall density for the annealed sample. These findings are also supported by the PFM analysis (Supplementary material S3). There are no studies that directly correlate stress relaxation with domain wall density. However, the phase composition of the PMN–35PT solid solution at the morphotropic phase boundary (MPB) is very sensitive to the presence of stress. This has been demonstrated in screen-printed PMN–35PT thick films<cit.>. By using Rietveld refinement analysis, it was shown that the ratio between the monoclinic (Pm) and tetragonal (P4mm) phase varies with the magnitude of in-plane stress in PMN–35PT thick films. Therefore, the change in the stress magnitude promotes polarization rotation and thus phase transformation in PMN-35PT thick films, which was also previously observed for bulk PMN–PT <cit.> and other bulk materials at MPB<cit.>. In contrast, the aerosol-deposited thick films exhibit XRD peak broadening typical for this deposition method, which complicates the determination of the phase composition of the crystal phases. Nevertheless, we can assume that the stress relaxation in aerosol-deposited films also induces phase transformation in the MPB compositions, which leads to a change in domain structure affecting also the domain wall density. Based on the reversible and irreversible domain wall contributions, it is possible to calculate the threshold field:<cit.> E_th = ε_r-rev'/α_r'. The threshold field represents the degree of domain wall pinning in the material<cit.>. For the two samples, the values are very similar, 3.56(7) for the annealed sample and 3.2(2) for the as-deposited sample. This means that annealing does not change the depth of the pinning centers in the material, and the difference in terms of the driving field sensitivity, represented by the Rayleigh parameter α_r, is due to the different number of domain walls. The values obtained are of the same order of magnitude as in 0.5Pb(Yb_1/2Nb_1/2)O3 -0.5PbTiO3 <cit.> (E_th = 2.2), Pb(Zr_0.57Ti_0.43)O3<cit.> (E_th = 2.7) or well-oriented Ba_2/3Sr_1/3TiO3<cit.> (E_th = 1.9), indicating a low depth of the pinning centers and a high mobility of the domain walls. Using the real and imaginary parts of the individual contributions, the associated dissipation factor can be calculated as m_x = x”/x', where x' and x” are the real and imaginary parts, respectively, of ε_r-l, ε_r-rev and α_r. For the lattice contribution, the m_r-l value for the annealed sample is 5 times higher, which is in agreement with the observations on the whole permittivity and is due to the dielectric relaxation around 20. It can be seen that the dissipation factor for the irreversible domain wall contribution m_α_r is one order of magnitude higher than for the lattice contribution. After annealing, the dissipation factor is higher due to the higher density of the domain walls, as there are more interactions between the domain walls<cit.>.
For the reversible domain wall contribution, the dissipation factor is 8 times higher for the annealed sample than for the as-deposited sample. This large difference is again attributed to the interaction between the domain walls, which greatly affects the dissipative behavior of this contribution<cit.>, since before annealing the low domain wall density results in small interactions, while after annealing the large domain wall density results in much larger interactions. To summarize, in this study, the effect of thermal annealing at 500 on PMN–35PT thick films was characterized using methods at high field (P(E) and PUND) and low field (impedance spectroscopy). The measurement of P(E) loops shows that annealing increases the maximum polarization from 17 to 36. The PUND method was used to distinguish the switching and non-switching contributions to the polarization, and it shows that both samples, as-deposited and annealed, have a switching contribution and that annealing increases this contribution to the polarization by a factor of 1.6. Moreover, the PUND shows that the increase in polarization is also due to the higher non-switching contribution, i.e. the capacitive contribution, resulting from a higher relative permittivity. Using the measurement of dielectric properties as a function of frequency, we show that the dielectric relaxation around 20 at 298 occurs only when the sample has been annealed. In addition, the annealed sample exhibits larger relative permittivity and dielectric losses, which is due to a stronger ferroelectric behavior of the material and is consistent with the PUND measurement. Impedance spectroscopy as a function of the driving field magnitude, coupled with the Rayleigh analysis, allows us to determine the various contributions to the relative permittivity: lattice, reversible, and irreversible domain wall contributions. For the as-deposited sample, the contribution of the domain wall motions to permittivity is very small, which is due to a low density of the domain walls. For the annealed sample, the domain wall motion contribution to the permittivity is much higher, which is due to a higher domain wall density. However, values of the threshold field suggest that the domain wall mobility of the samples remains similar after thermal annealing, and therefore no additional defects acting as domain wall pinning centers are induced. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request. § SUPPLEMENTARY MATERIAL See supplementary material for structural and microstructural analyses of the as-deposited and annealed PMN–35PT thick films. § ACKNOWLEDGMENTS This work has been performed with the means of the CERTeM (microelectronics technological research and development center) of French region Centre Val de Loire. HU and MS acknowledge the Slovenian Research Agency (project J2-3058, bilateral project BI-FR/21-22-PROTEUS-004, young researcher project, research core funding P2-0105) and JSI Director’s fund 2017-ULTRACOOL. § CONFLICT OF INTEREST The authors declare no competing financial interest.
http://arxiv.org/abs/2310.18005v1
{ "authors": [ "Kevin Nadaud", "Matej Sadl", "Micka Bah", "Franck Levassort", "Hana Ursic" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20231027092704", "title": "Effect of thermal annealing on dielectric and ferroelectric properties of aerosol-deposited $0.65\\text{Pb}(\\text{Mg}_{1/3}\\text{Nb}_{2/3})\\text{O}_{3}-0.35\\text{PbTiO}_{3}$ thick films" }
Most reinforcement learning methods rely heavily on dense, well-normalized environment rewards. DreamerV3 recently introduced a model-based method with a number of tricks that mitigate these limitations, achieving state-of-the-art results on a wide range of benchmarks with a single set of hyperparameters. This result sparked discussion about the generality of the tricks, since they appear to be applicable to other reinforcement learning algorithms. Our work applies DreamerV3's tricks to PPO and is the first such empirical study outside of the original work. Surprisingly, we find that the tricks presented do not transfer as general improvements to PPO. We use a high-quality PPO reference implementation and present extensive ablation studies totaling over 10,000 A100 hours on the Arcade Learning Environment and the DeepMind Control Suite. Though our experiments demonstrate that these tricks do not generally outperform PPO, we identify cases where they succeed and offer insight into the relationship between the implementation tricks. In particular, PPO with these tricks performs comparably to PPO on Atari games with reward clipping and significantly outperforms PPO without reward clipping. § INTRODUCTION Reinforcement learning (RL) has shown great promise in addressing complex tasks across various domains. However, each new application typically requires environment-specific tuning and significant engineering effort. The Dreamer <cit.> line of work has focused on world-modeling to achieve high performance across a wide range of tasks. The most recent version, DreamerV3, outperforms the previous version with a single set of hyperparameters shared across all tasks. DreamerV3 introduces several stability and performance-enhancing techniques to accomplish this, but it also includes several orthogonal changes such as a larger architecture. Additionally, most of these new techniques are not inherently tied to the world-modeling formulation and could potentially be applied in other reinforcement learning settings. <ref> categorizes the tricks according to which part of the DreamerV3 algorithm they apply to. Specifically, the following techniques are directly applicable to model-free actor-critic methods:
* Symlog Predictions: A transformation applied to the targets of a neural network, helping to smooth predictions of different magnitudes
* Twohot Encoding: A discrete regression objective representing continuous values as a weighting of two adjacent buckets
* Critic EMA Regularizer: Regularizes the critic outputs towards its own weight Exponential Moving Average (EMA) to improve stability during training
* Percentile Scaling: Scales returns by an exponentially decaying average of the range between their 5th and 95th batch percentile to improve robustness to outliers and varying reward scales
* Unimix Categoricals: Combines 99% actor network outputs with 1% uniform random sampling to inject entropy into action selection
Proximal Policy Optimization (PPO) <cit.> is a popular RL algorithm due to its simplicity and widespread success in a variety of domains. PPO is an actor-critic algorithm that approximates a trust-region optimization strategy to update the policy.
The objective function for PPO is given by: L(θ) = 𝔼_t[min(r_t(θ)Â_t, clip(r_t(θ), 1 - ϵ, 1 + ϵ)Â_t)], where r_t(θ) = π_θ(a_t | s_t)/π_θ_old(a_t | s_t), Â_t is the estimated advantage function, and ϵ is a hyperparameter controlling the size of the trust region. Our work applies the stability techniques introduced in DreamerV3 to PPO, aiming to improve the algorithm's stability and performance. We adapt these techniques for the slightly different algorithmic setting of model-free learning and provide a working implementation tested on various environments. We provide comprehensive ablations demonstrating that these techniques generally do not enhance the performance of PPO. We believe this negative result merits sharing because of the widespread interest in DreamerV3 and serves to temper expectations of the generality of the tricks presented. The contributions of our work include:
* Applying and adapting DreamerV3's stability techniques to PPO
* Demonstrating the effects of these techniques on PPO's stability and performance across diverse environments
* Analysis of the strengths and weaknesses of each trick, with a more detailed exploration of the two most promising tricks, twohot encoding and symlog predictions
* A high-quality implementation in CleanRL to enable further research in this direction
§ RELATED WORK
Application of DreamerV3 Tricks
Implementation Trick     Actor   Critic   World Model
Symlog Predictions         X       X          X
Twohot Encoding                    X          X
Percentile Scaling         X
Critic EMA Regularizer             X
Unimix Categoricals        X                  X
The Dreamer line of work focuses on learning from imagined experiences using world models. Each version improves upon the previous, with DreamerV3 achieving state-of-the-art results on a variety of tasks. While these algorithms primarily focus on world-modeling, we are interested in model-free algorithms because of their simplicity and widespread applicability. This work focuses on PPO, which has become the primary algorithm for applying RL to new domains in recent years. Numerous extensions and improvements to PPO have been proposed, such as incorporating multiple optimization phases <cit.> or exploration bonuses <cit.>. Several works have focused on improving the stability of reinforcement learning algorithms. For instance, Pop-Art <cit.> introduced adaptive normalization of value targets, while R2D2 <cit.> and Ape-X <cit.> address the challenges of off-policy learning. However, these approaches often require extensive modifications to the base algorithms or focus on specific challenges, whereas our work studies a set of general stability techniques that are designed to apply more broadly to various reinforcement learning settings. Previous work has studied the impact of the implementation details of PPO on its performance. <cit.> described the implementation details of PPO and explored how each one impacts its performance. <cit.> explores how much of PPO's performance over Trust Region Policy Optimization <cit.> can be attributed to algorithmic improvements versus implementation tricks. None of these discuss the methods explored in our work, though some of the tricks they discuss serve a similar purpose. In this work, we build on the stability techniques introduced in DreamerV3, exploring their applicability beyond world-modeling. Our approach enhances the performance and stability of PPO when rewards are unbounded. However, as we further show, reward clipping is a simple and strong baseline that is hard to beat.
Our work highlights the strengths and weaknesses of these techniques. § METHODS In this section, we describe the application of the stability techniques introduced in DreamerV3 to the PPO algorithm. We detail each technique and discuss the necessary adaptations for incorporating them into PPO. We implement these tricks as minimal extensions to CleanRL's extensively validated and benchmarked <cit.> PPO implementation. We used both DreamerV3's open source code base and the paper as references. We found the implementations of all tricks to be consistent with their descriptions in the paper, and we checked several small ambiguities with the authors. We performed manual hyperparameter tuning for each of the implementation tricks, as well as automatic hyperparameter tuning of the learning rate, entropy coefficient, and value loss coefficient using Optuna <cit.>, for the full algorithm with all tricks enabled. We found no improvement in performance and therefore chose to keep CleanRL's default hyperparameters. We also compared our code with all tricks disabled to the original unaltered scripts to ensure that there were no performance regressions as a result of improperly implemented controls on the tricks. Finally, we have open-sourced the implementation of our tricks and experiments at https://github.com/RyanNavillus/PPO-v3. §.§ Symlog Predictions In DreamerV3, symlog predictions are used to compress the magnitudes of target values. Symlog approximates the identity function near the origin so that it has little impact on returns that are already small. The symlog transform and inverse symexp transform are defined in <cit.> as: symlog(x) ≐ sign(x) ln(|x|+1) and symexp(x) ≐ sign(x)(exp(|x|)-1). We also apply the same transformation to the observations in environments with continuous vector observations and call this method symlog observations. To use symlog predictions in PPO, we calculate the value loss using the symlog transformation, and we apply the symexp operation to outputs from the critic network exactly the same as in <cit.>. When twohot encoding is disabled, for a critic f(x,θ) with inputs x and parameters θ we use MSE loss to predict the symlog-transformed returns y: ℒ(θ) ≐ 1/2(f(x,θ) - symlog(y))^2. §.§ Twohot Encoding Twohot encoding is a generalization of one-hot encoding that represents continuous values as a weighting between two adjacent buckets. For integer values, this is identical to the one-hot encoding of the value. This discrete regression technique allows the critic to predict a distribution over buckets instead of a single value. The definition of twohot encoding from <cit.> is shown in <ref>: twohot(x)_i ≐ |b_k+1 - x|/|b_k+1 - b_k| if i=k, |b_k - x|/|b_k+1 - b_k| if i=k+1, and 0 otherwise, where k ≐ ∑_j=1^B δ(b_j < x). We implement two-hot encoding by replacing the critic value output with 255 bins as in DreamerV3 and calculating the critic value output as the expected value of the softmax distribution of those logits. The critic learns via categorical cross entropy between the critic logits and the twohot encoding of bootstrapped returns from the environment. As in <cit.>, when using two-hot encoding we initialize the critic logits to zero to prevent it from predicting extremely large values at the start of training. In ablations with symlog enabled, we follow DreamerV3 and set the range of the twohot bins to [-20, 20], allowing the critic to represent values from [-exp(20), exp(20)].
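For concreteness, a minimal PyTorch-style sketch of how the symlog transform and twohot targets described above can be combined into a discrete-regression critic loss is shown below. CleanRL's PPO implementation is PyTorch-based, but the helper names, shapes, and bin construction here are ours, not taken from that code base.

```python
import torch
import torch.nn.functional as F

def symlog(x):
    return torch.sign(x) * torch.log1p(torch.abs(x))

def symexp(x):
    return torch.sign(x) * (torch.exp(torch.abs(x)) - 1.0)

bins = torch.linspace(-20.0, 20.0, 255)   # 255 bin values spanning the symlog-transformed range

def twohot(y, bins):
    """Weight each scalar target over the two neighbouring bins."""
    y = y.clamp(bins[0].item(), bins[-1].item())
    k = torch.clamp(torch.bucketize(y, bins) - 1, 0, len(bins) - 2)
    w_hi = (y - bins[k]) / (bins[k + 1] - bins[k])        # weight assigned to bin k+1
    enc = torch.zeros(*y.shape, len(bins))
    enc.scatter_(-1, k.unsqueeze(-1), (1.0 - w_hi).unsqueeze(-1))
    enc.scatter_(-1, (k + 1).unsqueeze(-1), w_hi.unsqueeze(-1))
    return enc

def critic_loss(critic_logits, returns):
    """Categorical cross-entropy between critic logits and twohot(symlog(returns))."""
    target = twohot(symlog(returns), bins)
    return -(target * F.log_softmax(critic_logits, dim=-1)).sum(-1).mean()

def critic_value(critic_logits):
    """Expected value under the softmax distribution, mapped back with symexp."""
    return symexp((F.softmax(critic_logits, dim=-1) * bins).sum(-1))
```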
When symlog is disabled, we instead choose a range of [-15000, 15000] for Atari environments without reward clipping enabled, and [-1000, 1000] for Atari environments with reward clipping enabled as well as the DeepMind Control Suite. These ranges were chosen to allow the critic to represent the maximum and minimum values seen during training. We find that the choice of range for two-hot encodings has a significant impact on performance. We include limited experiments demonstrating this relationship in <ref> §.§ Critic EMA Regularizer <cit.> regularize the critic towards an Exponential Moving Average (EMA) of its own outputs, improving stability during training.We use the critic EMA to regularize the critic loss using the same decay rate (0.98) and regularizer coefficient (1.0) as DreamerV3. We tried tuning these values but found little difference in performance. The EMA is updated once during each optimization step. For our ablation studies, when two-hot encoding is enabled, we use categorical cross-entropy loss between the critic logits and EMA logits. When two-hot is disabled, we calculate the Mean Squared Error (MSE) between the critic and EMA value predictions. §.§ Percentile Scaling As discussed in <cit.>, previous work on PPO has typically scaled returns by dividing them by their exponentially decaying standard deviation. However, they point out that when rewards are sparse and the standard deviation is small, this method amplifies the noise in small returns, preventing the policy from exploring sufficiently. Percentile scaling instead tracks an exponentially decaying average of the 5th and 95th percentile of each batch of returns. Scaling advantages by the difference between these percentiles promotes exploration while allowing DreamerV3 to use a fixed entropy scale. Instead of directly scaling returns as in DreamerV3, we scale the advantages predicted by Generalized Advantage Estimation. As in <cit.> we only scale advantages when the scaling factor (the difference between the 5th and 95th percentile) is greater than 1 to avoid amplifying errors in small advantages.To add this method to PPO, we compute bootstrapped returns in the same way as DreamerV3, then scale the advantages used to calculate the policy loss. Note that we do not modify the values or returns used to calculate the critic loss. We also found that when the EMA updated too quickly, it could lead returns to smoothly drop to 0 in the DeepMind Control Suite. To counteract this, we change the percentile EMA decay rate from 0.99 to 0.995 for the DeepMind Control Suite and 0.999 for the Arcade Learning Environment. We also found that in practice this method works best in combination with the standard advantage normalization used in PPO. §.§ Unimix CategoricalsUnimix categoricals are a mixture of 99% neural network outputs and 1% random uniform sampling used for action selection. This prevents the probability of selecting any action from being zero, and encourages exploration similar to an entropy bonus. Unimix categoricals are only used in the Atari experiments because the DeepMind Control Suite does not use categorical actions. We do not experiment with changing the unimix ratio in this paper and use 1% as in <cit.>.§ EXPERIMENTS In the following sections, we explain the environments we use to evaluate the implementation tricks. We test each trick on the entire environment suite across multiple seeds. 
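Returning briefly to the percentile-based advantage scaling described in the methods above, the following sketch illustrates the bookkeeping involved. The class structure and names are ours, and DreamerV3 itself applies the analogous normalization to returns rather than to GAE advantages.

```python
import torch

class PercentileScaler:
    """Illustrative percentile-based advantage scaling (a sketch, not the reference code)."""

    def __init__(self, decay=0.995):
        self.decay = decay
        self.low = None    # EMA of the 5th percentile of batch returns
        self.high = None   # EMA of the 95th percentile of batch returns

    def update(self, returns):
        low = torch.quantile(returns, 0.05)
        high = torch.quantile(returns, 0.95)
        if self.low is None:
            self.low, self.high = low, high
        else:
            self.low = self.decay * self.low + (1 - self.decay) * low
            self.high = self.decay * self.high + (1 - self.decay) * high

    def scale(self, advantages):
        rng = self.high - self.low
        # only divide when the range exceeds 1, to avoid amplifying small advantages
        return advantages / torch.clamp(rng, min=1.0)

# Usage: scaler.update(batch_returns) once per batch, then
# advantages = scaler.scale(advantages) before the usual PPO advantage normalization.
```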
We used approximately 8000 GPU hours for the experiments in this paper as well as 4000 more for testing and development, most of which were run on Nvidia A100s. We report results using standard metrics as well as those provided by the RLiable library as recommended in <cit.>. They describe a methodology and metrics for creating reproducible results with only a handful of runs by reporting uncertainty. These metrics include 95% stratified bootstrap confidence intervals for the mean, median, interquartile mean (IQM), and optimality gap (the amount by which an algorithm fails to meet a minimum normalized score of 1). §.§ Environments §.§.§ Atari 100M We train agents on each of the 57 environments in the Arcade Learning Environment <cit.> for multiple seeds in each of our ablations. In this paper, we are examining whether these tricks improve PPO's robustness to different reward scales. The high scores, or maximum possible returns, for Atari games range from 10 for Surround to 89 million for VideoPinball. They also have drastically different reward densities, where some games guarantee a reward at each step while others require the agent to learn complex behavior to experience any positive reward. Typically, previous work on the Arcade Learning Environment has used reward clipping to limit individual step rewards to 1 <cit.>. This significantly reduces the scale of returns and increases the effective density of rewards by weighting all nonzero rewards equally. To better study different reward scales, we perform ablations with and without reward clipping enabled. Aside from reward clipping, we use the standard wrappers recommended by <cit.>. For Atari experiments, we use the standard benchmark of median human-normalized scores <cit.> instead of episodic returns. §.§.§ DeepMind Control Suite We also test our method on 35 proprioceptive control environments from the DeepMind Control Suite. These are physics-based environments with well-normalized reward functions. Each environment has a minimum return of 0 and a maximum return of 1000, allowing us to test how these tricks perform on environments with returns that are already well-normalized. Rewards can also take on non-integer values, unlike in Atari environments. In our plots, we divide returns by 1000 to limit them to the range [0, 1] and ensure that optimality gap has a consistent definition across environments. §.§ Enabling All Tricks We first compare our version of PPO with all of the stability tricks enabled to the PPO baseline in each set of environments in <ref>. When reward clipping is enabled, we see that PPO achieves similar performance with and without the tricks. When reward clipping is disabled, the stability tricks allow PPO to recover most of the performance of PPO with reward clipping. This suggests that the tricks make PPO significantly more robust to varying reward scales, though they slightly underperform a simple reward clipping baseline. §.§ Add-One Ablations We perform add-one ablations where we enable one trick at a time and evaluate on both the DeepMind Control Suite and Atari environments (with and without reward clipping) to determine if any tricks provide a general improvement to the PPO baseline. The results are shown in <ref>. We find that all of the tricks perform comparably or slightly worse than the PPO baseline on the DeepMind Control Suite and Atari with reward clipping. In particular, symlog predictions and twohot encoding underperform compared to the baseline.
However, we see that symlog predictions dramatically improve performance when reward clipping is disabled, and all of the remaining tricks perform comparably or better on most metrics. This indicates that the tricks are effective as return normalization tools, but underperform when returns are already normalized to reasonable ranges. §.§ Drop-One Ablations We perform drop-one ablations where we enable all tricks and disable one at a time to see the interactions between tricks and identify combinations that might outperform PPO. The results are shown in <ref>. In these Atari experiments, reward clipping is disabled when symlog is enabled. Our drop-one ablations focus on symlog predictions, twohot encoding, percentile scaling, and the critic EMA regularizer. We exclude unimix categoricals from the Atari drop-one ablations because they had little impact in the add-one experiments and do not interact with the other tricks. Likewise, we exclude symlog observations from the DeepMind Control Suite drop-one ablations because they do not interact with the other tricks. We find that for the DeepMind Control Suite, removing twohot encoding and the critic EMA regularizer actually improves performance. Removing percentile scaling and symlog predictions seems to harm performance, while in the Arcade Learning Environment only removing symlog predictions harms performance. In our add-one ablations for both environment suites, adding symlog predictions and twohot encodings to PPO also performed worse than the baseline. Our twohot encoding experiments require us to define a single bin value range across each entire environment suite, which may be causing its lackluster performance. To diagnose this issue, we explore the interaction between symlog predictions and twohot encoding in the following section. §.§ Symlog Predictions and Twohot Encoding Twohot encoding can only represent the bounded range of values covered by its bins. In Atari environments with widely-ranging possible scores, it can be difficult or impossible to choose tight bounds, which is why it is always paired with a symlog transform in <cit.>. This section provides additional context on the interaction between these tricks, and the results are displayed in <ref>. We first examine the effects that the range and number of bins have on the performance of PPO with twohot encoding. Large bounds also seem to have a detrimental effect on learning, possibly by reducing the effective number of bins used to predict values. We see that mean episodic return increases with the number of bins, then falls off at a much slower rate as we increase past the optimal number. Surprisingly, the agent only suffers a small drop in performance even when the twohot range is set to 1. It's possible that the performance loss would be more significant in an environment with larger returns than Breakout, where the critic value prediction is much farther from the true value. We also see in <ref> that even when combined with symlog predictions, twohot encoding underperforms compared to base PPO across the entire Arcade Learning Environment. § DISCUSSION The tricks tested in this paper allow PPO to achieve strong performance in environments with drastically different reward scales using a single set of hyperparameters, while the original PPO algorithm is unable to do so. Our modified version of PPO performs comparably to the original algorithm in Atari games with and without reward clipping, but we see that our agents perform worse on the DeepMind Control Suite.
This suggests that the tricks may be poorly suited to environments with normalized, bounded, continuous rewards. Symlog predictions are by far the most impactful trick in all experiments. Conversely, all of our experiments seem to suggest that twohot encoding is a detrimental addition to PPO even in combination with other tricks. Percentile scaling, the critic EMA regularizer, and unimix categoricals all slightly improve performance when we disable reward clipping, and slightly harm performance when we enable it, again suggesting that they are most useful in environments without normalized returns. Symlog observations underperform in the DeepMind Control Suite, but could be useful in environments with larger unbounded observations.In this work, we applied stability tricks introduced by DreamerV3 to PPO and demonstrated that they do not result in a general performance improvement, but can be valuable in specific cases. The fact that these tricks do not benefit PPO raises the question of how exactly they impact DreamerV3. These tricks are directly applicable to PPO and required little to no modification to implement. It is possible that they may work for the entropy-regularized actor-critic that DreamerV3 uses, which is likely a weaker baseline than PPO on its own. It's also possible that symlog predictions, twohot encoding, or unimix categoricals are specifically beneficial to world model learning, or that the world model-specific tricks not studied in this paper are the source of its improved performance. We note that their use of SiLU activation functions <cit.>, layer normalization <cit.>, and a 200 million parameter architecture may also contribute to its state-of-the-art results. We include one experiment using a similar architecture in the appendix, but find that it does not work well for PPO.<cit.> make few direct comparisons to the previous DreamerV2 method and include limited ablations on only six environments, so it is difficult to confirm exactly how each trick contributes to performance. The ablation studies in this paper should serve as a valuable reference for studying these tricks and developing new methods that promote robustness to varying reward scales in reinforcement learning. Solving this problem would allow RL to more easily be applied to new problems without extensive engineering efforts. DreamerV3 achieves state-of-the-art results, but our results show that the relationship between these implementation tricks and those results may not be as straightforward as it would seem otherwise. Due to the importance of this problem and the impressive results of DreamerV3, we believe these tricks deserve further investigation both using DreamerV3 and in new contexts. § LIMITATIONS Due to the many differences between PPO and DreamerV3, we cannot say whether the findings in this paper transfer to DreamerV3 or other similar algorithms. Our experiments also do not fully cover the range of tasks evaluated in the original DreamerV3 paper. We have focused on a more limited set of environments in order to provide thorough, high-quality ablations. We experiment on Atari environments without reward clipping to study the effects of each trick on poorly normalized reward scales, and the DeepMind Control Suite for environments with well-normalized returns, but it's possible that we would see different results on more complex environments.§ CONCLUSION We have presented a thorough study of the stability techniques introduced in DreamerV3 to the widely used PPO algorithm. 
These experiments allow us to identify key areas where tricks improve performance and demonstrate the broader applicability and potential benefits of these techniques for the reinforcement learning community. In the spirit of openness and reproducibility, we have released our complete code base and experiment data at https://github.com/RyanNavillus/PPO-v3, further promoting the adoption and study of these techniques. § ACKNOWLEDGMENTS We would like to thank James MacGlashan for his helpful comments and suggestions, as well as CarperAI for providing the majority of the compute used for our experiments. § APPENDIX A: ATARI ADD-ONE ABLATIONS § APPENDIX B: ATARI DROP-ONE ABLATIONS § APPENDIX C: DEEPMIND CONTROL SUITE ADD-ONE ABLATIONS § APPENDIX D: DEEPMIND CONTROL SUITE DROP-ONE ABLATIONS § APPENDIX E: ARCHITECTURE We use the 20 million parameter XL DreamerV3 encoder and actor-critic architecture in PPO and compare its performance to the 1 million parameter Nature CNN used in the rest of our experiments. § APPENDIX F: PPO HYPERPARAMETERS
http://arxiv.org/abs/2310.17805v1
{ "authors": [ "Ryan Sullivan", "Akarsh Kumar", "Shengyi Huang", "John P. Dickerson", "Joseph Suarez" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231026224030", "title": "Reward Scale Robustness for Proximal Policy Optimization via DreamerV3 Tricks" }
Bidisperse beds sheared by viscous fluids: Grain segregation and bed hardening This article may be downloaded for personal use only. Any other use requires prior permission of the author and AIP Publishing. This article appeared in Phys. Fluids 35, 103326 (2023) and may be found at https://doi.org/10.1063/5.0168415. Also at: School of Mechanical Engineering, UNICAMP - University of Campinas, Campinas–SP, Brazil Departamento de Petróleos, Escuela Politécnica Nacional, Av. Ladrón de Guevara E11-253, Quito, Ecuador Department of Earth and Environmental Sciences, University of Rochester, Rochester, NY 14627, [email protected] *Corresponding author School of Mechanical Engineering, UNICAMP - University of Campinas, Rua Mendeleyev, 200, Campinas, SP, Brazil When a granular bed is sheared by a fluid that flows above a critical limit, it undergoes a complex motion that varies along time: it can contain fluid- (bedload) and solid-like (creep) regions, being prone to strain hardening and, in case of polydispersity, segregation. In this paper, we investigate experimentally the short- and long-time evolution of a bidisperse bed sheared by a viscous liquid. Different from previous experiments, the density ratio between grains and fluid is 2.7, close to values found in rivers and oceans. We show the existence of diffusive, advective and constrained regions, that most of segregation occurs during the very first stages of the flow, and that bed hardening becomes stronger while bedload and creep weaken along time. We obtain the segregation rates, their evolution along time, their variation with the applied shearing, and the time evolution of creeping and bedload. Finally, we propose characteristic times for the segregation of large particles and bed hardening. Our results shed light on the complex motion of sheared beds existing in nature, such as river beds and creeping lands. Erick M. Franklin* January 14, 2024 ====================== § INTRODUCTION The transport of grains by a fluid flow is frequently observed in nature, such as can be found in rivers and deserts. Whenever the ratio between the entraining (due to the fluid shearing) and resisting (due to gravity) forces is within moderate values, bedload and creep can occur within the granular bed. In the case of air, bedload consists of saltating grains that effectuate ballistic flights and which, by impacting onto the bed, move part of the non-saltating grains by creep motion <cit.>. In the case of liquids, bedload is a moving layer in which grains roll, slide or effectuate small jumps while keeping contact with the lower part of the bed. This lower part has been described as being static <cit.>, but recently it has been shown that it may creep, with movements caused by very slow rearrangements of grains <cit.>. In addition, Houssais et al. <cit.> and Allen and Kudrolli <cit.> showed that the creeping layer can exist even when shear stresses are below the threshold for bedload (so that a bedload layer is absent). In particular, Houssais et al.
<cit.> showed that within the granular bed there is a continuous transition between bedload and creep, and proposed that this transition occurs at a height characterized by a viscous number <cit.> I_v equal to 10^-7, where I_v is the ratio between the microscopic (related to the rearrangements of grains) and macroscopic (related to the macroscopic rate of deformation) timescales.Sheared granular beds usually experience hardening, with grains having their mobility reduced along time. Bed hardening has been identified as one of the causes of the increase along time of the bedload threshold, the minimum shear stress necessary for bedload to take place. With flume experiments, Charru et al. <cit.> and Masteller and Finnegan <cit.> measured the decay in the mobility of a bedload layer and proposed that the decrease was due to bed hardening, in its turn caused by purely geometric rearrangements of grains, i.e., the simple percolation of grains migrating to vacancies (leading to an increase in bed compactness). This explanation implies that bed hardening would be of isotropic nature. Later, Masteller et al. <cit.> identified hardening of a river bed by analyzing a 19-year-series dataset of fluid stress and sediment transport measured in the Erlenbach river.Over the past decades, many authors have investigated the jamming of granular materials under normal and shear stresses <cit.>, which bears some connection with the bed hardening observed in granular beds sheared by fluids. Cates et al. <cit.> showed that fragile states may appear in colloidal suspensions and granular materials by the formation of force chains aligned in preferential directions, the materials being able to support loading, and then jamming, in such directions, but being unable to support loading in other ones, with consequent unjamming. Bi et al. <cit.> showed that granular matter is subject to fragile states and shear jamming when external shear stresses are applied, in addition to the isotropic jamming that appears even in shear-free conditions. They observed that both fragile and shear-jammed states appear at lower particle fractions than those necessary for isotropic jamming, the fragile state appearing under small shear stresses and being characterized by a one-directional force network, while the shear-jammed state appears under stronger shear stresses and is characterized by a force network that percolates in different directions. The appearance of these different states may be regarded as memory formation, where out-of-equilibrium systems may keep information about the past <cit.>. Recently, Cúñez et al. <cit.> carried out experiments in a circular channel to investigate the response of a granular bed to fluid-shear stress cycles of varying magnitude and direction, and determined the isotropic (due to bed compaction) and anisotropic (due to shear-induced orientation) contributions. They showed that the application of an external shearing in a given direction produces, along time, an anisotropic structure that keeps memory of the applied shear, with the corresponding fragile and/or jamming states. When, however, the shear direction is reversed, the former anisotropic structure is wiped out, causing memory erasure and the formation of a new anisotropic structure after a characteristic time. 
They found that sediment transport promotes direction-dependent strain hardening for moderate shear stresses, due to an accumulated memory from the past, while higher stresses fluidize part of the bed, engendering dilation-induced weakening and memory loss. Finally, they quantified the hysteresis in sediment transport depending on the orientation of varying flows.In addition to hardening caused by bed compactness and shear-induced orientation, polydisperse beds may have their mobility reduced by natural armoring: the segregation leading to a higher concentration of larger grains on the bed surface. Those grains shield smaller particles from regions where the flow is stronger, hardening the bed <cit.>. Ferdowsi et al. <cit.> carried out experiments where a bidisperse granular bed was sheared by a steady viscous flow, and numerical simulations of a bed sheared by a layer of particles (without fluid). In their experiments, however, the ratio between the solid (ρ_s) and fluid (ρ) densities was S = 1.13 (close to unity), so that the gravity effects were much lower than in natural flows of water and sand (S ≈ 2.65). They found that bed armoring is mainly due to segregation, with the upward motion of large particles occurring from lower regions in the bed. They also showed the existence of two distinct layers: one, close to the surface, where bedload propel a fast shear-dependent segregation and which they associate with an advection mechanism, and another one below, where creep drives a slow segregation and which they associate with a diffusion-like mechanism.Segregation in bidisperse bedload and debris flow were also investigated at relatively low timescales (those for which creeping cannot be measured). Zhou et al. <cit.> investigated the effect of the interstitial fluid in particle segregation taking place in debris flows. By comparing the outputs of numerical simulations with and without the presence of an interstitial fluid (for both inertial and viscous fluids), they found that segregation is weaker and slower in the presence of an interstitial fluid, and that it weakens with increasing the fluid viscosity. The authors propose that both buoyancy and shear-rate alterations by the presence of fluids change the particle-particle dynamics that leads to the upward motion of large particles. Those results were later corroborated by Cui et al. <cit.>, who investigated numerically the effect of different interstitial fluids in a confined granular medium under imposed shearing. The authors found that, indeed, the segregation decreases with increasing the fluid viscosity above a lower limit. Below that limit, segregation does not vary with viscosity, but with the density ratio and fluid inertia. Cui et al. <cit.> propose that viscous effects weaken particle-particle contacts and dampen particle fluctuations, decreasing the rate of segregation. Rousseau et al. <cit.> inquired into the upward motion of a single large particle (intruder) within a bedload layer of smaller particles. They found that the upward motion of the intruder has two phases, a first one which is intermittent and slow, and a second one consisting of a fast motion to the bed surface. However, it is possible that the first phase occurs in the limit between the creep and bedload layers (not in the bedload layer itself). In a different approach, Frey et al. <cit.> carried out experiments in which they tracked a sinking layer of smaller particles within a bidisperse bedload layer under turbulent flow. 
The authors found that the sink velocity of small particles decreases logarithmic in time and varies with the particle-particle shear rate, which decays exponentially with depth. They also found that small particles reach a final depth, forming a layer there. We conjecture that, perhaps, that layer takes place in the limit between the bedload and creep layers. Although previous works increased our knowledge on the importance of shear and segregation for bed armoring, with the consequent variation in sediment transport, questions such as the structure of shear-induced hardening in bidisperse beds, segregation rates, and long-time evolution for moderate-weight beds (S > 1.5) remain open. In this paper, we investigate the evolution of a bidisperse bed sheared by a viscous liquid. For that, we carried out experiments in which we made use of RIM (refractive index matching) visualizations and, different from previous experiments, the ratio between grains and fluid was 2.7, close to values found in rivers and oceans. We show the existence of diffusive, advective and constrained regions, that most of segregation occurs during the very first stages of the flow (first 20–80 minutes), and that bed hardening becomes stronger while bedload and creep weaken along time. We obtain the segregation rates, their evolution along time, their variation with the applied shearing, and the time evolution of creeping and bedload. Finally, we propose characteristic times for both the segregation of large particles and bed hardening. Our results provide new insights into the physical mechanisms of segregation and bed hardening occurring in nature, such as in river beds and creeping lands. In particular, we show how segregation takes place in polydisperse beds found in nature (leading to bed armoring), how the lower layer (creeping layer) of a granular bed compacts (promoting bed hardening), and how grains are rearranged within the bed (which hardens the bed while keeping memory effect <cit.>). These results are important for understanding sediment transport found ingeophysical flows, hydraulics, and engineering applications.In the following, Sec. <ref> presents the experimental setup, Sec. <ref> shows the results and Sec. <ref> concludes the paper. § EXPERIMENTAL SETUPThe experimental device consisted basically of an annular (circular) flume with a rotating lid connected to a computer-controlled stepper motor, so that the lid rotation imposed a shear driven flow inside the flume. The flume had mean radius R = 18 cm, internal width W = 40 mm, and internal height H = 30 mm, being completely filled with controlled grains and liquid (described next), and the ensemble was mounted over an optical (heavy) table aligned horizontally. Figure <ref>(a) shows a photograph of the ensemble mounted over the optical table (a layout of the experimental setup is available in the supplementary material).A bidisperse granular bed of height 24 mm ≤ h ≤ 25 mm was set up in the flume, and the remaining space was filled with a viscous liquid whose refractive index was matched with that of the bed. The bed consisted of larger and smaller glass spheres (soda–lime–silica glass) with density ρ_s = 2500 kg/m^3 and diameters d_1 = 3 mm ± 0.2 mm and d_2 = 2 mm ± 0.2 mm, respectively, and which we call species 1 and 2. 
The ratio of the total volume occupied by the small spheres V_2 to that of large ones V_1 was V_2/V_1 = 1.5, and we considered the mean diameter as d = 0.4d_1 + 0.6d_2 = 2.4 mm (i.e., averaged by the mass proportion of each species). For the fluid, we used an oil for fluorescence microscopy from Cargille Laboratories, with dynamic viscosity μ = 651 cP, density ρ = 931 kg/m^3, and refractive index (for 532 nm) of 1.5127 at 23 ^∘C, which assured the desired shear while matching the refractive index of grains. The fluid viscosity was measured with a rheometer Anton Paar MCR 102, showing that it was within 770 and 800 cSt during the experiments, and the room temperature was 22 ^∘C ± 1 ^∘C during all tests. Tables showing the composition used in each test and microscopy images of the used grains are available in the supplementary material. With that, a liquid film above the granular bed, with height 5.7 mm ≤ h_f ≤ 6.8 mm, was sheared by the rotating lid at a constant angular velocity Ω, creating a laminar Couette flow. The angular velocities Ω varied within 3 and 6 rpm, corresponding to lid velocities at its centerline of 53 mm/s ≤ U_lid ≤ 106 mm/s, shear rates of 8.1 s^-1 ≤ γ̇ = U_lid/h_f ≤ 19.5 s^-1, and Shields numbers of 0.14 ≤ θ ≤ 0.35, where θ = μγ̇ ((ρ_s - ρ )g d)^-1, g = |g⃗| being the modulus of the acceleration of gravity. The critical value of the Shields number, θ_c, was fixed at 0.1 (estimated from values found in the literature <cit.>). The Reynolds number based on the fluid height, Re = ρ U_lid h_f μ^-1, was within 0.46 and 0.92, and the Reynolds number based on the mean grain diameter, Re_p = ργ̇ d^2 μ^-1, within 0.05 and 0.23. The ratio between the grain and fluid densities was S = ρ_s / ρ = 2.7. Prior to each test, the upper lid was rotated at Ω = 25 rpm (θ = 1.5) during 60 seconds in order to suspend all the grains with the exception of the bottom-most layers, followed by a rest period of 5 minutes for the grains to settle. One continuous 0.2 W laser head emitting at 532 nm was mounted over and another one below the flume, both generating a vertical plane traversing the bed and forming a single laser sheet (approximately 1 mm thick). We used two lasers for having a regular distribution of light through the bed (the bed being lighted from both its top and bottom). A digital camera with a lens of 18–140 mm focal distance and F2.8 maximum aperture was mounted with a perpendicular view to the laser sheet. The camera was of complementary metal-oxide-semiconductor (CMOS) type with a maximum resolution of 20.9 Mpx for photographs and 1920 px × 1080 px at 60 Hz for movies. The regions of interest (ROIs) were set at 1920 px × 940 px for the movies and 2780 px × 1410 px for photographs, for a field of view of 60 mm × 30 mm. Movies were recorded at 30 Hz for the first 40 or 80 min of experiments in order to capture segregation within the bedload layer during the very first stages of the flow, and images were acquired at 0.05 Hz during 140 hours in order to sample the slow segregation and compaction in the solid-like region. In addition, movies were recorded at 30 Hz for 10 min every 4 hours in order to accurately measure the changes that occur between the bedload and solid-like layers. Afterward, the movies and images were processed by a code written in the course of this work. Movies showing the evolution of the bed are available in the supplementary material, and the image-processing codes and images are available in an open repository <cit.>.
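As a quick worked example of the dimensionless groups defined above, the short script below evaluates the shear rate, the Shields number, both Reynolds numbers and the density ratio from nominal mid-range values of the quantities quoted in this section. It is our own illustration for orientation only: the inputs are rounded nominal values, so the outputs simply fall within the reported ranges rather than reproducing any specific test.

# Nominal experimental parameters from the text (SI units).
mu    = 0.651      # dynamic viscosity of the oil [Pa s] (651 cP)
rho   = 931.0      # fluid density [kg/m^3]
rho_s = 2500.0     # grain density [kg/m^3]
d     = 2.4e-3     # mass-weighted mean grain diameter [m]
g     = 9.81       # gravitational acceleration [m/s^2]
h_f   = 6.0e-3     # height of the sheared liquid film [m] (5.7-6.8 mm)
U_lid = 0.080      # lid velocity at the centerline [m/s] (53-106 mm/s)

gamma_dot = U_lid / h_f                              # shear rate [1/s]
theta = mu * gamma_dot / ((rho_s - rho) * g * d)     # Shields number
Re    = rho * U_lid * h_f / mu                       # Reynolds number (fluid film)
Re_p  = rho * gamma_dot * d**2 / mu                  # particle Reynolds number
S     = rho_s / rho                                  # density ratio

print(f"gamma_dot = {gamma_dot:.1f} 1/s, theta = {theta:.2f}, "
      f"Re = {Re:.2f}, Re_p = {Re_p:.2f}, S = {S:.2f}")

With these mid-range inputs the script gives a shear rate of about 13 s^-1, θ of about 0.23 (roughly 2.3 times the adopted critical value of 0.1), Re of about 0.7, Re_p of about 0.1 and S of about 2.7, all consistent with the ranges reported above.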
More details about particle detection and computation of velocities, packing fraction and strain are available in the supplementary material. § RESULTSAs soon as the lid begins moving, the fluid entrains grains into motion, with grains near the surface moving as bedload while those below move as creep (with velocities much smaller than those of bedload grains). With the fluid and velocities used, topmost grains of the bedload layer moved by rolling and sliding over other grains, and there was no grain in suspension. By processing the acquired images and movies, we computed spatio-temporal averages of the packing fraction < ϕ>, longitudinal velocity < V > and strain < ε> within the bed. The time averages were computed within specific intervals (as indicated in the following) and space averages were computed only in the longitudinal direction (not in the vertical, unless otherwise specified), so that we ended with vertical profiles of < ϕ>, < V > and < ε> for different applied stresses. For example, Fig. <ref> shows < ϕ>, < V > and < ε> (with the bed as background) for θ / θ_c = 3.5. We note the existence of oscillations with wavelength of the order of d in the vertical profiles of < ϕ>, which are due to the settling of particles in layers <cit.>. Details of the computations of < ϕ>, < V > and < ε> are available in the supplementary material.Based on the profiles of < ϕ> and < V >, we determined the regions where creeping and bedload take place by computing the vertical positions z_c and z_s. The position z_s is defined as that where < ϕ> = 1/2 < ϕ_sat>, so that for z > z_s the concentration of grains is low and bedload vanishes, while z_c is the position where a kink takes place in the < V > profiles, separating the regions where creep (z < z_c) and bedload (z_c < z < z_s) occur <cit.>. In our experiments, this kink (and thus z_c) always corresponded to the position where the viscous number I_v is approximately 2 × 10^-8.The viscous number I_v is the ratio between the microscopic and macroscopic timescales <cit.> applied to viscous flows, I_v = μγ̇/P_p, where P_p is the confinement pressure. This pressure decreases with height, being the result of the load of material above the considered height z. We computed P_p as in Houssais et al. <cit.>, P_p = ( ρ_s - ρ) g [ ∀_s/A_cont +∫_z^∞< ϕ> dz ] , where ∀_s is the volume of one grain (computed using the mean diameter d) and A_cont is the characteristic surface of contact between a typical topmost grain and the bed surface. We proceeded as in Houssais et al. <cit.> and considered that ∀_s / A_cont is equal to an integration constant α = 0.1d. Finally, the effective viscosity μ_eff is computed by Eq. <ref>, μ_eff = τ/γ̇, where τ is the applied stress.Figures <ref>a–<ref>e show vertical profiles of space-time averaged packing fraction < ϕ>, grain velocity < V >, confinement pressure P_p, viscous number I_v, and effective viscosity μ_eff, for different Shields numbers θ. The profiles of < V > show that velocities are higher on the bed surface (z/d = 10-12), decrease relatively fast with depth in the bedload layer (7-8 ≤ z/d < 10-12), and have much lower values (5 orders of magnitude lower than on the bed surface) and a smoother decrease with depth in the creep layer (z/d < 7-8), a kink existing in the transition from creep to bedload. This kink occurs at a height z where I_v ≈ 2 × 10^-8, as can be seen in Fig. <ref>d. 
This figure shows that I_v is maximum at the bed surface, decreases strongly with depth in the bedload layer and smoothly in the creep layer, with the kink occurring at z = z_c. For the packing fraction < ϕ>, we observe a fast increase with depth in the bedload layer, with an average constant value in the creeping layer. As noted for Fig. <ref>, oscillations with a wavelength of the order of d are present, which are due to the settling of particles in layers. On the bottom, < ϕ> tends to zero since the contact area between the spherical particles and the channel wall is very small. The effective viscosity μ_eff increases with depth, from values of the order of 10μ at the bed surface to 10^7 μ at z = 6-7, from which depth it remains constant until reaching the bottom. This indicates a solid-like behavior in the creeping layer. Finally, the pressure P_p is roughly constant for 10 ≤ z/d < 12 (top region of the bedload layer), and increases with depth for z/d < 10. Those results are in agreement with the experiments of Houssais et al. <cit.>, carried out with much lighter grains (S = 1.1) than our experiments (S = 2.7). For that reason, the magnitude of pressures at the bottom of the bed are two order of magnitude higher in our experiments when compared with those in Houssais et al. <cit.>.From the ensemble of experiments, we observed particle segregation and strain hardening, which we discuss next.§.§ SegregationWe begin with the segregation, for which we followed the motion of the large particles appearing in the recorded images (with an accuracy of approximately 0.005 mm, by using subpixel methods). We assigned a label to each one of those particles and tracked them along the movie frames and photographs. For example, Fig. <ref>a shows the trajectories of large particles that were segregating during the first 20 min of tests and Fig. <ref>b for t = 20 min to 140 h, both for θ / θ_c = 3.5 (multimedia available online). We observe that displacements during segregation are much higher in the bedload layer when compared with the creep layer (corroborating our computations of z_c). We also notice that most of segregation, given by the vertical motion of large particles, occurs within the bedload layer, and that the intensity of segregation is much higher during the first 20 minutes than during the next 139h 40 min.In order to quantify the intensity of displacements in the vertical direction and the regions where large grains move, we computed the Mean Squared Displacement <cit.> (MSD) of large particles, MSD(Δt) = 1/N∑^N[ z(t + Δt) - z(t) ] ^2 , where N is the number of averaged points, Δt is the interval for a given MSD computation, and MSD(Δt) corresponds to the area visited by the considered particle during the interval Δt. In addition to mean distances traveled by the considered particle, the curves of MSD as functions of Δt inform about regions where the particle is advected, moves by pure diffusion, or is confined. Typically, this kind of plot is curved upwards in case of advection (superdiffusion), is a straight line in case of pure diffusion, and is curved downwards in case of confinement (subdiffusion) <cit.>. We note that MSD is used here as an analogy for hints about the behavior of our system, MSD being typically used for more homogeneous systems with smaller particles. MSD was used with granular beds in previous works for the same purpose <cit.>, but care must be taken.Figure <ref> shows the MSD as a function of Δt for the large particles, Fig. 
<ref>a corresponding to θ / θ_c = 1.4 from t = 0 to 20 min, Fig. <ref>b to θ / θ_c = 1.4 from t = 20 min to 140 h, Fig. <ref>c to θ / θ_c = 3.5 from t = 0 to 20 min, and Fig. <ref>d to θ / θ_c = 3.5 from t = 20 min to 140 h. These graphics show that the advection of large particles occurs exclusively in the bedload layer and are more intense during the first 20 minutes, while pure diffusion is seen to occur in the creep layer close to the limit with the bedload layer, being more intense during the first 20 min, but also occurring considerably at later times. In the lower region of the creep layer (farther from the bedload boundary), large particles are seen to be confined.Figure <ref>a shows the vertical position z of the large particles that segregated (or were deeper in the bed) along time, for θ / θ_c = 3.5 (graphics in dimensional form and for the other shear stresses are available in the supplementary material). In Fig. <ref>a, the time is normalized by t_shear = S/γ̇. We observe that the segregation times (in logarithmic scale in the graphic) differ according to the vertical position the particle is initially at. Therefore, Fig. <ref>a shows in different colors and line types the curves corresponding to particles originally in different regions: 0.85z_s ≤ ζ_1 ≤ z_s; 0.75z_s ≤ ζ_2 < 0.85z_s; z_c ≤ ζ_3 < 0.75z_s; and 0.95z_c ≤ ζ_4 < z_c. In terms of order of magnitude, the large particles in the ζ_1 region segregate until t/t_shear ∼ 10^2 (within the first minute), those in the ζ_2 region within 10^2 and 10^3 (within 1 and 10 minutes), and those in the ζ_3 region within 10^3 and 10^4 (within 10 and 100 minutes), these three regions corresponding to the bedload layer. Particles originally in the ζ_4 region, which corresponds to the upper part of the creep layer (in the vicinity of the bedload layer), segregate within t/t_shear ∼ 10^5 and 10^6 (within 100 and 1000 minutes). Below this region, we have not observed segregation within the duration of our experiments. Figure <ref>b shows the initial position z_0 of the large particles of Fig. <ref>a (with origin at z_c) normalized by d as a function of their vertical displacement ( z_e - z_0 )/d, for different shear stresses, where z_e is the final position of particles. During the experiments, we have not remarked large particles moving distances of the order of their radius in the transverse direction, so that they did not leave completely the laser plane (i.e., information for identifying z_e was complete). Figure <ref>b also indicates the regions where segregation has effectively occurred (upwards motion of large particles) and those in which compaction has taken place (collective downward motion of all particles). Since the origin of the final position (ordinate) is z_c, we notice that most of compaction takes place in the creep layer, while most of segregation occurs in the bedload layer. In addition, we observe that segregation is stronger for higher shear stresses while compaction is stronger for lower shear stresses. Figure <ref>c shows the vertical displacements of the large particles of Fig. <ref>a ( z_e - z_0 )/d as a function of the instants when they were last detected t_e/t_exp, where t_exp is the duration of each experiment. We note that the data concentration at t_e/t_exp = 1 is due to particles that were detected until the end of experiments.Finally, Fig. 
<ref>a shows the displaced position of the large particles that segregated (symbols), ( z_ - z_min)/d, and fittings (black lines) of the corresponding averages as functions of t/t_shear, for each region where segregation takes place and different shear stresses. In this displaced coordinate, z_min is the lowest position reached by the particle (due to an increase in bed compaction) before start rising, and the fittings follow exponential functions (as proposed by Zhou et al. <cit.> for the degree of segregation). Figure <ref>b shows the number of segregated particles N normalized by the total number of large particles N_T identified in the images as a function of t/t_shear, for different shear stresses (segregation rates can be obtained by taking the time derivative of those curves, dN/dt, and are available in the supplementary material). From Fig. <ref>a, we observe a consistent behavior for the shear stresses tested, with similar segregation curves for each depth (ζ_1 to ζ_4) and with no clear dependency on θ, although varying with it. The timescales to complete the segregation in each region are the same observed above for Fig. <ref>a. Figure <ref>b shows that the number of segregated particles vary with θ, with periods of high slope alternating with others of low slope in the graphic. Although no clear tendency with θ can be found, the general behavior of curves is similar. The curves have initially a high slope, then a significant decrease in the slope occurs in a time that depends on the shear stress, the slope increases again to approximately the previous values after some time has elapsed, and, finally, by the end of the experiments, the slope decreases again. We do not have an explanation for these oscillations, but they could vary with the characteristic time of segregation of each region. For example, for θ / θ_c = 3.5 a high slope is observed in Fig.<ref>b for t/t_shear ∼ 10^2 (t ∼ 1 min), then a small slope for t/t_shear ∼ 10^3 (t ∼ 10 min), a high slope again for t/t_shear ∼ 10^4 (t ∼ 10^2 min), and a low slope for t/t_shear ∼ 10^5–10^6 (t ∼ 10^3 min). This remains, however, to be investigated further. Lastly, the inset in Fig. <ref>b shows N/N_t as a function of t_seg/t_exp, from which we observe that a great part of segregation occurs in the very beginning of experiments (within the first 1% of the total time), so that before 0.5 t_exp more than 90% of larger grains have already segregated. To summarize, we found a characteristic time for the upward motion (segregation) of large particles (Figs. <ref>a and <ref>a) which depends on the depth within the bed: * t_1/t_shear = 10^2 (t_1 = 1 min), for 0.85z_s ≤ ζ_1 ≤ z_s; * t_2/t_shear = 10^3 (t_2 = 10 min), for 0.75z_s ≤ ζ_2 < 0.85z_s; * t_3/t_shear = 10^4 (t_3 = 10^2 min), for z_c ≤ ζ_3 < 0.75z_s; * t_4/t_shear = 10^5–10^6 (t_3 = 10^3 min), for 0.95z_c ≤ ζ_4 < z_c. The first three regions (ζ_1 to ζ_3) correspond to the bedload layer, where large particles move mainly by advection (Fig. <ref>), while the the last one (ζ_4) corresponds to the upmost part of the creep layer (within 95% of its top), where particles move mainly by diffusion (Fig. <ref>). In this top layer, creep seems strongly influenced by the shear caused by the above bedload layer. Below in the creep layer (z < 0.95z_c), we have not observed any upward motion of large particles, but, instead, a collective downward motion due to bed compaction (Fig. <ref>). 
This increase in bed compaction results in strain hardening and decrease in the the granular mobility <cit.>, which are investigated next (in Subsection <ref>). In this picture, large particles in the upmost part of the creep layer move slowly by diffusion until reaching the creep-bedload transition zone (z = z_c), from which height they are vertically advected by the rapid motion of surrounding particles, the vertical velocity increasing with height.§.§ Strain hardening Figures <ref>a-d (multimedia available online) show the longitudinal component of the instantaneous velocity measured for all detected particles and the entire duration of each test as a function of the bed height z, for different shear stresses (Figs. <ref>a-<ref>d correspond to θ / θ_c from 1.4 to 3.5, respectively). We observe that, even considering the entire duration of tests, the bed behavior is consistent, with a region where velocity gradients are higher and which corresponds to the bedload layer, and another one where gradients are much lower and which corresponds to the creep layer. In particular, we observe that by increasing the shear stress, the bedload layer increases, which is the layer where bed dilation occurs, in agreement with previous works <cit.>. This is also the layer where the vertical advection of large particles takes place, so that segregation is stronger. The magnitude of longitudinal velocities also increases with the shear stress, going from roughly 0.25 mm/s for the topmost grains when θ / θ_c = 1.4 to 2.5 mm/s when θ / θ_c = 3.5, contributing then to higher segregation rates. On the contrary, the creep layer shortens with increasing the shear stress, with average velocities of the order of 10^-6–10^-5 mm/s. This is the layer where compaction takes place over the time, and both isotropic and anisotropic hardenings occur <cit.>. Figure <ref>e presents the heights z_c and z_s as functions of θ / θ_c, showing that the bedload and creep layers increase and decrease, respectively, linearly with the shear stress. At the leading order, this reflects a quadratic variation of the bedload flow rate with the applied shear stress if both the particles' velocities and bedload height vary linearly with θ. This is, indeed, a reasonable picture for laminar viscous flows in which both Re and Re_p < 1, Charru et al. <cit.> having shown that V_x ∼ θ and that the bedload flow rate varies with θ ^2 when θ is close to θ_c (explaining then the linear variation of the bedload layer). However, nonlinearities due to bidispersity and deviations from the critical conditions are expected. Figures <ref>a-d show the space-time diagrams of the longitudinally averaged strain ε for θ / θ_c from 1.4 to 3.5, respectively. We observe that, as the shear stress increases, the height of the creep layer decreases while the strain increases. In order to further investigate that, we computed the longitudinal-time averages of the strain, < ε>, which we plot in Fig. <ref>e as a function of height z/z_c for θ / θ_c from 1.4 to 3.5. We notice two distinct regions: a region below z/z_c ≈ 0.5, in which the levels of strain are relatively low and roughly independent of θ, and a region above z/z_c ≈ 0.5, in which the strain increases with the shear stress. We also took the maximum values of < ε>, represented by < ε>_max, which we plot in Fig. <ref>f. From this figure, it is possible to determine the time that each applied stress takes to cause a maximum strain of approximately d, which is t_ε/t_shear ∼ 10^6 in dimensionless form. 
In dimensional terms (graphics available in the supplementary material), the characteristic time decreases with the shear stress, being t_ε ∼ 10^4 min for θ / θ_c = 1.4, t_ε ∼ 10^3 min for θ / θ_c = 2.0, t_ε ∼ 10^2 min for θ / θ_c = 2.7, and t_ε ∼ 10 min for θ / θ_c = 3.5. Finally, Fig. <ref>g shows a map in the θ / θ_c vs. t/t_shear space of the integrals of the < ε> profiles, ϵ, divided by d. With this map, we can evaluate the regions in which either the fast or the slow evolution of the bed takes place for different shear stresses. Dimensional forms of the graphics are available in the supplementary material. We also computed the deformation of the creep layer based on the difference between the current and previous positions of each particle, which we show in Fig. <ref>. Figures <ref>a-d show the space-time diagrams of the longitudinally averaged deformation Δε for θ / θ_c from 1.4 to 3.5, from which we can observe that: (i) deformations are higher at the beginning of tests (t/t_shear ∼ 10^2, in dimensional terms t ≲ 1 min); (ii) as the shear stress increases, large deformations become concentrated at t/t_shear ≈ 5 × 10^2 (t ≈ 1 min); and (iii) as the shear stress increases, the depth reached by large deformations also increases. In this way, for θ / θ_c = 1.4 higher deformations are distributed within 4 ⪅ z/d ⪅ 9 and t/t_shear ⪅ 5 × 10^2, while for θ / θ_c = 3.5 they occur within 2 ⪅ z/d ⪅ 7 and 2 × 10^2 ⪅ t/t_shear ⪅ 3 × 10^2. Therefore, higher shear stresses deform deeper regions in the bed during shorter times. However, the maxima in all diagrams occur at t/t_shear ∼ 10^2 (t ≈ 1 min). Figure <ref>e shows the time variation of the longitudinal-vertical average of deformations, Δε, for θ / θ_c from 1.4 to 3.5. We notice that Δε has a peak at the beginning of motion at t/t_shear = t_Δε/t_shear ∼ 10^2 (t_Δε ∼ 1 min) for all shear stresses tested, and that the peak tends to increase with the shear stress, corroborating the observations made for Figs. <ref>a-d. Summarizing, we found one characteristic time for the deformation, t_Δε, which is independent of the applied stress both in dimensionless and dimensional terms, * t_Δε/t_shear ∼ 10^2 (t_Δε ∼ 1 min), for any θ / θ_c, and another one for the strain, t_ε, which in dimensional form depends on the shear stress. For strains corresponding to maximum displacements equal to d: * t_ε/t_shear ∼ 10^6, for any θ / θ_c * t_ε ∼ 10^4 min for θ / θ_c = 1.4; * t_ε ∼ 10^3 min for θ / θ_c = 2.0; * t_ε ∼ 10^2 min for θ / θ_c = 2.7; * t_ε ∼ 10 min for θ / θ_c = 3.5. § CONCLUSIONS In this paper, we investigated the evolution of a bidisperse bed consisting of heavy grains (S = 2.7) sheared by a viscous liquid. For the range of shear stresses imposed, the bed developed a bedload layer on top of a creep layer, for which we found that: (i) there exist diffusive, advective and constrained regions for the motion of larger particles; (ii) most of segregation occurs during the very first stages of the flow (within the first 10 min, or 10^3 when normalized by t_shear = S/γ̇); (iii) segregation occurs within the bedload layer and in the 5% topmost region of the creep layer; (iv) there exist four regions of increasing depth ζ_1 to ζ_4 where the characteristic times for segregation are t_1/t_shear = 10^2, t_2/t_shear = 10^3, t_3/t_shear = 10^4, and t_4/t_shear = 10^5–10^6 (t_1 = 1 min, t_2 = 10 min, t_3 = 10^2 min, and t_4 = 10^3 min).
The first three regions are within the bedload layer and the last one corresponds to the top of the creep layer; (v) bed hardening becomes stronger while bedload and creep weaken along time; (vi) the characteristic time of bed hardening in terms of deformation is t_Δε/t_shear ∼ 10^2 (t_Δε ∼ 1 min), corresponding to the time when a huge peak in deformation occurs for all shear stresses; (vii) the characteristic time of bed hardening in terms of strain (corresponding to maximum values equal to d) is t_ε/t_shear ∼ 10^6, and in dimensional form varies with the shear stress, going from t_ε ∼ 10 min to t_ε ∼ 10^4 min for θ / θ_c decreasing from 3.5 to 1.4, respectively. Our results shed light on the complex motion of sheared beds found in nature, such as river beds and creeping lands, revealing the different layers and characteristic times for both segregation and hardening. In particular, the results can be useful for predicting the segregation in polydisperse beds (leading to bed armoring), the time for compaction of lower layers (promoting bed hardening), and the time for the rearrangement of grains within the bed (which hardens the bed while keeping memory effect <cit.>). § AUTHOR DECLARATIONS Conflict of Interest: The authors have no conflicts to disclose. § SUPPLEMENTARY MATERIAL See the supplementary material for a brief description of the employed methods, the layout of the experimental setup, microscopy images of the employed grains, additional tables and graphics, and movies of sheared beds. § DATA AVAILABILITY The data that support the findings of this study are openly available in Mendeley Data at http://dx.doi.org/10.17632/r96kpf7ytb <cit.>. Erick M. Franklin and Fernando D. Cúñez are grateful to the São Paulo Research Foundation – FAPESP (Grant Nos. 2016/18189-0, 2018/14981-7) for the financial support provided. Jaime O. Gonzalez would like to thank the Petroleum Department of the Escuela Politécnica Nacional, Quito, Ecuador. The authors are also grateful to the Conselho Nacional de Desenvolvimento Científico e Tecnológico – CNPq (Grant No. 405512/2022-8) for the financial support provided, and to Danilo S. Borges for the help with a tracking algorithm.
http://arxiv.org/abs/2310.17782v1
{ "authors": [ "Jaime Oswaldo Gonzalez Maya", "Fernando David Cúñez Benalcázar", "Erick de Moraes Franklin" ], "categories": [ "physics.flu-dyn", "cond-mat.soft", "physics.geo-ph" ], "primary_category": "physics.flu-dyn", "published": "20231026211625", "title": "Bidisperse beds sheared by viscous fluids: Grain segregation and bed hardening" }
[1] Luigi Viola, [email protected], https://orcid.org/0000-0001-7913-5685 [2] Saeed Nordin, [email protected], https://orcid.org/0000-0003-1823-9653 [1] Daniel Dotta, [email protected], https://orcid.org/0000-0002-3287-172X [2] Mohammad Reza Hesamzadeh [cor1], [email protected], https://orcid.org/0000-0002-9998-9773 [3] Ross Baldick, [email protected], https://orcid.org/0000-0003-2783-7321 [4] Damian Flynn, [email protected], https://orcid.org/0000-0003-4638-9333 [1] University of Campinas, Av. Albert Einstein, 400, Campinas-SP, Brazil [2] KTH Royal Institute of Technology, 100 44 Stockholm, Sweden [3] University of Texas at Austin, Austin, TX 78705, USA [4] University College Dublin, 4 Dublin, Ireland [cor1] Corresponding author. The expansion of variable generation has driven a transition toward a 100% non-fossil power system. New system needs are challenging system stability and suggesting the need for a redesign of the ancillary service (AS) markets. This paper presents a comprehensive and broad review for industrial practitioners and academic researchers regarding the challenges and potential solutions to accommodate high shares of variable renewable energy (VRE) generation. We detail the main drivers enabling the energy transition and facilitating the provision of ASs. A systematic review of the United States and European AS markets is conducted. We clearly organize the main ASs in a standard taxonomy, identifying current practices and initiatives to support the increasing VRE share. Furthermore, we envision the future of modern AS markets, proposing potential solutions for some remaining fundamental technical and market design challenges. Ancillary services, flexibility, inverter-based resources, market design, and stability. § ACKNOWLEDGEMENTS The National Agency of Electric Energy (ANEEL) research and development program has supported this work, with financial support provided by Engie (under grant PD-00403-0053/2021) and the National Council for Scientific and Technological Development (CNPq). § ABBREVIATIONS aFRR, Automatic Frequency Restoration Reserve; AGC, Automatic Generation Control; AI, Artificial Intelligence; AS, Ancillary Service; BESS, Battery Energy Storage Systems; BRP, Balance Responsible Party; BSC, Black Start Capability; BSP, Balance Service Provider; DAM, Day-Ahead Market; DER, Distributed Energy Resource; DG, Distributed Generation; DRR, Dynamic Reactive Response; DSO, Distribution System Operator; ESS, Energy Storage Systems; EU, Europe; EV, Electric Vehicles; FACTS, Flexible AC Transmission System; FCR, Frequency Containment Reserve; FFR, Fast Frequency Response; FPFAPR, Fast Post-Fault Active Power Recovery; FR, Frequency Regulation; FRC, Flexible Ramping Capability; GFL, Grid-Following; GFM, Grid-Forming; HVDC, High Voltage Direct Current; IBR, Inverter-Based Resource; ICT, Information and Communication Technology; IDM, Intraday Market; IR, Inertial Response; ISO, Independent System Operator; LMP, Locational Marginal Prices; LOC, Lost Opportunity Cost; mFRR, Manual Frequency Restoration Reserve; MP, Marginal Pricing; NSR, Non-Spinning Reserve; ORDC, Operating Reserve Demand Curve; PBP, Pay-as-Bid; PFC, Primary Frequency Control; PFR, Primary Frequency Response; PHS, Pumped Hydropower Storage; PLL, Phase-Lock Loop; PMU, Phasor Measurement Unit; PRF, Primary Frequency Response; PV, Photovoltaic; RoCoF, Rate of Change of Frequency; RP, Regulated Price; RR, Replacement Reserves; RTM, Real-Time Market; RUC, Reliability Unit Commitment; SC, Synchronous Condensers; SCED, Security-Constrained Economic Dispatch; SCUC,
Security-Constrained Unit Commitment; SFC, Secondary Frequency Control; SG,Synchronous Generator; SIR, Synchronous Inertial Response; SO, System Operator;SR, Spinning Reserve;SSRR, Steady-State Reactive Response; STATCOM, Static Synchronous Compensators;SVC, Static VAR Compensators; TFC, Tertiary Frequency Control;TSO, Transmission System Operator; UFLS, Under-Frequency Load-Shedding;UPS, Uninterruptible Power Supply; US, United States; VIR, Virtual Inertial Response;VPP, Virtual Power Plant;VRE, Variable Renewable Energy; § INTRODUCTIONPower systems have witnessed a growing share of variable renewable energy (VRE) in the generation mix. This process is motivated by climate change concerns, aiming to reduce carbon emissions to limit global average temperature rises. By 2050, the United States (US) will reach 44% of renewable electricity supply <cit.>. Currently, the regions operated by CAISO (California) and ERCOT (Texas) present significant solar and wind shares, respectively. The target in Europe (EU) is to reach 32% renewable electricity supply in 2030 <cit.>. Great Britain (GB), Ireland (all-island), Germany, and the Nordic power system have dominant renewable generation participation. The path toward a 100% non-fossil future includes a mix of VRE (wind and solar), other renewables (hydropower, ocean, and tidal), low-carbon sources such as biomass, geothermal, and nuclear power plants, and energy storage (pumped hydropower plants, batteries, hydrogen systems). Even if the average generation share of VRE in a power system is less than 100%, power system stability problems may occur in a 100% non-fossil future due to high VRE instantaneous share <cit.>.Thus, traditional assumptions for power system operation based on synchronous generators (SGs) capabilities must be revisited, particularly the sufficient frequency and voltage support, and fuel availability (fossil-fuels-based units). A new set of system needs arises to preserve system stability and provide flexible operation when fewer synchronous resources are available. Inertia and fast response reserves are critical to maintain frequency stability. Voltage stability is impacted due to the scarcity of steady-state reactive power capability in regions with weaker transmission networks. The increased electrical distance between the remaining synchronous units requires enhanced dynamic reactive support. Additionally, ramping capability from flexible resources is essential to manage the variability and uncertainty of VRE <cit.>.Historically, power systems have comprised large and centralized dispatchable SGs, mainly fossil-fueled (from coal, oil, and natural gas) or nuclear-, and hydro-powered units. System inertia and strength (measured by the short circuit ratio) are sufficiently supplied as a byproduct of SG operation. Thermal generation is capacity-constrained, has significant variable costs, and baseload units (coal-fired and nuclear plants) are inflexible. System operators (SOs) generally consider a unidirectional power flow from transmission through the distribution system, which is modeled as an aggregated load to reduce computational complexity. Distribution system operators (DSOs) may have limited ability to operate the network, relying on the capacity defined at the planning stage (i.e. fit-and-forget approach). 
In contrast to historical practice, the transition toward a 100% non-fossil power system is boosted mostly by VRE generation and energy storage systems (ESS) interfaced by electronic inverters and located throughout the transmission and distribution system. Although centrally dispatched at the transmission level, the dispersion of non-dispatchable inverter-based resources (IBRs) through the distribution system tends to decentralize the generation. VRE is constrained by the availability of primary energy sources (i.e. energy-constrained) with very low variable costs (i.e. near zero-marginal cost). High shares of wind and solar generation result in a variable and uncertain net load profile <cit.> with steeper ramps and deeper valleys, requiring the dispatch of flexible resources. VRE displaces traditional generation in the merit-order dispatch and, combined with the inability of IBRs to provide an inertial response, reduces system inertia levels, shortening the response time to disturbances. Thus, fast-acting reserves must be dispatched to arrest the frequency drop following a disturbance, such as a generator tripping offline <cit.>. Also, tripping of online synchronous resources reduces reactive power capability and imposes new requirements to preserve voltage stability in response to small imbalances or contingencies. Increasing shares of distributed generation (DG) and other energy systems, such as batteries, requires enhanced load modeling to improve the visibility of distributed energy resources (DERs) in power system operation. Insufficient modeling of DERs in power flow studies may result in inefficient management of reactive power and congestion, also compromising contingency plans. Active operation of the distribution systems and close coordination between SOs and DSOs may unlock flexible resources, such as electric vehicles (EVs), heat pumps, dispersed generation and storage, improving system security.Ancillary services (ASs) are crucial to help SOs in frequency response, voltage control, and system restoration to robustly ensure system stability and flexibility. The impact of the new system needs is different in each power system and depends on the size of the system (small or large), level of interconnection (strong or weak), underlying flexibility of the existing portfolio, the available infrastructure of transmission (presence of bottlenecks), technological status (share of IBRs and DERs), regulatory policy etc. A tailor-made redesign of the AS markets must encourage eligible providers to supply the identified system needs, ensuring sufficient revenue and preserving investment signals for expansion in flexible capacity. Innovative technologies such as inverters, ESS, high voltage direct current (HVDC) grids, and information and communication technology (ICT) infrastructure, are paving the path toward a 100% non-fossil future. Mathematical models are being developed to address the impacts of uncertainty and variability of VRE through forecasting techniques, and load modeling aims to capture the behavior of emerging loads. Financial incentives through price- or incentive-based programs are encouraging demand response and empowering consumers to unlock flexible resources. Jointly, innovative technologies, mathematical models, and demand response are key elements of a successful energy transition. These energy transition enablers direct the business plan and investments of AS providers. 
Market players can strategically place or manage innovative technologies at the transmission level to provide ASs by, for example, linking VRE generation with ESS or interconnecting regions with HVDC grids. ICT infrastructure should be provided at the distribution level to adequately integrate DG, ESS, and EVs. Furthermore, ASs from heat, natural gas, hydrogen, and EVs may provide additional flexibility, promoting the coupling of the power system with other sectors.

The advances in US and EU power system operation practices and market designs serve as a reference point for other power systems due to the maturity of these wholesale markets. Assessing the current technical barriers to improving system stability and the market design flaws behind inefficient economic signals allows practitioners to envision solutions and design modern AS markets under high shares of VRE. Also, investors can find opportunities to promote new business models. This paper provides a comprehensive review of more than two hundred papers (including book chapters, reports, and technical manuals) concerning the market design challenges in the US and EU, considering the transition toward a 100% non-fossil power system. Our paper contributes to the relevant literature as follows.

First, we contextualize how ASs have evolved and moved away from the synchronous-based power system paradigm. We clearly state the role of energy transition enablers in supporting AS provision, addressing fundamental technological and modeling changes, and new demand response strategies. Second, we propose a holistic view of current US and EU ancillary service market designs through a systematic review of all US independent system operators (ISOs) and four relevant European power systems, considering their respective transmission system operators (TSOs). To make the proposed taxonomy easy to use, the AS types are conveniently summarized and categorized into frequency-related, non-frequency-related, and recently defined ASs. Third, we identify pivotal technological and market design barriers based on US and EU ancillary service market experiences. Subsequently, we propose potential solutions to enable secure and flexible power system operation under high VRE shares and to overcome market design inefficiencies that discourage providers from engaging in the AS markets. These potential solutions will help SOs and regulators redesign the existing AS markets.

Table <ref> compares our paper with other relevant papers addressing ASs. No review was found that covers both existing and emerging ASs, systematically analyzes US and European challenges, and proposes potential solutions for modern AS markets in the context of the transition towards a 100% non-fossil future.

The authors in <cit.> present relevant AS market design issues regarding the existing ASs, as highlighted in Table <ref>; however, technical or market design challenges under high shares of VRE are not discussed. Among the papers that include a comprehensive discussion of emerging ASs, <cit.> focuses specifically on fast response reserves and <cit.> on ASs provided by EVs. Paper <cit.> only investigates the AS markets in Ireland/Northern Ireland. The authors in <cit.> provide worldwide experiences, but no systematic comparison of AS markets and current practices is conducted. Although <cit.> compares several power systems, the paper focuses only on power system balancing challenges.
Therefore, our paper fills an existing gap in the literature by drawing a line on the limits of some current AS markets design to accommodate the transition towards a 100% non-fossil power system and proposing potential redesign solutions.§ ANCILLARY SERVICES The definition of ASs is intrinsically associated with a system operator's fundamental duty of ensuring reliable power system operation. Because procuring electrical energy and capacity does not guarantee system security, SOs must acquire a set of auxiliary (or ancillary) services from capable providers to satisfy secure power system operation <cit.>. Firstly, we detail the operating requirements that define conventional ASs designed from the perspective of a synchronous-based power system. Subsequently, we contextualize the ongoing energy transition, highlighting its enablers that may favor the provision of ASs. We propose a sufficient definition and taxonomy for ASs while introducing a number of new recently defined services. Moreover, we introduce and discuss the key fundamentals for the design of efficient AS markets. §.§ Conventional Ancillary Services Two main groups define the conventional provision of ASs. The first comprises services to maintain the active power balance in normal operation and after a contingency, which are referred to hereafter as frequency-related ASs. The second includes services to ensure the reactive power balance and reserves to restore the power system after a significant contingency, which are referred to hereafter as non-frequency-related ASs. §.§.§ Frequency-Related Ancillary Services The inertial response (IR) and a hierarchical control scheme guarantee frequency equilibrium in steady-state operation. IR is an inherent reserve immediately released from synchronous machines after any active power imbalance between the generation and demand. This reserve is the stored rotational energy of the synchronous machines, which counteract rotational speed changes and is independent of the machine power output <cit.>. An SG, equipped with a governor, can automatically change its active power when the frequency limit exceeds a defined dead band during regular or random deviations from the supply and demand, performing primary frequency control (PFC). In practice, if a tight tolerance band was imposed by SOs for nominal frequency under normal operating conditions, frequency fluctuations generally will not trigger PFC <cit.>. To maintain the frequency close to the nominal value, restore scheduled power flows between interconnected control areas, and reduce the area control error, secondary frequency control (SFC) is essential <cit.>. SFC is centralized and can be performed manually or automatically. Online generators with automatic generation control (AGC) can quickly change their active power output in response to rapid imbalances (second-to-second through minute-to-minute). In contrast, SO instructions or manual changes in dispatch can manage slower fluctuations (intra- and inter-hour) <cit.>. Figure <ref> shows a frequency excursion after a large under-frequency event and the activation sequence of the hierarchical control for a large power system.IR dampens the sudden frequency fall and extends the available response time until PFC acts, avoiding the disconnection of loads by under-frequency load-shedding (UFLS) schemes <cit.>. The large frequency variation experienced after a disturbance is outside the governor dead band. 
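As a rough numerical illustration of the inertial response just described (unit data are assumed, not taken from any real system), the sketch below relates the stored rotational energy of the online synchronous units to the initial rate of change of frequency after a generation loss.

```python
# Initial rate of change of frequency after a generation loss, from the
# aggregate swing equation. Unit data are illustrative only.
f0 = 50.0  # Hz

# (rated MVA, inertia constant H in seconds) of the online synchronous units
online_units = [(500, 6.0), (400, 5.0), (300, 4.0), (250, 3.5)]

# Stored rotational (kinetic) energy available as inertial response, in MW*s
e_kinetic = sum(s_mva * h for s_mva, h in online_units)

delta_p = 350.0  # MW, largest credible infeed loss

rocof = delta_p * f0 / (2.0 * e_kinetic)   # Hz/s at the instant of the loss
print(f"stored rotational energy: {e_kinetic:.0f} MW*s")
print(f"initial frequency decay : {rocof:.2f} Hz/s")
```

Displacing any of these units in the dispatch lowers the stored energy and steepens the initial frequency decay, which is why inertia becomes a binding concern at high instantaneous VRE shares.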
PFC captures the frequency drop and stabilizes the frequency within an acceptable range. The remaining frequency deviation from the nominal value is corrected through the SFC. Generators that were following the AGC signal under normal operating conditions can momentarily turn off AGC contributing to power capacity <cit.>. Also, available capacity is manually activated from online and offline generating units <cit.>. Deploying the reserve capacity of generators by dispatching it to generate electricity then results in a deficit of remaining reserves that must be replenished by rescheduling the generating units. Tertiary frequency control (TFC) restores the power system to the pre-contingency status, preparing it for the next possible contingency <cit.>. Three important operational metrics to limit the system frequency are illustrated in Fig. <ref>. Immediately following the contingency event, the derivative of frequency to time defines the maximum rate of change of frequency (RoCoF). RoCoF is an operational metric that indicates how fast the frequency changes. The nadir occurs at the maximum frequency deviation (Δ f_max) point <cit.>, while the maximum allowed frequency deviation (Δ f_max^ss) sets the quasi-steady-state frequency, that is notionally the frequency reached after inertial and primary response but before secondary response activation, as shown in Fig. <ref>. These operational metrics help SOs define the necessary reserves and set protection schemes <cit.>.§.§.§ Non-Frequency-Related Ancillary Services Besides the system frequency, reactive power imbalances can be regulated by monitoring voltage variations. To operate seamlessly, electrical equipment needs to function within a narrow voltage band; otherwise, malfunctions and damage can occur. The supply of reactive power results in the consumption of generation and transmission resources. Since reactive power losses increase with distance, voltage control is location-constrained. Therefore, static and dynamic devices are installed at key buses in the power system to provide this AS <cit.>. Static devices, such as capacitors and reactors, help to regulate steady-state voltages. Dynamic devices can control the voltage output in response to voltage changes, such as flexible AC transmission system (FACTS) devices. The latter includes static VAR compensators (SVC) and static synchronous compensators (STATCOM). Notice that a synchronous generator continuously adjusts its reactive power to perform systemic voltage control. Also, synchronous condensers (SC) may be installed for an improved reactive compensation <cit.>.In the event of a blackout, power system restoration must be initiated as quickly as possible to minimize technical and economic losses. This task involves complex coordinated steps, commencing with the restart of the generators. In this case, the necessary steps include restarting appropriate resources rapidly without an external power supply, energizing transmission lines, and restarting other available generators <cit.>. Generally, relatively small power plants, such as hydroelectric power plants, pumped hydropower storage (PHS), and combustion turbines, with a battery or diesel generator to feed the auxiliaries of the main generator, start the restoration process <cit.>.§.§ Enabling the Energy Transition The transition to a 100% non-fossil power system is a challenging path, where the operating requirements of the power system must be attained under high shares of VRE. 
Existing synchronous resources allow the introduction of new technologies maintaining system stability. Innovative technologies are decarbonizing the power system and boosting the energy transition. Mathematical models should be created or adjusted to include a more accurate representation of the new system needs, and new demand response strategies should facilitate power system decentralization. This section highlights the main enablers of the energy transition capable of supporting the provision of ASs.§.§.§ Existing Synchronous Resources The evolution to an inverter-based paradigm is notably sustained by continuous experimentation in a changing power system. The capabilities of the existing synchronous resources allow a gradual introduction of new technologies to avoid system instabilities. For instance, Ireland/Northern Ireland is imposing a 75% limit for non-synchronous instantaneous generation penetration to maintain secure operation <cit.>. A minimum number of online synchronous units may be necessary to guarantee stable operation, considering the VRE-driven displacement of SGs. Synchronous condensers can mitigate the reduced system inertia and synchronizing torque levels, enhancing the dynamic voltage support and fault levels provision <cit.>. Considering adequate financial incentives, coal-fired power plant owners can also retrofit their units to improve operational flexibility, such as reducing their stable minimum power output to expand the operational range <cit.>. The unlocked flexibility of coal-fired plants is helpful in avoiding the curtailment of VRE generation if network bottlenecks constrain the power transfer.§.§.§ Innovative Technologies Emerging technologies which transform the traditional synchronous-based paradigm and promote increasing shares of VRE can be named innovative technologies. Next, we detail advances in these technologies, indicating promising developments to ensure secure and flexible operation. Power Electronics and ControlPower electronics are essential in converting and controlling raw VRE and deploying ESS, which are useful for managing load fluctuations. However, the electronic interface decouples variable speed wind turbines (VSWT) (particularly full converter type) and photovoltaic (PV) generators from the grid, preventing the natural transient response under a contingency. Consequently, they do not inherently provide IR, which leads to lower system inertia if conventional generators are displaced in the economic dispatch, such as in high instantaneous VRE share conditions. The reduction in system inertia leads to an increase in RoCoF and a lower frequency nadir if the largest contingency remains the same, thus requiring that other generators respond in a shorter time frame <cit.>.As of the writing of this paper, most IBRs are coupled to grid-following (GFL) inverters, acting as a controlled current source. A current control loop quickly changes the current output based on the angular reference from the phase-lock loop (PLL) control. The PLL estimates the instantaneous voltage phase angle by measuring the terminal voltage phasor of the inverter. Several control techniques, based on the frequency measurement, have been considered to enable a frequency response when GFL inverters are adopted <cit.>. The hidden inertia technique comprises a supplementary control that allows a VSWT to respond rapidly to frequency changes. 
After a disturbance, the power output of the wind turbine can be increased based on the frequency deviation, which slows down the turbine and enables the release of the hidden rotational energy from the rotating mass <cit.>. Alternatively, the fast power reserve approach aims to increase the power output by a constant percentage <cit.> or within a range (e.g., 5-10% as in <cit.>) of the nominal wind turbine power for a defined wind speed range. The overproduction period helps to limit the RoCoF; however, it is followed by an underproduction period due to operation below the maximum power point. Deloading of the VSWT or PV generation provides a reserve margin for PFC activation. A governor-like behavior is emulated through droop control programmed in the inverter to respond to frequency deviations by changing the active power proportionally, improving the frequency nadir <cit.>. In addition to frequency response, the power electronic interface can assist VSWTs and PV plants in voltage control. Under normal operating conditions, a VSWT supports reactive power by controlling the voltage of a specific bus or setting a fixed power factor. However, these control strategies can be insufficient if the power plant is placed in a weak part of the grid in order to maximize the use of wind or solar resources. Also, system stability should be ensured during transient periods, and thus additional control strategies and FACTS devices, such as STATCOMs and SVCs, can ensure the voltage ride-through requirements imposed in grid codes <cit.>. Solar PV panels inherently produce DC power and cannot deliver reactive power. Nevertheless, modern inverters can absorb or inject reactive power from the grid, performing voltage control <cit.>. In <cit.>, PV inverters are programmed to act as a STATCOM during both night and day to avoid voltage instabilities. Voltage ride-through capability is achieved in VSWTs through improved control techniques or the connection of external FACTS devices <cit.>.

Inverters provide a low short-circuit fault current contribution, reducing system strength. Additionally, an inherent delay in processing the signal from the PLL control is inevitable in GFL inverters, preventing an immediate response to disturbances. Under extremely high instantaneous VRE shares, system stability may be seriously threatened. Grid-forming (GFM) inverters operate as voltage sources, imposing a constant voltage phasor without the need for a PLL <cit.>. Therefore, an essentially instantaneous response can be obtained. Using some short-term ESS (batteries or supercapacitors) or sufficient headroom from the input energy source (wind or solar), and a modified control strategy, such as the so-called virtual synchronous machine (VSM), the dynamic behavior of a synchronous machine under disturbances can be emulated <cit.>. Several control techniques are summarized in <cit.> and <cit.>. In addition, GFM inverters allow PV and wind plants to energize their own sites. A coordinated process can create smaller and distributed power islands in the distribution system that will further energize the transmission lines <cit.>.

Energy Storage Systems

An ESS can shift energy and power in time and is crucial for decarbonizing the power system, since it acts as an energy buffer smoothing variable generation. ESS can provide short- and long-term capacity, enhancing system flexibility to alleviate the peak load, deferring grid investments, and providing frequency and voltage control.
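Before turning to storage options, the droop response described above for a deloaded wind or PV plant can be sketched as follows; the set-point is adjusted in proportion to the frequency deviation within the headroom created by deloading, and all parameters are illustrative.

```python
# Governor-like droop response emulated by a deloaded inverter-based plant.
# Droop, dead band, and headroom values are illustrative assumptions.
def droop_power(f_hz, p_sched_mw, headroom_mw, droop=0.04,
                f_nom=50.0, deadband_hz=0.015):
    """Return the active power command of a deloaded IBR.

    droop: per-unit frequency change that maps to a 100% change of the headroom
    headroom_mw: reserve margin created by operating below the maximum power point
    """
    df = f_hz - f_nom
    if abs(df) <= deadband_hz:            # inside the dead band: no action
        return p_sched_mw
    dp = -(df / f_nom) / droop * headroom_mw          # proportional response
    return p_sched_mw + max(-headroom_mw, min(headroom_mw, dp))

for f in (50.00, 49.95, 49.80, 50.10):
    print(f"{f:5.2f} Hz -> {droop_power(f, p_sched_mw=80.0, headroom_mw=10.0):6.2f} MW")
```

Basing the proportional term on rated power rather than headroom, or adding a term proportional to the frequency derivative, would move this sketch toward the virtual-inertia behavior targeted by GFM control.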
Available options include mature technologies, such as pumped hydropower storage (PHS), more recent developments, such as battery energy storage systems (BESS) and flywheels, and newer solutions, such as supercapacitors and hydrogen storage. A hybrid system combining two or more technologies is also possible <cit.>.

PHS is a versatile technology that provides frequency-related ASs in generating mode. In pumping mode, fixed-speed PHS units can operate their SGs as synchronous condensers to increase the inertia contributed on the load side. Also, variable-speed PHS coupled through an inverter interface can contribute a fast response against frequency deviations after a disturbance <cit.>. The widespread use of batteries in EVs is making battery storage increasingly economical for power system applications. BESS can respond quickly to changes in system frequency due to the absence of moving components. In <cit.>, the authors formulate an optimal state-of-charge strategy for PFC provision. Similarly, SFC using BESS is analyzed in <cit.> to evaluate the impact of continuous cycling on battery aging. BESS are also helpful for enhancing short-term flexibility due to their high ramp capability <cit.> and can contribute to system restoration if suitably located <cit.>. Flywheels are well suited to follow small frequency imbalances <cit.>, and the short-term storage of supercapacitors can assist in inertia emulation control <cit.>. Additionally, long-term hydrogen storage can potentially mitigate seasonal fluctuations, reducing the curtailment of variable generation, as presented in <cit.>.

High Voltage Direct Current Grids

HVDC transmission lines provide bulk power transfer over long distances, interconnecting systems asynchronously and helping to integrate renewable energy by enabling energy balancing over a wider area. Line-commutated converter (LCC) HVDC links have a high power transfer rating, but their inability to provide an AC voltage from the DC side precludes black start operation. Also, LCC HVDC cannot support voltage control. Voltage source converter (VSC) HVDC links have black start capability and can contribute to voltage control due to the independent controllability of active and reactive power <cit.>. Additionally, both LCC and VSC HVDC technologies can contribute to frequency response. Considering LCC HVDC links, the authors in <cit.> propose integrating the converters with feedback loops to emulate inertia and provide PFC. Regarding VSC HVDC technology, <cit.> proposes a scheme capable of autonomously adjusting the emulated inertia constant according to the grid frequency deviations. Detailed frequency control strategies, focusing on VSC HVDC transmission lines connected to wind farms, are presented in <cit.>.

Information and Communication Technology

Appropriate ICT infrastructure is essential to monitor, control, and manage resources at different voltage levels. In distribution systems, households can interconnect sensors and controllers through an internet of things (IoT) infrastructure to facilitate the automation of residential appliances and optimize their electricity consumption <cit.>. Digital meters can enable two-way communication between utilities and consumers, improving the visibility of rooftop PV systems, BESS, and EVs across the grid and allowing consumers to trade their flexibility. System observability can be improved using wide-area measurements from phasor measurement units (PMUs), enhancing system security.
Time-synchronized phasor (voltage and current) measurements enable real-time frequency and voltage monitoring, as well as fault and oscillation detection, allowing SOs to adopt corrective actions faster <cit.>. Also, inertia estimation is essential under high shares of VRE. Model-based methods derived from the swing equation and real-time estimation from synchrophasor data are found in the literature <cit.>.

§.§.§ Mathematical Models for Power System Operation and Stability

More accurate wind and solar power forecasts, and improved representation of modern loads arising from dispersed resources in low-voltage distribution systems, illustrate how mathematical models can be applied in power system planning and operation to enhance AS provision. In this section, we show why some models are becoming outdated, considering the transition to a 100% non-fossil future, and the benefits of improving them.

Wind and Solar Power Forecasting Models

Forecasting models have been developed to support improved decisions by SOs and market participants, given the stochastic nature of wind and solar. Methodologies can be divided into deterministic and probabilistic methods. Deterministic methods provide a single series of expected values and are widely used by SOs to guarantee sufficient reserve. In electricity markets, the deterministic forecast guides VRE producers in finding suitable trading strategies. Significant advantages are simplicity, compatibility with existing operator tools, straightforward evaluation, and fast use and reproduction. Deterministic methods are classified as physical or statistical models. Numerical weather prediction is a physical model based on meteorological data, suitable for long-term horizons (a day to a week ahead). Time series-based and artificial intelligence (AI) models (or a hybrid approach) are statistical models adequate for short-term (minutes to hours ahead) forecasts <cit.>. Probabilistic methods provide a confidence interval around the expected values, resulting in an uncertainty estimate. A range of possible outcomes is an improved solution compared to the deterministic point forecast. The analysis of multiple scenarios potentially optimizes the SO reserve procurement and the decision-making process of VRE suppliers <cit.>. Probabilistic methods can be parametric or non-parametric. The former are based on a known probability density function. In contrast, no assumptions are made in non-parametric models, and a tailor-made probability density function is empirically determined <cit.>.

Load Modeling

Traditional static and passive load models, such as the constant impedance, current, and power (ZIP) model and the exponential model, have been extensively adopted in power system analysis <cit.>. However, changes in the load profile, such as the increasing participation of drive-controlled induction motors (for instance, in air conditioning systems) and the proliferation of DERs, make traditional load models less suitable for representing load behavior. Dynamic load models, such as the induction motor (IM) model, incorporate differential equations derived from the load equivalent circuit to describe the active and reactive power response in time, and are particularly helpful for angular and transient stability studies <cit.>. Additionally, static and dynamic load model components can be aggregated to create a composite load model, such as the combined ZIP and IM model, which tends to be more accurate than the individual models <cit.>.
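For concreteness, the sketch below evaluates the static ZIP and exponential models named above at a depressed voltage; the coefficients are illustrative and would, in practice, come from the identification step discussed next.

```python
# Static load models: ZIP and exponential (illustrative coefficients).
def zip_load(v_pu, p0_mw, a_z, a_i, a_p):
    """Active power of a ZIP load at voltage v_pu (coefficients should sum to 1)."""
    return p0_mw * (a_z * v_pu**2 + a_i * v_pu + a_p)

def exp_load(v_pu, p0_mw, exponent):
    """Active power of an exponential load model."""
    return p0_mw * v_pu**exponent

v_sag = 0.95     # per-unit voltage after a disturbance
p0 = 100.0       # MW drawn at nominal voltage
print(f"ZIP model        : {zip_load(v_sag, p0, a_z=0.4, a_i=0.3, a_p=0.3):.1f} MW")
print(f"exponential model: {exp_load(v_sag, p0, exponent=1.2):.1f} MW")
```

Reactive power is treated with analogous expressions, and a composite model simply adds an induction-motor block, with its own differential equations, in parallel with the static part.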
After choosing a load model structure (static, dynamic, or composite), a key step in load modeling is the identification of load parameters to validate the model. Existing methodologies are divided into component- and measurement-based approaches. The former relies on information about electricity consumption, that is, the load composition, and is available in commercial tools. Customers with similar load composition are aggregated in classes of loads, typically residential, commercial, industrial, agricultural, and others. The latter aims to fit a load model that emulates the behavior of the aggregate load model recorded from disturbance measurements of PMUs and digital meters <cit.>. The component-based approach does not require field measurements, saving costs on installing measurement devices. However, gathering detailed information about load composition is difficult. Load profile may change if consumers engage in demand response programs and start to follow instructed price signals. Also, the load composition varies seasonally and in the short-term (daily and weekly). In contrast, measurements are valuable for SOs because they indicate the operating conditions. The main advantage of the measurement-based approach is retrieving the power system dynamic response. Nonetheless, this approach cannot provide generic load models since data is collected at specific locations and depends on the occurrence of disturbances. Combining model- and data-driven approaches in a hybrid solution is a potential alternative <cit.>. Emerging modern loads at low voltage levels, such as EVs, solar rooftop PV systems with or without batteries, and heating and cooling electrical loads, are typically hidden behind a power electronic interface, resulting in a non-linear relation between voltage and current. The complexity of the current distribution system makes the use of detailed mathematical models for load modeling computationally impractical. A dynamic equivalent model of active resources can be obtained using disturbance measurements and some system identification method, such as artificial neural networks. By aggregating the behavior of a large number of resources with different technologies, SOs simplify the dynamic analysis of active distribution networks <cit.>. If SOs can adequately quantify the new system needs arising from the distribution network, new grid code requirements and the need for new ASs may be well addressed.§.§.§ Demand Response Demand-side participation is a powerful strategy to engage large and small players, such as industrial, commercial, and residential loads, in AS provision. Next, we discuss three potential strategies to increase power system flexibility and competition in the wholesale electricity market. Sector CouplingCoupling power systems with heat, gas, hydrogen, and transportation sectors allows SOs to procure additional flexible resources. For instance, power-to-heat includes thermal loads (building heating and cooling, water heating, refrigeration, and freezing), water pumping, air compression, and loads with associated storage processes <cit.>. These resources can assist in frequency control, as shown in <cit.>. The authors simulate a real district-heating plant to regulate frequency imbalances, stressing the importance of multi-energy systems. Hydrogen is an energy vector particularly helpful in storing and transporting renewable energy produced in power systems. 
The production of green hydrogen by electrolysis using excess power from VRE or hydropower plants is a power-to-hydrogen application that can enhance power system flexibility and contribute to the decarbonization process. Also, electrolyzers can act as a controllable load modulating the hydrogen production and quickly responding to frequency deviations, as presented in <cit.>, benefiting from demand response price signals. The vehicle-to-grid strategy involves a bidirectional control that allows electricity stored in vehicle batteries to be pushed back into the grid. These mobile batteries could allow EVs to respond rapidly to frequency deviations. In <cit.>, a single EV is used as a proof of concept to track frequency and store energy. Capacity payments should incentivize EV owners to maintain their vehicles parked and recover additional battery life costs due to increased cycling and round-trip energy losses. DER AggregationResources in the distribution system are dispersed and have smaller capacity compared to typical transmission-connected resources, but are potential providers of ASs. DERs can be aggregated to form a virtual power plant (VPP) or an energy community (EC). In contrast to a VPP, an EC primarily focuses on social, economic, and environmental benefits, rather than financial profits <cit.>. An aggregator is a third-party company that coordinates several DERs and acts as an intermediary between DER owners, the SO, and the DSO. The aggregation of DERs is changing the traditional centralized generation paradigm, empowering consumers to become producers (prosumers), and service suppliers in a decentralized fashion. Centrally coordinating individual DERs through bidirectional communication is impractical for SOs. Instead, an aggregator can interface thousands of DERs and receive operational signals with instructions from DSOs and SOs <cit.>, as shown in Fig. <ref>. Aggregators could be allowed to provide services locally for the DSO or directly for the SO. To avoid the risk of double counting resources or for SO and DSO to be operating at cross-purposes, close coordination and a clear definition of responsibilities between SO and DSO are fundamental to ensure that transmission and distribution network constraints will be respected while aggregators can compete with large players <cit.>. Considering a VRE-based VPP, the authors in <cit.> propose a methodology to improve the forecasting performance of aggregate wind, solar, and hydroelectric power on extreme quantiles to reduce the risk of not providing ASs. The power production obtained is used to offer reserve capacity to follow downward movements of the system frequency. In <cit.>, a hierarchical energy management system is proposed to optimize the operation of an aggregated BESS supplying electricity and frequency regulation. The authors consider the performance of the battery to follow the regulation signal and the coordination of two different battery types. EV aggregation for frequency regulation provision is discussed in <cit.>. The comfort level of EV owners is reduced if frequency regulation capability increases, irrespective of the charging strategy adopted, but discharging EV batteries during load valley optimizes the frequency regulation capability throughout the day. Two-stage stochastic programming is used to model an energy community considering the active and reactive power provision from DERs to DSOs in <cit.>. A collaborating scheme rewards reactive power supply and reduces the total community cost. 
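As a hedged illustration of the aggregation step described above, the sketch below pools the headroom of a few hypothetical DERs into a single upward-flexibility offer and caps it at a hypothetical export limit coordinated with the DSO; the names, limits, and sign convention are assumptions for the example only.

```python
# Aggregator pooling DER flexibility into one AS offer (all values hypothetical).
# Sign convention: positive power = consumption, negative power = injection.
ders = [
    {"name": "ev_fleet",   "p_now_kw": 250,  "p_min_kw": 0,    "p_max_kw": 400},
    {"name": "home_bess",  "p_now_kw": -50,  "p_min_kw": -120, "p_max_kw": 120},
    {"name": "rooftop_pv", "p_now_kw": -300, "p_min_kw": -350, "p_max_kw": 0},
]

# Upward flexibility = how far each unit can reduce consumption or raise injection.
up_flex_kw = sum(d["p_now_kw"] - d["p_min_kw"] for d in ders)

dso_limit_kw = 500   # assumed feeder export limit agreed with the DSO
offer_kw = min(up_flex_kw, dso_limit_kw)
print(f"aggregated upward flexibility: {up_flex_kw} kW, offered to the SO: {offer_kw} kW")
```

In practice the aggregator would also respect per-unit energy limits (EV departure times, battery state of charge) and the telemetry requirements imposed by the SO and DSO.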
Data CentersThe growing internet use to enable numerous applications of digital technologies relies on electricity-intensive data centers. In 2022, global electricity consumption from data centers was estimated at around 1-1.3% of annual electricity demand[Data centers for cryptocurrency mining were not included in this estimation. They correspond to an additional 0.4% of global annual electricity demand <cit.>.] <cit.>. Since digital transformation tends to increase, it is valuable to identify the potential interplay of data centers in the power system. Firstly, data centers are an important flexible load. To maintain continuous operation during a power outage, uninterruptible power supply (UPS) systems, typically formed by redundant BESS, are installed to provide the necessary backup. Nevertheless, the redundancy of the backup system oversizes the capacity of the batteries, which are rarely used due to a stable power supply. Thus, there is an opportunity for a revenue stream, if data centers partially operate their workload using their flexibility from idle energy storage capacity, and financial incentives are provided <cit.>. Delay-tolerant workloads can be shifted in time, and workloads also can be routed to other data centers dispersed geographically <cit.>. Such temporal and spatial load management allows data centers to procure electricity when and where it is greener and cheaper, contributing to power system balancing and decarbonization.Secondly, several pilot projects have demonstrated that UPS can provide frequency response. Fast-acting reserve provision from a UPS has been tested in Ireland and the Nordic power system. Small data centers, with limited UPS storage capacity, would face barriers to competing as an AS provider. Therefore, the trials also considered the participation of the data center in a VPP. In US, a UPS has been considered to demonstrate frequency regulation following PJM signals <cit.>. Thirdly, electricity consumed by data centers generates heat as a byproduct, which can be funneled into a district heating network to be reused for residential and commercial buildings. Existing initiatives are found in Ireland, and the Nordic power system <cit.>. However, the low temperature of the waste heat from data centers requires additional heat pumps to raise the temperature, which consumes electricity and increases total costs <cit.>.§.§ Emerging Ancillary Services To handle the impacts of reduced inertia levels, some immediate solutions include managing the potential of conventional technologies. Traditionally, synchronous inertia and PFC are byproducts of SG operation. In power systems with high shares of VRE, the operation of some conventional SGs can be uneconomical. However, these units are critical to guarantee a minimum inertia level and frequency stability. Synchronous inertial response (SIR) should be explicitly defined as a new AS to encourage synchronous resources to reduce their minimum generation level and allow additional units to remain online, contributing to their rotational energy. SIR was introduced in EirGrid/SONI (Ireland/Northern Ireland) <cit.> and was discussed in the ERCOT AS market redesign process <cit.>. Primary frequency response (PFR) is the rapid response to changes in frequency. SGs provide PFR capability through the local and automatic electromechanical control of the turbine governor. Alternatively, PFR can be delivered using controllable loads with a governor-like response. 
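Returning to the inertia floor that motivates an SIR product, a back-of-the-envelope sketch (all figures illustrative) inverts the swing-equation relation used earlier to estimate the minimum synchronous kinetic energy compatible with an assumed RoCoF limit, and the gap that such a service would have to procure.

```python
# Minimum synchronous kinetic energy for an assumed RoCoF limit (illustrative).
f0 = 50.0              # Hz
rocof_limit = 0.5      # Hz/s, assumed operational/protection limit
largest_loss_mw = 700.0

# From RoCoF = dP * f0 / (2 * E_k):
e_min = largest_loss_mw * f0 / (2.0 * rocof_limit)    # MW*s
print(f"required stored energy: {e_min:.0f} MW*s")

# Kinetic energy already provided by the economically dispatched units (assumed)
e_from_dispatch = 28_000.0    # MW*s
gap = max(0.0, e_min - e_from_dispatch)
print(f"inertia gap an SIR product would have to procure: {gap:.0f} MW*s")
```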
PFR is a well-developed AS in Europe and is being introduced in ERCOT as an explicit AS, moving away from the obligatory requirement traditionally adopted in US. As shown in Section <ref>, IBRs are evolving and contributing to system security. Creating new ASs to include technology-agnostic solutions and removing market design barriers of existing ASs, which favor SG-based solutions, is essential to incentivize capable resources toward a competitive electricity market. Non-synchronous resources interfaced by GFM inverters with a modified control strategy and some energy buffer to act like the rotational energy of synchronous resources can emulate the inertial response of synchronous machines, virtually providing inertia. Currently, no SO explicitly defines a virtual inertial response (VIR) service to fit the capabilities enabled by GFM inverters. Non-synchronous resources can respond to very fast changes in frequency independently of the inverter type. In this sense, fast frequency response (FFR) is another newly defined AS introduced by several SOs, comprising a subset of PFR, to support a faster acting capability than the traditional governor response. FFR capability can be obtained by shedding large industrial interruptible loads triggered by under-frequency relays or using the active power capability from VRE, HVDC links, or BESS.Wind and solar generation naturally introduce variability and uncertainty due to weather dependence. Deviations in the VRE generation forecast and net load can occur, resulting in significant power imbalances and, consequently, increased ramps. Thus, existing resources should be encouraged to enhance their flexibility, and new units should have financial incentives to promote a flexible operation. Flexible ramping capability (FRC) comprises the ramping capability from flexible resources, online or offline, capable of quickly ramping, following future movements of the net load. This newly distinguished AS is procured in CAISO <cit.>, MISO (Midwest US) <cit.>, SPP (central Southern US) <cit.>, and EirGrid/SONI <cit.>.The displacement of SGs can reduce the dynamic reactive capability. In EirGrid/SONI, the dynamic reactive response (DRR) incentivizes resources to provide a fast reactive response after an event. The fast post-fault active power recovery (FPFAPR) is another newly defined AS that rewards wind generators capable of quickly recovering their active power output after a large voltage dip that impacts frequency stability <cit.>. §.§ Taxonomy of Ancillary Services The absence of a standard nomenclature and definition between SOs can lead to misunderstandings by researchers and industrial practitioners, used to typical textbook or regional SO nomenclatures. We propose a comprehensive definition for each AS, highlighting potential providers, as shown in Table <ref>. Figure <ref> links the conventional and recently defined ASs according to their role in system security. The proposed taxonomy is not exhaustive, noting that other potential new ASs are proposed in the literature <cit.>.Depending on the system, some services are implemented as mandatory capabilities within the grid codes, rather than explicit ASs procured and remunerated by SOs. Some examples are capabilities for power quality improvement (power smoothing, harmonic mitigation, and power factor control), congestion management, and system restoration (islanded operation). 
By introducing new requirements based on new system needs, SOs encourage existing resources to offer/enhance their capabilities and signal new participants to invest in enhanced technology. These mandatory and non-remunerated requirements draw a baseline for the future design of ASs, which are sometimes quite complex, both technically and economically, to implement. The key difference between SIR and VIR is that the former is a natural and uncontrolled response, while the latter is an emulated response. Through adequate control strategies, VIR could be deliberately designed to deviate from the temporal shape of SIR, by responding based on the RoCoF, if the expected response proved to be more effective than just emulating SIR. PFR and FFR are both frequency-deviation-based reserves; however, the former relies on the slower governor response, whereas the latter comprises a faster response capability. In contrast to FFR, VIR is a RoCoF-based reserve <cit.>. Notice that synthetic inertia is generally used to describe VIR <cit.>. However, the term synthetic inertia was coined under the deployment of GFL inverters, which have a natural response delay, and thus, FFR should be the preferred terminology. FRC differs from frequency regulation because the former reserves capacity to follow net load movements in future time dispatch intervals (minutes), while the latter reacts to continuous imbalances (seconds) in net load <cit.>. Since FRC is provided through the system dispatch, this AS differs from spinning reserve, which is reserved for support disturbances <cit.>.§.§ Design of AS markets A few guiding practices should be followed for competitive procurement and efficient allocation of resources when designing AS markets, but no standard prescription conforms to the unique characteristics and historical practices of each power system. Every market defines its own set of ASs according to its infrastructural changes and operational requirements, leading to a continuous and complex redesign <cit.>. A market redesign for a 100% non-fossil future is most likely to be an improved market considering some fundamental existing rules, still valid under high shares of VRE, rather than a new market design built from scratch. An AS should follow a specific objective that reflects or anticipates future system needs. The AS product must be strictly defined, comprising technical and administrative requirements that eligible providers must follow. The procurement method, pricing mechanism, remuneration structure, and cost allocation scheme are key design variables that direct an AS market framework. Figure <ref> relates various procurement methods and pricing mechanisms.ASs are acquired by SOs and market participants (utilities, generators, and consumers) with obligations in system security. Mandatory provision (remunerated or not) resembles a vertically integrated utility approach and aims to guarantee that certain capabilities must be provided. The market power of dominant agents is reduced, but additional costs to suppliers may cause unnecessary investments and the overproduction of resources. Self-provision allows market participants to use their portfolio to meet all, or a portion, of their AS obligations. A regulated price (RP) or market-based mechanism compensates the resources. To ensure a market framework, an AS must be competitively provided by a number of cost-efficient suppliers. Also, sufficient demand need for the AS must be ensured to justify the fixed operating costs <cit.>. 
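To illustrate the pricing-mechanism choice named above, the sketch below clears a single reserve requirement against a small hypothetical offer stack and compares marginal-price and pay-as-bid settlement.

```python
# Clearing one reserve requirement: marginal pricing vs pay-as-bid (hypothetical offers).
offers = [("unit_a", 40, 2.0), ("unit_b", 30, 3.5), ("unit_c", 50, 5.0)]  # (name, MW, $/MW)
requirement_mw = 80

offers.sort(key=lambda o: o[2])                  # merit order by offer price
cleared, remaining = [], requirement_mw
for name, mw, price in offers:
    take = min(mw, remaining)
    if take > 0:
        cleared.append((name, take, price))
        remaining -= take

clearing_price = max(price for _, _, price in cleared)
mp_cost  = clearing_price * sum(mw for _, mw, _ in cleared)    # uniform marginal price
pab_cost = sum(mw * price for _, mw, price in cleared)         # pay-as-bid

print(f"clearing price       : {clearing_price:.1f} $/MW")
print(f"marginal-pricing cost: {mp_cost:.0f} $   pay-as-bid cost: {pab_cost:.0f} $")
```

The apparent saving of pay-as-bid is partly illusory, since rational providers shade their offers toward the expected clearing price rather than their costs.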
Long-term bilateral contracts are helpful to hedge against the risk of insufficient reserve capacity or higher prices. Alternatively, monthly or weekly auctions procure reserves ahead of the spot market, paying suppliers according to the offers made. Public tendering processes are suitable for non-standard products, such as trials of emerging ASs. The day-ahead market (DAM) and real-time market (RTM) are short-term platforms for AS acquisition that can be cleared based on pay-as-bid pricing (PBP) or marginal pricing (MP) <cit.>. In the spot market, SOs can sequentially optimize energy and frequency-related ASs (FR, SR, NSR, and RR) <cit.>. As energy and reserves are mutually exclusive, the revenue foregone by reserving capacity in AS markets, rather than selling electricity in the energy market, represents the lost opportunity cost (LOC) associated with reserve provision <cit.>. Scheduling and procuring energy in advance can result in inefficient allocation decisions and distorted price signals <cit.>. Alternatively, co-optimizing energy and multiple reserves simultaneously optimizes the market products and thus adequately allocates and prices them, including the reimbursement of opportunity costs <cit.>. Reserves follow a hierarchy in response time, where better-quality reserves (faster response) can replace lower-quality (slower response) ones. Several SOs enforce this downward substitutability to provide appropriate price signals across the reserve categories. A price hierarchy ensures that prices decrease from higher- to lower-quality reserves, that is, from FR through the contingency reserves (SR > NSR > RR) <cit.>. The higher LOC experienced by FR is due to frequent power output changes over short time intervals, which reduce the revenue in the energy market and increase maintenance costs <cit.>.

The remuneration structure reflects the costs incurred for AS provision. Since energy and capacity are different products, resources should submit separate offer prices for the creation of two distinct merit orders. When energy and reserves are co-optimized, a resource selected to dispatch in real time should be paid for the electricity delivered. Also, if a resource is called to provide reserves, capacity reservation payments, which internalize the incurred opportunity costs, should be paid <cit.>. However, if energy and reserves are sequentially optimized, SOs could adopt availability payments to compensate units that reserve capacity within a predefined window for a later call. Furthermore, utilization payments reimburse units for the electricity delivered during reserve provision. Payment for performance is applied to specific ASs, such as frequency regulation, to encourage an improved response. In general, the allocation of costs for the procured ASs relies on a tariff, which is socialized across customers. However, a more economically efficient scheme should consider the cost causation principle, that is, those market participants who cause costs to the system should pay those costs <cit.>.

A market-based framework is unsuitable for some ASs because of their specificities. The general practice of SOs shows that non-frequency-related ASs involve certain technical and economic barriers that must be overcome before adopting a competitive procurement mechanism <cit.>. Voltage control is highly sensitive to grid location, which facilitates the exercise of market power by some suppliers.
Moreover, a full AC power flow model must be performed to evaluate voltage support needs and price the service. However, such a model is non-linear and non-convex, which increases the computational complexity and makes its solution challenging for an RTM <cit.>. Reactive power procurement is generally compulsory or agreed upon through bilateral contracts, and resources are compensated via cost-based payments or a provision tariff <cit.>. Black start resources are typically procured through long-term bilateral contracts <cit.>. The main hurdles in setting up a market for black start capability are the technical and locational restrictions of the providers. Not all generators have the desired technical capabilities to support power system restoration. If a generator is capable, it should be strategically located to restore the main feeders according to the restoration plan <cit.>.§ US WHOLESALE ELECTRICITY MARKET In the United States, SOs do not own transmission assets <cit.> and are called independent system operators (ISOs), while, historically, state-specific or, more broadly, regional transmission organizations (RTO), refer to greater footprints. ISOs and RTOs are very similar concepts. Currently, seven ISOs/RTOs oversee two-thirds of the US electricity load <cit.>: California ISO (CAISO), Electric Reliability Council of Texas (ERCOT), ISO New England (ISO-NE), Midcontinent ISO (MISO), New York ISO (NYISO), Pennsylvania-New Jersey-Maryland Interconnection (PJM), Southwest Power Pool (SPP). All ISOs operate under the jurisdiction of the Federal Energy Regulatory Commission (FERC) except ERCOT, due to historical reasons. Concepts underlying electricity markets have evolved from experiences on northeast regional power pools[Prior to restructuring, ISO-NE, NYISO, and PJM, were regional (tight) power pools, while CAISO and ERCOT were dominated by a few investor-owned utilities.]. However, utilities in the southeast, southwest, and northwest regions of the country remain vertically integrated. The electricity generation mix [%(GWh)] of the seven US ISOs is shown in Fig. <ref>.All ISOs present a fossil fuel-based mix, mostly served by natural gas power plants, complemented mainly by nuclear, hydroelectric, and renewable energy power plants (wind and solar). Among the seven US ISOs, SPP has the most renewable mix, i.e., around 37.5% of the total generation comes from wind power plants <cit.>. Similar to SPP, ERCOT presents high shares of wind generation, resulting in an annual average generation of 25%. Additionally, solar power accounts for 6% of the total supply <cit.>. Solar power is the dominant renewable in CAISO, reaching 16% of the total generation, followed by 9% for wind power generation. Imported electricity represents 17% (others) of total generation in CAISO <cit.>. MISO has an intermediate level of renewable generation (17%), mostly supplied by wind power <cit.>. Renewables in ISO-NE represent 11% of total generation while the contribution of net import electricity (others) is 14% <cit.>. NYISO (6.0%) and PJM (7.0%) present similar percent levels of renewable generation <cit.>. The aggregate contribution of all ISOs results in an average of 20.4% renewable generation. Each ISO coordinates system operation and clears a centralized DAM in its administrative region. The scheduling and dispatch process is shown in Fig. 
<ref>. On a long-term basis, forward energy and capacity markets ensure resource adequacy, while financial transmission rights (FTR) allow market participants to hedge congestion charges associated with long-term contracts. In the DAM, producers submit detailed physical parameters and offers for energy and ASs. A three-part offer is the prevailing format in the energy market, comprising incremental energy offers (MWh blocks priced in $/MWh), a start-up fee ($/start), and a no-load fee ($/h) <cit.>. The ISO gathers the demand bids and performs a centralized unit commitment, jointly optimizing energy and reserves[Unlike the other US ISOs, ISO-NE has no day-ahead reserve product. A forward reserve market runs before the day-ahead market to provide reserve capacity for real-time physical delivery <cit.>.], considering transmission network constraints and security requirements <cit.>. The security-constrained unit commitment (SCUC) decides which resources should be available and what their output should be. Moreover, the security-constrained economic dispatch (SCED) determines the locational marginal prices (LMPs) <cit.>. Additionally, the ISO runs a reliability unit commitment (RUC) to ensure that sufficient energy and AS capacity are committed to serve the forecasted net load <cit.>. After the DAM has closed, participants can inform the SO of any change in their operating plans, but cannot update their offers <cit.>. In real time, a look-ahead SCUC or SCED (or both) assists the ISO's dispatch decisions by evaluating future system conditions. In addition, a SCUC/SCED, typically at 5-minute resolution, co-optimizes energy and reserves (except in ERCOT[In ERCOT, the reserve levels obtained in the DAM are held in the RTM.]) to optimally allocate the resources in real time <cit.>. The AS clearing prices are retrieved from the dual variables of each service constraint and reflect the incurred LOC.

§.§ AS Markets in ISOs/RTOs

In the US, ISOs are the authority responsible for balancing the power system and acquiring ASs. Table <ref> details the ASs procured in each ISO following the classification introduced in Fig. <ref>, also highlighting the technical and market features of each AS. Historically, the ERCOT power system evolved independently of, and asynchronously from, the western and eastern US interconnections. Reduced levels of inertia and weak interconnection motivated a synchronous inertial response product proposal. Among the US ISOs, only ERCOT has identified a potential need for financial incentives to guarantee minimum inertia levels. However, creating a new service has not achieved consensus among stakeholders <cit.>. Additionally, no ISO has specific guidance for GFM requirements or has defined an explicit VIR service. Primary frequency response has been considered a byproduct of SG operation in all ISOs for decades. However, after the NERC BAL-003-1.1 standard <cit.> and FERC Order 842 <cit.>, primary frequency response evolved into an obligatory capability. The standard mandates that balancing authorities demonstrate sufficient primary frequency response capability. The order imposes primary frequency response capability requirements on all newly interconnecting large and small units, synchronous (except nuclear power plants) or non-synchronous (including ESS). In ERCOT, responsive reserves are being implemented to aggregate slower and faster reserves related to primary frequency response, complying with the NERC standard BAL-001-TRE-1 <cit.>.
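To make the co-optimization and opportunity-cost pricing described above concrete, the sketch below clears a two-unit example (hypothetical data) and recovers the energy and reserve prices by re-solving with each requirement increased by 1 MW, which mirrors the dual-variable interpretation used in SCED.

```python
# Co-optimizing energy and one reserve product for two units (illustrative data).
# Prices are approximated by perturbing the requirements by 1 MW, mirroring the
# shadow-price (dual variable) interpretation used by ISOs.
from scipy.optimize import linprog

def clearing_cost(demand_mw, reserve_mw):
    # x = [g1, g2, r1, r2]; unit 1 is cheap, unit 2 expensive; reserve offered at $0
    c = [20.0, 50.0, 0.0, 0.0]
    a_ub = [[1, 0, 1, 0],      # g1 + r1 <= 100 MW (capacity, unit 1)
            [0, 1, 0, 1],      # g2 + r2 <= 100 MW (capacity, unit 2)
            [0, 0, -1, -1]]    # r1 + r2 >= reserve requirement
    b_ub = [100.0, 100.0, -reserve_mw]
    a_eq = [[1, 1, 0, 0]]      # energy balance
    bounds = [(0, None), (0, None), (0, 40), (0, 5)]   # reserve capped by ramping ability
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=[demand_mw],
                  bounds=bounds, method="highs")
    return res.fun

base = clearing_cost(120, 30)
energy_price  = clearing_cost(121, 30) - base   # $/MWh
reserve_price = clearing_cost(120, 31) - base   # $/MW
print(f"energy price: {energy_price:.0f} $/MWh, reserve price: {reserve_price:.0f} $/MW")
```

In this example the reserve price equals the 30 $/MW lost opportunity cost of the cheap unit that is backed down to hold reserve, which is exactly what the dual variable of the reserve constraint would report.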
Besides the autonomous governor response from SG (slower reserves), ERCOT procures fast frequency response capability from interruptible loads with under-frequency relays, and is introducing the procurement from BESS <cit.>. The latter should respond automatically within 0.25 s at 59.85 Hz threshold while maintaining a full response for at least 15 minutes <cit.>. Only ERCOT in the US is introducing a payment for SG governor response. The ISO intends to compensate for the availability of delivery reserve capacity ($/MW) through a market framework <cit.>.Frequency regulation is designed with separate products for upward (regulation up) and downward movements (regulation down) in CAISO, ERCOT, and SPP. In PJM, two products have been created to separate traditional units with slow ramp rates (controlled by the RegA signal) from new fast ramp rate resources, such as BESS (controlled by the RegD signal) <cit.>. Notice that frequency-based ASs are procured in the DAM in ERCOT and are physically binding in real time. If necessary, ERCOT may procure additional reserves in real time through the supplemental ancillary services market <cit.>. Apart from compensation for available capacity through marginal pricing, frequency regulation providers are also remunerated based on their ability to follow the AGC signal, complying with FERC Order 755 <cit.>. The exception is ERCOT, which does not monitor the accuracy of frequency regulation providers <cit.>. Prices for performance are set as the mileage offer of the marginal capacity provider <cit.>. Fast-responding regulation is a sub-product of the regulation service in ERCOT. This new AS was launched under a pilot project and is a tailor-made AS to reward the benefits of the fast-ramping capability of ESS. Resources must provide regulation capacity within one second after the ERCOT signal or after independent identification of a trigger frequency <cit.>. All ISOs procure synchronized resources that are fully available within 10 minutes to supply spinning reserves. Additionally, non-synchronized resources capable of responding within 10 minutes are eligible to provide non-spinning reserves. ISOs remunerate the availability of capacity based on marginal prices, except for the non-synchronized reserve in PJM, which is cost-based. Some ISOs acquire an additional 30-minute spinning and non-spinning reserve to serve as replacement reserves. In NYISO, the sum of the total 10-minute reserve and the total 30-minute reserve must be greater than or equal to twice the largest single contingency <cit.>. In PJM, the day-ahead scheduling reserve market procures 30-minute reserves (so-called secondary reserves). PJM determines the requirements for this product based on load forecasting for the following operational day, but it does not impose performance obligations in real time <cit.>. Additionally, to incentivize the response of flexible resources during a reserve shortage, ISOs in US are gradually introducing the operating reserve demand curve (ORDC) approach, which ensures enhanced price signals when available capacity is scarce. Flexible ramping capability is procured in CAISO, MISO, and SPP when the net load ramping requirements exceed the ability of dispatched units to follow net load, thus, incentivizing flexible resources to reserve capacity <cit.>. In CAISO, high shares of solar power cause an upward ramp in the morning and a downward ramp in the evening (so-called duck curve). 
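A hedged sketch of how such a ramping requirement can be sized: the expected change of the net load over the next interval plus an uncertainty margin, computed separately for the upward and downward directions. The numbers and the simple percentile rule are illustrative; the actual CAISO demand-curve procedure is more involved.

```python
# Sizing upward/downward flexible ramping requirements (illustrative only).
import numpy as np

# forecast net load (MW) for the current and the next 15-minute interval
net_load_now, net_load_next = 24_000.0, 25_100.0

# historical 15-minute net-load forecast errors (MW); synthetic placeholder data
errors = np.random.default_rng(0).normal(0.0, 300.0, size=2_000)
up_margin   = np.quantile(errors, 0.975)
down_margin = -np.quantile(errors, 0.025)

expected_ramp = net_load_next - net_load_now
frc_up   = max(0.0, expected_ramp) + up_margin
frc_down = max(0.0, -expected_ramp) + down_margin
print(f"upward ramping requirement  : {frc_up:.0f} MW")
print(f"downward ramping requirement: {frc_down:.0f} MW")
```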
The flexible ramping product in CAISO emerges after including a constraint in the real-time dispatch <cit.>. Currently, CAISO procures upward and downward capacity in the fifteen-minute market (FMM) and RTM markets to provide the ramping capability for the next 15-minute interval, consisting of three consecutive 5-minute intervals. To determine the procurement and shadow prices, CAISO uses a demand curve based on the net demand forecast uncertainty for the next time interval, simultaneously extracting the VRE and demand forecast errors <cit.>. In MISO, wind power is the dominant renewable, and the ramp capability product is procured over 10 minutes in the day-ahead and real-time markets <cit.>. Similar to MISO, SPP has observed an accelerated growth of wind generation. Thus, a ramp capability product was launched in 2022 considering a 20-minute interval <cit.>.A voltage control-related AS is mandatory for synchronous, and all newly interconnecting non-synchronous (wind and solar), units, according to FERC Order 827 <cit.>. Additionally, large and small facilities connected to the transmission system should provide dynamic reactive power support, complying with the voltage ride-through capability requirement stated in FERC Order 828 <cit.>. Cost-based utilization, or capability (or both) payments, reimburse the costs of providers <cit.>. Black-start resources are procured through service agreements, being compensated for cost-based rates to recover incurred costs. Currently, there is no black start capability procurement in SPP. ERCOT applies a competitive biannual auction to define resources with the lowest costs <cit.>. To reimburse AS costs, a transmission tariff (open access transmission tariff; OATT) is applied to transmission customers in all US ISOs <cit.>.§ EUROPEAN WHOLESALE ELECTRICITY MARKETS In Europe, SOs are known as transmission system operators (TSOs), and unlike the US, are allowed to own transmission assets. European countries typically adopt a decentralized DAM with sequential optimization of energy and reserves[Italy and Spain co-optimize energy and reserves in DAM.] and zonal prices. These energy markets mainly rely on self-dispatch, where suppliers should communicate their operating plan to TSOs, and they can also decide the production of each unit. By sharing a greater responsibility in planning system operation with providers, TSOs tend to be less active than the ISOs of centralized DAMs, which implies a market based on financial exchanges <cit.>. Figure <ref> shows the organization of European wholesale electricity markets.Capacity mechanisms (strategic reserve or capacity payment) have been introduced in Europe to facilitate the integration of VRE <cit.>. In the day-ahead energy market, participants typically submit a simple price-quantity offer to a financial platform known as power exchange. Considering the available cross-border capacity, as informed by the TSOs, the EUPHEMIA algorithm clears the single day-ahead energy market by coupling several wholesale markets across Europe. The results are the electricity price and the net position for each bidding zone <cit.>. TSOs should reserve capacity beforehand through auctions covering different time frames (yearly, monthly, weekly, daily) in their national balancing capacity market from balancing service providers (BSPs). TSOs have the final responsibility to balance their control area in real time <cit.>. Market participants are also responsible for balancing the system, and are incentivized to self-balance. 
To do so, a market participant must be connected to a balancing responsible party (BRP), which is financially responsible for maintaining balanced portfolios. BRPs send the generation and load schedules to the TSOs for planning the operational day <cit.>. TSOs run a sequential optimization of energy and reserves to verify the feasibility of their dispatch <cit.>. If security constraints are violated, a redispatch is carried out <cit.>. Using updated information from the VRE forecast on the operating day, market participants can update their positions in the intraday market (IDM) by continually buying and selling energy (continuous trading). In real time, TSOs must balance unforeseen disturbances by activating balancing energy quantities previously secured in the balancing capacity market. BSPs can also offer their energy availability in the balancing energy market for real-time operation. The imbalance settlement determines the imbalance charge that BRPs must pay according to their deviation from the schedule <cit.>.The annual average electricity [%(GWh)] generation mix of several European countries is shown in Fig <ref> <cit.>. Renewables represent 18.2% of European generation (selected countries) and are complemented by conventional fossil fuels (mainly oil and gas) and hydroelectric generation. Countries such as Norway and Austria possess high hydroelectric generation, while France and Belgium have a significant share of nuclear energy. By contrast, Denmark, Germany, and Ireland have higher renewable generation. §.§ Toward an Internal Electricity Market Since the 1990s, the European Union has considered building a single wholesale electricity market. To achieve this goal, the European Commission created an agreement in 1996 to set standard rules for the internal electricity market <cit.>. Nevertheless, insufficient transmission capacity resulted in the creation of regional electricity markets. In these markets, TSOs have procured reserves at the national level, potentially overestimating the necessary amount and increasing costs. Integrating several European markets requires product standardization and specific trading platforms. The European Network of Transmission System Operators for Electricity (ENTSO-E) sets a uniform framework of balancing services, that is, primary, secondary, and tertiary control-related services, being aligned with balancing exchange platforms <cit.>, as follows:* Frequency Containment Reserve (FCR): This service is constantly activated (outside the deadband between 49.99 Hz to 50.01 Hz) to contain frequency deviations. A local and automatic response, typically up to 30 seconds, which should be sustained for 15 minutes, is requested to stabilize the frequency fluctuation <cit.>.* Frequency Restoration Reserve (FRR): Two reserves comprise the FRR product: (i) automatically activated FRR (aFRR) is a continuous reserve, while (ii)semi-automatic or manual FRR (mFRR) is a discrete reserve. These reserves should be available from 30 seconds to 15 minutes, and conventionally maintained for hours. aFRR aims to replace FCR and restore the frequency to its nominal value <cit.>. A European-wide sizing and procurement of aFRR and mFRR are facilitated by the balancing platforms PICASSO (Platform for the International Coordination of Automated Frequency Restoration and Stable System Operation) and MARI (Manually Activated Reserves Initiative) <cit.>. * Replacement Reserve (RR): This service is semi-automatic or manually activated within 15 minutes or more. 
The TERRE (Trans European Replacement Reserves Exchange) platform integrates the European RR markets. Note that the procurement of RR is optional under the target model defined by ENTSO-E <cit.>. * Imbalance Netting (IN): To avoid simultaneous activation of aFRR in opposite directions, TSOs are also responsible for maintaining an area control error close to zero, and correcting the input of aFRR accordingly. To this end, ENTSO-E implemented a platform called IGCC (International Grid Control Cooperation) for TSOs to exchange their real-time imbalances <cit.>.§.§ AS Markets in Europe AS markets in Europe show many similarities but also some important differences. We analyze the AS markets in different European regional electricity markets, selecting power systems with high shares of renewables. The main findings are summarized for comparison in Table <ref>.§.§.§ Great Britain Great Britain's (England, Scotland, and Wales) power system is an island power system interconnected by HVDC links to other countries, which exacerbates the negative effects of low inertia levels. Great Britain's TSO, National Grid ESO, was the first SO to determine grid code requirements for GFM inverters <cit.>, which opens a path for VIR and system strength service provision. The AS markets are evolving from an intricate framework, with an overlap of services, toward a simpler and more rational design, phasing out some frequency-related services <cit.>. National Grid ESO introduced three new ASs, comprising a new suite of services designed for continuously tracking system frequency variations and replacing the existing dynamic firm frequency response and enhanced frequency response services. Dynamic containment is a fast-acting (1 second) post-fault response to manage higher RoCoF after a disturbance, associated with SGs being displaced by VRE generation. In contrast, dynamic regulation and dynamic moderation are pre-fault services, that is, they aim to correct the system frequency before it moves outside the operational limit specified for the service. The former is a continuous response to stabilize small and continuous deviations in the operational frequency range <cit.>. The latter is an additional fast response to manage larger imbalances and arrest the system frequency, responding within 1 second <cit.>. Both dynamic containment/moderation services are well-suited for BESS, whereas dynamic regulation accommodates the capabilities of traditional suppliers <cit.>. The static firm frequency response service is provisionally renamed static response and comprises load shedding when a target frequency setpoint is reached <cit.>. To supply static and dynamic services, providers can submit offers in a single clearing price day-ahead auction. A quick reserve service has been designed as a pre-fault, bi-directional, and manually activated service within one minute after TSO instruction, to follow frequency deviations during normal conditions. Slow reserve has been designed as a post-fault, bi-directional, and manually activated service within 15 minutes after TSO instruction, to restore the frequency after large imbalances <cit.>. Quick and slow reserves are intended to replace the short-term operating reserve and fast reserve services in the coming years <cit.>. Integration into the TERRE platform was envisioned before Brexit, but National Grid ESO officially left the project in December 2022, leading to uncertainties about the future of a replacement reserve service in Great Britain <cit.>.
Voltage control is a mandatory service compensated by a utilization payment <cit.>. Providers capable of supplying an additional reactive power response can offer this capability through a tender arrangement <cit.>. In 2020, National Grid ESO introduced a competitive pay-as-bid process for procuring black start resources. Availability and other minor payments compensate providers <cit.>. The operating costs of the ASs are recovered from a system charge applied to BRPs.§.§.§ Ireland/Northern Ireland Similar to Great Britain, Ireland is an island power system with non-synchronous interconnections to its neighbors, and newly designed ASs facilitate the integration of high shares of wind (and solar) power. EirGrid and SONI are the first SOs to introduce a service related to the provision of synchronous inertia capability. Synchronous inertial response aims to incentivize SGs to reduce their stable minimum power output, enabling the dispatch of other units to ensure a minimum inertia level <cit.>. The service is procured on a regular tender process, and providers receive a payment that considers the available rotational energy and minimum generating level of the unit <cit.>. Fast frequency response is procured from resources capable of responding within two seconds to contain the frequency decay, but with financial incentives to provide a faster response.Primary, secondary, and tertiary operating reserves and replacement reserves (synchronized and desynchronized) are the existing services for frequency control <cit.>. Three ramping services schedule available ramping capability over 1, 3, and 8 hours to manage uncertainties associated with VRE forecasts. Unlike similarly named services introduced in CAISO and MISO, the ramping margin created by EirGrid and SONI relies on a long-term schedule and focuses only on upward movement, given that generators can be conveniently requested to switch offline or VRE can be curtailed if the available VRE generation greatly exceeds the forecast <cit.>. Steady-state reactive power is the current AS for voltage control under normal conditions. Since significant shares of VRE connected to the distribution system are displacing SG units, a reduction in reactive power capability is noticed. Particularly, geographic locations, far from consumer centers, and thus, with low demand and weaker networks, have experienced an increase in magnitude and frequency of occurrence of low voltage deviations in transmission buses <cit.>. In addition, the scarcity of dynamic reactive power capability, associated with the reduction of synchronizing torque due to fewer online synchronous units, is anticipated at very high (+70%) instantaneous non-synchronous shares. Dynamic reactive response is designed as a new AS to increase the transient reactive power response and mitigate the angular instability using different resources, for example, synchronous condensers, wind turbines, and STATCOMs <cit.>. Reduced fast dynamic reactive power support can lead to a voltage instability condition cascading into frequency instability, the so-called voltage-dip-induced frequency dip phenomenon <cit.>. Consequently, fast post-fault active power recovery is also a new AS designed to incentivize faster and sustained active power recovery of wind power plants after a fault on the system <cit.>. Black start resources are procured through bilateral contracts, or a tender, and compensated by availability payments and other additional costs in EirGrid <cit.>. 
In SONI, black start capability is mandatory and remunerated by a regulated payment. The generic payment structure for individual services in EirGrid/SONI is the product of the available volume, a regulated fixed tariff, and a scalar. The latter includes various multipliers to reward system-friendly providers and penalize less beneficial participants, depending on certain capabilities. For example, resources eligible to provide fast frequency response that can respond very quickly, such as BESS responding within 0.15 seconds, are paid (much) more. Participants receive higher payments if they provide services under scarcity conditions (high VRE levels), in certain locations, or if they can sustain their response across consecutive reserve categories. In the case of non-delivery, payments are reduced <cit.>. In EirGrid/SONI, the costs of balancing services are recovered from consumers through a tariff.§.§.§ Germany The energy transition in Germany is supported by high subsidies for solar and wind sources, which allow individuals and private cooperatives to be self-producers, decentralizing generation <cit.>. Additionally, the country phased out its nuclear power plants and designed a competitive tender process to compensate coal-fired power plant owners that deactivate their units <cit.>. Despite the increasing shares of VRE and the phase-out of SGs, Germany does not experience significant stability issues. Instantaneous reserve, that is, an inertia emulation service, was discussed by Dena (the German Energy Agency) to overcome possible problems arising from low inertia levels under the “Ancillary Services Study 2030” <cit.>. However, investments in transmission expansion and the country's favorable location in the center of Europe, which provides strong AC interconnections, have lessened some system needs observed in less well-interconnected regions. A sensitive system need observed in Germany is implementing congestion management to avoid bottlenecks at all voltage levels. The country is moving from a cost-based redispatch to a market-based approach, aiming to adequately remunerate flexibility from the demand side <cit.>. Currently, German TSOs maintain organized markets to procure frequency containment reserves (primary control reserve) and automatic and manual frequency restoration reserves (secondary control reserve and minute reserve, respectively). Replacement reserve is not procured in Germany. Voltage control is mandatory and only paid if agreed upon through a bilateral contract <cit.>. Eligible black start resources receive a fixed annual payment aiming to recover costs. A tariff applied to transmission consumers reimburses operating costs.§.§.§ Nordic Power System The Nordic synchronous system (Eastern Denmark, Sweden, Finland, and Norway) is marked by high shares of hydropower, accounting for 54.5% of the generation, while other renewables represent 15.5%. Asynchronous links interconnect the Nordic power system with the Continental and Baltic synchronous areas[The Vyborg HVDC link, which connects Finland and Russia, ended its operation after the beginning of the Ukraine war in February 2022 <cit.>.]. Two frequency containment reserve (FCR) products are available in the Nordic synchronous system. FCR-N is constantly maintained within the normal frequency band, while FCR-D is dimensioned to withstand disturbances when the steady-state frequency deviation exceeds 0.5 Hz <cit.>.
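The FCR products just described are essentially proportional (droop) responses to the measured frequency deviation: FCR-N acts linearly inside the normal band, while FCR-D ramps in once the deviation leaves that band. A minimal sketch of this activation logic follows; the band limits and reserve volumes are illustrative assumptions rather than the exact Nordic dimensioning values.

def fcr_activation(freq_hz, f_nom=50.0,
                   fcr_n_mw=600.0, fcr_n_band=0.1,
                   fcr_d_mw=1450.0, fcr_d_start=0.1, fcr_d_full=0.5):
    # Stylized FCR activation: FCR-N responds linearly (droop) inside the normal band,
    # FCR-D (upward) ramps in once the under-frequency exceeds the normal band.
    dev = freq_hz - f_nom                      # negative deviation = under-frequency
    n_share = max(-1.0, min(1.0, -dev / fcr_n_band))
    fcr_n = n_share * fcr_n_mw
    under = max(0.0, -dev)
    d_share = min(1.0, max(0.0, (under - fcr_d_start) / (fcr_d_full - fcr_d_start)))
    fcr_d = d_share * fcr_d_mw
    return fcr_n, fcr_d

for f in (49.97, 49.90, 49.70, 49.50):
    n_mw, d_mw = fcr_activation(f)
    print(f"f = {f:.2f} Hz -> FCR-N = {n_mw:6.0f} MW, FCR-D (up) = {d_mw:6.0f} MW")

Running the sketch shows FCR-N saturating at the edge of the normal band and FCR-D taking over for deeper excursions, which is the division of labor described above.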
Critical inertia levels often arise in summer at night when consumption is low and wind production is high. Fast frequency reserve is procured to complement the FCR-D product in case of a low-inertia event to reduce the maximum frequency deviation <cit.>. IBRs and loads capable of fast response, within one second, are traded in single clearing price capacity auctions. TSOs acquire automatic frequency restoration reserves in a regional balancing capacity market, forecasting imbalances in the bidding zones and available transmission to dimension the amount of reserve <cit.>. A regional balancing capacity market for manual frequency restoration reserves is under design. Instead of manually activating the reserve capacity, each TSO will determine the demand for reserve in their bidding zone, according to forecasted imbalances, thereby enabling a central optimization algorithm for offer selection <cit.>. Notice that TSOs in the Nordic power system currently do not acquire replacement reserves. Voltage control is a mandatory AS, and bilateral agreements define the remuneration of suppliers. If sufficient offers are submitted, a competitive tender procures black start resources in Denmark; otherwise, bilateral contracts are established <cit.>. As in Sweden, the operating costs can be shared between consumers and BRPs.§ UNITED STATES VERSUS EUROPEAN ANCILLARY SERVICES MARKET DESIGN The different market design choices of the US and Europe impact the definition of standard ASs in each region. Figure <ref> compares the US (red bars) and European (blue bars) frequency-related products regarding their time frames. The requirements for primary frequency control are satisfied by primary frequency response and frequency containment reserve in the US and Europe, respectively. Under normal operating conditions, secondary frequency control (SFC) is performed by frequency regulation in US, and automatic frequency restoration reserve (aFRR) in Europe. Under contingency conditions, US defines spinning and non-spinning reserves as the standard products to perform SFC, while the European framework considers the deployment of aFRR (SFC) followed by manual frequency restoration reserve, which is related to tertiary frequency control (TFC). Both designs consider replacement reserves as an additional reserve associated with TFC to replenish reserve levels. Under high VRE shares, the demand for reserves tends to increase to mitigate forecast errors <cit.>. However, the German experience shows the opposite. Adequate market design and other factors have so far been sufficient to compensate for the increased variability and uncertainty of wind and solar generation, as highlighted in <cit.>. Higher temporal granularity in the intraday market (15-minute interval) allows participants to continuously update their positions, reducing the need for balancing reserves in real time. The European balancing platforms enable exchange of the procured reserves among TSOs, optimizing the management of portfolios. Additionally, reduced security margins, infrequent outages of generators, and improved forecasting tools are also factors that have contributed to diminishing the need for balancing reserves. In ERCOT, market design changes also have reduced the procurement of frequency regulation capability despite the increasing share of wind generation. When ERCOT moved from a zonal to a nodal market, the portfolio-based dispatch was replaced by a unit-specific dispatch, which ensures more detailed control of generation. 
By shortening the dispatch interval, from 15 to 5 minutes, the small imbalance uncertainties between dispatch intervals were reduced, diminishing the need for frequency regulation reserves <cit.>. Also, wind power plants are much faster responding than conventional generation, and thus, much tighter control can be achieved.§ MODERN ASS MARKET Dominant VRE participation does not imply changes in physics or economics concepts, but such considerations must be respected to ensure an efficient (secure and at least-cost) power system operation. Nevertheless, new system needs are pushing AS needs into the center of electricity market design discussions. Rather than being labeled as auxiliary, services are instead essential toward a 100% non-fossil future transition. The main technical and market design gaps are now highlighted, pointing out potential solutions or research needs to orient academic researchers and industry practitioners. §.§ Technical Challenges New system needs imposed by the displacement of synchronous resources and high VRE levels reveal frequency and voltage control shortfalls that must be mitigated. Grid-forming inverters are a promising technology, but not yet considered mature for bulk power systems. Additionally, in order to unlock the hidden potential of DERs, better prediction of future installed capacity and dynamic load modeling are needed.§.§.§ Power System Stability Concerns Two major topics are selected here: namely, frequency and voltage control issues. The main challenge for frequency stability is to contain the higher RoCoF and frequency deviation resulting from an inverter-dominant power system. For the voltage stability case, DERs should contribute to solving problems which can help avoid the need for additional grid infrastructure.Frequency Control IssuesThe proliferation of IBRs and modern loads interfaced by power electronic inverters within AC power systems might allow greater tolerance regarding the frequency variation range in the future. Nonetheless, current practice is to keep the system frequency within a narrow band around the nominal value. To tackle the reduction of system inertia, SOs could adopt modern inertia monitoring methods using data from PMU measurements, rather than dispatch-based estimation, to capture the time-varying demand-side inertia contribution <cit.>. Under high shares of variable generation, quantifying regional inertia is particularly important to guarantee sufficient ability for a potential island grid formed after a transmission line tripping to support frequency control <cit.>. Thus, more accurate estimation is helpful in determining inertia-related thresholds, enhancing coordination with inertial and fast response reserves. To transport higher volumes of VRE generation over long distances, the transfer capability of HVDC links tends to rise, increasing the size of the largest contingency, and worsening the higher RoCoF and lower maximum frequency deviation problems, if system inertia levels are reduced. Therefore, a greater volume of RoCoF and frequency-deviation-based reserves should be procured by SOs for frequency containment. Since reserve provision tends to shift from conventional synchronous toward inverter-based resources, and currently, virtual inertia capability is not widespread, it follows that under very high instantaneous VRE shares, limiting HVDC imports and curtailing VRE generation could help to ensure acceptable frequency deviations, but at an operational cost <cit.>. 
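Because the initial RoCoF after an infeed loss scales with the size of the contingency and inversely with the stored kinetic energy, a back-of-the-envelope check of this kind often underlies inertia-related thresholds. The sketch below uses the standard swing-equation approximation; the inertia constants, unit ratings, contingency size, and relay limit are illustrative assumptions.

F_NOM = 50.0  # Hz

def kinetic_energy(units):
    # Stored kinetic energy in MWs: sum of H_i [s] * S_i [MVA] over online units.
    return sum(h * s for h, s in units)

def initial_rocof(delta_p_mw, e_kin_mws, f_nom=F_NOM):
    # Swing-equation approximation of the system-average RoCoF (Hz/s)
    # immediately after losing delta_p_mw of infeed.
    return delta_p_mw * f_nom / (2.0 * e_kin_mws)

largest_contingency_mw = 1000.0                        # e.g. trip of a large HVDC import
full_fleet = [(6.0, 8000), (4.5, 6000), (3.0, 4000)]   # (H in s, S in MVA) per online SG
low_inertia_fleet = full_fleet[1:]                     # the largest SG displaced by VRE

for label, fleet in (("full fleet", full_fleet), ("low-inertia fleet", low_inertia_fleet)):
    e_kin = kinetic_energy(fleet)
    rocof = initial_rocof(largest_contingency_mw, e_kin)
    status = "within" if rocof <= 0.5 else "above"
    print(f"{label}: E_kin = {e_kin:6.0f} MWs, RoCoF = {rocof:.2f} Hz/s ({status} a 0.5 Hz/s relay limit)")

The contrast between the two fleets illustrates why displacing synchronous units, or enlarging the reference incident via bigger HVDC imports, pushes the system toward RoCoF-related limits.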
Voltage Control IssuesA reduction in online synchronous generation means that power systems will increasingly rely on capacitor banks, shunt reactors, FACTS, and IBRs to provide reactive power and maintain secure voltage stability margins <cit.>. Since the voltage should be controlled locally, the current location of some devices could result in insufficient voltage support in some locations, requiring optimal placement strategies. Additionally, IBRs have lower fault current contributions compared to SGs, which compromises the ability of protective systems to sense and clear faults promptly, making the fault propagate through the network and potentially ending in a cascading outage <cit.>. Most grid codes require that wind and solar plants remain connected and contribute with reactive power after a severe voltage drop. Ride-through capability requirements (voltage and frequency) can be further extended to DERs.A less centralized power system also encourages enhanced participation of DERs to actively regulate voltage. In the US, utilities and ISOs are gradually incorporating the requirements proposed in the IEEE 1547-2018 standard, updated to consider high DER shares and modern inverter capabilities (active control and ride-through functionality) <cit.>. For example, the gradual movement of clouds alters PV generation, resulting in higher voltage fluctuations, which can be mitigated by injecting or absorbing reactive power from modern inverters <cit.>. Also, during light load conditions and peak solar production, reverse power flow from the low-voltage feeder through the distribution substation can lead to voltage rise in the former <cit.>. To accommodate the installation of new devices, network reinforcement is necessary. Alternatively, investments can be deferred using modern inverters capable of absorbing reactive power. §.§.§ Grid-Forming Inverters Deployment at Scale Grid-forming (GFM) inverter solutions, driven by non-synchronous resources, could potentially assist power system stability and further provide low-carbon AS. The necessary requirements to extract GFM inverter benefits in bulk power systems still need to be defined in grid codes, but the technology is not widely available. To date, GFM inverter capabilities have been demonstrated mainly in microgrids and isolated systems, and grid code initiatives are limited, such as in National Grid ESO <cit.>. On the other hand, without clear capability specifications and market incentives, manufacturers could be discouraged from developing the technology. This circular problem is being addressed by testing GFM inverters at the transmission level through pilot projects, enabling identification of the main barriers to GFM adoption at scale in bulk power systems <cit.>.Inverters, both GFL and GFM technologies, are physically limited by the availability (wind and solar headroom) or the size (battery capacity) of an energy buffer to respond to fast active power variations after disturbances. Thus, a key issue is to determine the reserve level that GFM-based resources should maintain to preserve system stability. GFM-based solutions should avoid trying to fully replace SG capabilities, such as high fault current, which requires greater energy buffer capacity and increases overall costs. 
Instead, if SOs could quantify the benefits arising from GFM inverter connection to system stability, providers could be rewarded, which would accelerate technology development, allowing higher shares of non-synchronous resources <cit.>.In weak grids, GFM inverters could improve local voltage stability and support a minimum system strength to allow the connection of additional grid-following (GFL) inverters, which are cheaper and, in the future, could potentially be converted to GFM capability <cit.>. Also, GFM inverters can provide VIR and support frequency control. Distinct from GFL technology, GFM inverters are capable of black start. Nevertheless, how to coordinate GFM-based resources between different sites to create power islands that can restore the whole system is, to date, an open research question <cit.>. Another sensitive point is the interaction between different inverter technologies and SGs, which potentially introduces harmonics, new oscillation modes, and resonances, resulting in system instabilities that need further investigation <cit.>. §.§.§ Improving Distributed Energy Resources Visibility In order to access the benefits of integrating DERs in distribution systems, such as AS provision, these resources should be sufficiently visible for DSOs and SOs in planning studies and real-time operation. During the planning phase, DSOs should accurately estimate future DER capacity additions in the grid to avoid system security problems and increased costs. Improved DER capacity forecasts require collecting data from individual consumer characteristics, such as electricity consumption from electricity bills, suitability of rooftops from satellite data, etc. Using data-driven models (bottom-up modeling), DSOs can weight consumer characteristics against potential DER insertion, providing better visibility of future DER capacity. However, detailed consumer data may well not be readily available, requiring significant investments in data acquisition. Also, economic uncertainties, such as future capital costs and DER regulatory policies, and modeling uncertainties, introduced to simplify consumer dynamics, are inherent shortfalls that should be addressed carefully, potentially creating multiple solution scenarios <cit.>.New system needs, such as enhanced local voltage control and congestion management, arise due to increasing shares of DERs. SOs should consider a dynamic equivalent of emerging active distribution networks to analyze the effects of many hidden modern loads on system stability, and improve operation planning <cit.>. Currently, neither SOs nor specialized software companies have a sufficiently versatile tool to perform detailed transient stability simulations of modern loads <cit.>. The key issue is to provide a sufficiently accurate and flexible load model, capable of being generalized under different operating conditions and locations, with reasonable computational time to allow integration with SO tools. A potential solution, based on research findings, is developing a gray-box model, which considers the load composition and system dynamics measurement information <cit.>. §.§ Market Design Challenges Potential market design issues arising from high VRE shares can be aggregated as two fundamental goals of electricity markets: (i) efficient price signals, particularly real-time price signals, and (ii) competition. The most relevant issue regarding (i) is scarcity pricing. 
Supply shortages should be reflected in real-time prices, and propagated through long-term decisions to ensure reliability, since high shares of VRE are shifting the revenue streams from energy toward flexibility (ASs) and capacity. On the other hand, TSO-DSO coordination is the most important reform to be implemented regarding (ii). To explore flexibility from distribution-based resources and promote competition between large and small players, improved coordination between the TSO and DSO(s) is a crucial point. §.§.§ Improving Real-Time Pricing Signals In US, relevant issues to enhance price formation, such as multi-interval pricing and non-convexities, emerge from the need to align the optimal dispatch instructed by the ISO with the profit-maximizing objectives of flexible resources. European balancing market redesign could focus on co-optimizing reserves and improving locational and temporal pricing signals in real time. Although explicit scarcity prices are primarily addressed in US, the topic is also sensitive for price formation in Europe.Multi-Interval Dispatch and PricingThe need for increasing operation flexibility requires suitable consideration of the intertemporal constraints, such as ramping rates, start-up cost allocation, and ESS dynamics, including battery state-of-charge and hydrogen volume tank level, to avoid distorted price signals. Ramping constraints, for instance, tend to bind more often in real time under high VRE levels, which has led to the creation of a specific AS to reward the opportunity costs to dispatch out-of-merit units. However, ramping products, such as those proposed in CAISO and MISO, may not obtain the least-cost operation solution <cit.>. An alternative approach solves the real-time economic dispatch and pricing looking ahead to future time intervals through a rolling horizon. Using the projected power system conditions, ISOs can pre-position resources to manage forecasted binding ramping constraints <cit.>. NYISO and CAISO have implemented multi-interval pricing in their RTM, while in ERCOT, the approach remains in the proposal stage. Nevertheless, the advisory prices emerging from multi-interval pricing when a rolling horizon is considered cannot support the optimal economic dispatch because there is no financial or physical commitment after the first interval. By acting as rational profit-maximizing agents, flexible resources are encouraged to self-dispatch, deviating from ISO instructions. Therefore, side payments (or uplift payments) should be provided, even in a convex market, to preserve a consistent market outcome. In particular, dispatch supporting prices can be provided if a fixed horizon and perfect foresight are considered <cit.>. The lack of commitment to future prices could be fixed by a financially binding look ahead, which would be updated and modified at each successive interval. The financial commitment at each set of look-ahead adjustments preserves the dispatch incentives and avoids the need for uplift payments. Revenue Insufficiency due to Non-ConvexitiesFor the US ISOs, uplift payments are also needed in the presence of non-convexities arising from fixed costs, such as start-up and no-load costs, and minimum generating power constraints from the SCUC in order to make resources whole, since marginal pricing fails to recover non-convex costs. 
Since uplift payments suppress price signals, market transparency is undermined, which may lead to inefficient operating and investment decisions if a significant volume of payments are provided <cit.>. Variable generation imposes more frequent cycling (start-up and shut-down) of conventional generators to accommodate stochastic net load fluctuations, which can increase total uplift payments <cit.>. Introducing new ASs, such as synchronous inertial response, includes additional non-convexities in the unit commitment problem, since units may be committed out-of-merit exclusively to provide this service <cit.>. Also, a non-convex market for voltage support based on the solution of the AC power flow could efficiently remunerate providers, compared to cost-based methodologies, encouraging them to support voltage stability. Convex-hull pricing is theoretically the preferred solution to tackle non-convexities and reduce the lost opportunity cost, a certain type of uplift payment. However, solving the Lagrangian dual problem is currently computationally impractical for real power systems <cit.>. Alternatively, a computationally efficient primal implementation that closely approximates convex-hull prices is proposed in <cit.>. Also, another approximation, commonly called approximate extended LMP (aELMP), which relaxes the integrality of binary variables, is currently used in MISO and PJM. Transition to Co-Optimization of Energy and ReservesThe European DAM design considers separate markets and entities to procure electricity (power exchanges on the energy market) and reserve capacity (TSOs on the balancing capacity market) to sequentially clear energy and reserves. Sequential optimization leads to a sub-optimal dispatch of energy and reserve capacity, which prevents the best use of available resources. To avoid a potential lack of reserves in real-time operation, forward reservation capacity typically occurs well ahead of real-time electricity activation in Europe. The current approach implies that participants should infer their future imbalances, which is becoming increasingly difficult under high shares of VRE, to embed an estimation of their opportunity cost in their offer <cit.>. Inefficient allocation of reserves may result in a price reversal condition, whereby providers of low-quality reserves receive higher compensation than better-quality providers, resulting in disincentives for flexible resources <cit.>. Joint optimization of energy and reserves explicitly reflects the opportunity cost of holding back reserves instead of generating electricity, resulting in efficient price signals and avoiding the need for redispatch actions. However, several institutional obstacles make the transition to simultaneous optimization of energy and reserves a complex task in Europe. Some issues include integrating power exchanges and TSOs platforms, and the resulting impacts on EUPHEMIA performance. Also, the ongoing harmonization of balancing services across Europe is crucial <cit.>. Increasing Locational and Temporal Price GranularityAccurate real-time price signals are essential to reflect the uncertainties in real-time operation introduced by high VRE shares. Fundamentally, balancing energy markets manage energy deviations between day-ahead and real-time operation, considering day-ahead prices as a reference. Thus, price formation in Europe strongly relies on DAM, which harms the consistency with RTM. 
Price formation centered on real-time prices allows better use of the balancing energy exchanged in ongoing European platforms. Additionally, two possible refinements are increasing locational and temporal price granularity. Moving towards nodal pricing may optimize the use of flexible resources due to their improved visibility, thus avoiding redispatch actions or curtailment of VRE, and so reducing total operating costs. Moreover, manipulative bidding in market-based redispatch, so-called inc-dec gaming, which aggravates congestion, can be eliminated under the nodal pricing approach <cit.>. Nodal prices could arise from an RTM with co-optimization of energy and reserves <cit.>. Besides balancing product harmonization, electricity market governance should be discussed to avoid conflicts of interest in transmission line use <cit.>. Harmonization of the imbalance settlement period from 60 to 15 minutes is an ongoing directive in Europe, but it remains far from the 5-minute standard of US ISOs. Moving from continuous trading to more frequent auctions in the intraday market could encourage the participation of smaller providers that cannot invest in sophisticated trading mechanisms to improve the speed of trades <cit.>. Furthermore, shortening gate closure times allows participants to more accurately update their positions near real-time operation, potentially reducing the need for reserves. Explicit Scarcity Prices Considering the increasing levels of near-zero marginal cost generation, short-term price signals (electricity and reserves) should adequately induce efficient long-term investment decisions. If available capacity is tight, prices should rise to reflect supply scarcity. However, administrative price caps and weak demand participation require the introduction of an explicit scarcity component in short-term pricing <cit.>. An operating reserve demand curve (ORDC) is a mechanism to determine real-time reserve capacity prices, and the associated scarcity adder in real-time electricity prices, to optimally allocate capacity between electricity provision and reserve for system security <cit.>. The ORDC approach was first implemented in ERCOT and is now also adopted by PJM and MISO. In the US, all other ISOs are proposing reforms to include improved scarcity pricing <cit.>. The authors in <cit.> investigated the inclusion of the ORDC approach under the European market design due to the lack of an RTM for reserve capacity. The explicit scarcity prices provided by an ORDC incentivize operational flexibility from controllable loads, DERs, and fast-ramping resources to be available during shortage conditions, fostering long-term investments in new flexible capacity <cit.>. Important improvements in the ORDC modeling should be considered. The traditional static model based on historical information could evolve into a dynamic model based on updated information available from forecasting tools <cit.>. By coupling probabilistic forecast methods and stochastic programming models, multiple dispatch scenarios could indicate a more conservative reserve procurement <cit.>. Also, the ORDC approach could be expanded to address multiple reserves and locations. All the aforementioned improvements are hard to implement and face computational barriers <cit.>.
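In spirit, the ERCOT-style ORDC prices scarce reserves at the value of lost load scaled by the loss-of-load probability implied by the available reserve margin; the sketch below illustrates that construction. The VOLL, minimum contingency level, and the mean and standard deviation of the reserve-error distribution are illustrative assumptions.

from math import erf, sqrt

def lolp(reserve_mw, mu_mw, sigma_mw, min_contingency_mw=2000.0):
    # Probability that real-time reserves fall below the minimum contingency level,
    # assuming a normally distributed reserve-forecast error (illustrative).
    z = (reserve_mw - min_contingency_mw - mu_mw) / sigma_mw
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordc_adder(reserve_mw, voll=9000.0, energy_price=40.0, **kw):
    # Scarcity adder ($/MWh): (VOLL - energy price) scaled by the loss-of-load probability.
    return max(0.0, (voll - energy_price) * lolp(reserve_mw, **kw))

for r_mw in (6000, 4000, 3000, 2500):
    adder = ordc_adder(r_mw, mu_mw=0.0, sigma_mw=600.0)
    print(f"Available reserves {r_mw:>5} MW -> scarcity adder {adder:8.1f} $/MWh")

With ample reserves the adder is negligible, and it rises sharply as the margin approaches the minimum contingency level, which is exactly the enhanced price signal under scarcity described above.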
§.§.§ Removing Barriers for Competition Market arrangements should facilitate technology-agnostic provision and the entry of low-carbon AS suppliers in the market, enabling competition among resources with diverse cost structures and availability, irrespective of their voltage level. In the following, inefficient market arrangements, such as symmetrical offers and renewable subsidies, are first shown to prevent some participants from competing in the electricity markets. Afterward, better TSO-DSO coordination is discussed as an enabler for improving DER visibility and competition in electricity markets. Inefficient Market Arrangements In some US and European markets, offers for services, such as frequency regulation, require a symmetrical reserve capability in both upward and downward directions for providers to be accepted. Upward capability involves a greater opportunity cost than downward capability in most circumstances, except for low load and high renewable conditions. However, variable generation typically operates at its maximum power point without sufficient headroom for increased supply. Also, renewable subsidies, widely adopted to incentivize power system decarbonization, can further exacerbate the problem of higher opportunity costs of upward reserve supply from VRE, discouraging AS provision <cit.>. Thus, a single product for upward and downward frequency regulation inhibits wind and solar power plants from making offers to provide this service[Similarly, conventional generators operating at their minimum power cannot offer upward regulation if VRE levels increase suddenly.]. Separate upward and downward products would better reflect power system conditions, enabling more efficient use of resources <cit.>. A transition from energy-based (feed-in tariffs) toward capacity-based subsidies (through technology-specific auctions), which competitively support installed capacity rather than current energy production, can improve operational decisions <cit.>. Also, the introduction of an explicit market-based price for carbon emissions can provide investment signals for low-carbon sources, reducing the need for subsidies. Another common barrier for VRE and ESS is the minimum capacity required by SOs for service provision, which precludes individual DERs from accessing the market. Reducing the minimum capacity itself may be insufficient to incentivize individual small and dispersed resources to compete against large units. Nevertheless, if DER aggregation is allowed, competition is enhanced. In Europe, the imbalance pricing method is an additional barrier. TSOs have widely applied the dual pricing scheme for the financial settlement of imbalances. In this case, if a balancing responsible party (BRP) faces a negative individual imbalance (shortage), it must pay the balancing service costs plus a penalty. Penalizing negative imbalances incentivizes BRPs to over-contract reserves in the DAM to financially hedge against the risk of being short in real-time operation. Large market participants are favored since they can strategically manage their portfolio to settle imbalances, discouraging the participation of small players <cit.>. A transition to a single pricing scheme, in which negative and positive imbalances are settled at the same price without penalties, would incentivize flexible resources to balance the power system. Germany, the Netherlands, and Belgium have already implemented a single imbalance pricing scheme, while France and the Nordic power system plan to do so.
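To make the contrast between the two settlement schemes concrete, the sketch below settles a BRP deviation under a stylized dual-pricing rule (a shortage pays a marked-up balancing price, a surplus receives at most the day-ahead price) and under single pricing (both directions settled at the balancing energy price). The prices, the penalty mark-up, and the exact dual-pricing rule vary by country and are illustrative assumptions here.

def settle_dual(imbalance_mwh, balancing_price, day_ahead_price, penalty=10.0):
    # Dual pricing: a shortage (negative imbalance) pays the balancing price plus a penalty,
    # a surplus (positive imbalance) is paid no more than the day-ahead price.
    if imbalance_mwh < 0:
        return imbalance_mwh * (balancing_price + penalty)
    return imbalance_mwh * min(balancing_price, day_ahead_price)

def settle_single(imbalance_mwh, balancing_price):
    # Single pricing: both directions are settled at the balancing energy price.
    return imbalance_mwh * balancing_price

da_price, bal_price = 60.0, 90.0      # EUR/MWh; the system is short, so balancing is expensive
for imb in (-50.0, 50.0):             # BRP deviation in MWh over one settlement period
    dual = settle_dual(imb, bal_price, da_price)
    single = settle_single(imb, bal_price)
    print(f"Imbalance {imb:+6.1f} MWh -> dual: {dual:+9.1f} EUR, single: {single:+9.1f} EUR")

Under single pricing, a surplus that helps a short system earns the full balancing price, which is the incentive for flexible resources to balance the system mentioned above.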
Addressing the foregoing market design inefficiencies is an important step to making AS markets a viable revenue stream for low-carbon providers. Improved TSO-DSO Coordination Decentralization of the power system has transferred reserve capacity from the transmission to the distribution system. If reserve capacity at the transmission level is insufficient to preserve system security, SOs should procure reserves from the distribution level. To avoid misaligned actions, closer real-time coordination between DSO and TSO/ISO is essential to leverage a bottom-up AS provision. Individual or aggregated resources in the distribution system should alter their operating plans to offer reserve capacity for balancing the transmission system. Also, active management of distribution constraints by the DSO could impact transmission system balance <cit.>. Thus, fundamental issues are determining the roles and responsibilities of TSO and DSO, and who has precedence in using DER capability. The set of activities in which system operators should be involved depends on the TSO-DSO coordination model. Joint optimization of local (DSO) and common (TSO-DSO) AS markets, for instance, involves a close interaction between system operators, aiming to determine the least-cost solution to match transmission and distribution system needs <cit.>. In fully centralized models, the TSO has priority to use DER capability. In contrast, purely local AS markets are managed by a DSO, which has precedence in reserving capacity from dispersed resources. Alternatively, a decentralized common AS market defines the priority to use DER capability through the combined solution of the local and common markets. The local market is cleared first with distribution grid constraints, but without the commitment of units. Afterwards, the common market is cleared, considering the previous solution and the transmission constraints <cit.>. Including local AS markets reduces computational and information exchange complexity compared to a fully centralized model, while reaching near-optimal allocation <cit.>. § CONCLUSION As power systems transition to higher shares of VRE, new system needs are directing a review of existing AS suites toward a 100% non-fossil future. New frequency-related ASs have recently been defined to mitigate reduced levels of system inertia, ensure fast-acting reserves, and promote flexible ramping capability. Emerging services related to voltage control focus on maintaining system stability under contingency conditions. Although increasing participation of inverter-based resources is one of the roots of stability problems, they are evolving and are also an integral part of the solution. TSO-DSO coordination is fundamental to extracting DER flexibility, allowing competition with large players. The inclusion of explicit scarcity prices ensures efficient real-time prices and adequate long-term investment signals by addressing supply shortage needs. Particularly in the US, better allocation and pricing of ramp capability could be achieved by adopting multi-interval dispatch and pricing. Also, improving price formation to handle non-convexities could reduce the financial losses of more frequent start-up and shut-down operations. In Europe, joint optimization of energy and reserves would optimally allocate and price available resources.
Although a transition to nodal pricing and further refinement of temporal granularity could be challenging, the potential benefits of improving resource visibility, and thus, achieving more efficient price signals, should be considered.
http://arxiv.org/abs/2311.02090v1
{ "authors": [ "Luigi Viola", "Saeed Nordin", "Daniel Dotta", "Mohammad Reza Hesamzadeh", "Ross Baldick", "Damian Flynn" ], "categories": [ "physics.soc-ph", "cs.SY", "econ.GN", "eess.SY", "math.OC", "q-fin.EC" ], "primary_category": "physics.soc-ph", "published": "20231026225621", "title": "Ancillary Services in Power System Transition Toward a 100% Non-Fossil Future: Market Design Challenges in the United States and Europe" }
Boosting Data Analytics with Synthetic Volume Expansion
Xiaotong Shen ([email protected]) and Yifei Liu ([email protected]), School of Statistics, University of Minnesota, Twin Cities; Rex Shen ([email protected]), Department of Statistics, Stanford University
Synthetic data generation, a cornerstone of Generative Artificial Intelligence (GAI), signifies a paradigm shift in data science by addressing data scarcity and privacy while enabling unprecedented performance. As synthetic data gains prominence, questions arise concerning the accuracy of statistical methods when applied to synthetic data compared to raw data. This article introduces the Synthetic Data Generation for Analytics (Syn) framework. This framework employs statistical methods on high-fidelity synthetic data generated by advanced models such as tabular diffusion and Generative Pre-trained Transformer (GPT) models. These models, trained on raw data, are further enhanced with insights from pertinent studies through knowledge transfer. A significant discovery within this framework is the generational effect: the error of a statistical method on synthetic data initially diminishes with additional synthetic data but may eventually increase or plateau. This phenomenon, rooted in the complexities of replicating raw data distributions, highlights a “reflection point” — an optimal threshold in the size of synthetic data determined by specific error metrics. Through three case studies — sentiment analysis of texts, predictive modeling of structured data, and inference in tabular data — we demonstrate the effectiveness of this framework over traditional ones. We underline its potential to amplify various statistical methods, including gradient boosting for prediction and hypothesis testing, thereby underscoring the transformative potential of synthetic data generation in data science.
Keywords: Generative Machine Intelligence; Large Language Models; Knowledge Transfer; Pretrained Transformers; Tabular Diffusion; Unstructured
§ INTRODUCTION The advent of synthetic data generation, fueled by generative artificial intelligence (AI), has shifted data analytics towards a more synthetic data-centric approach. According to Gartner, 60% of the data utilized in AI and analytics projects will be synthetically generated by 2024, and synthetic data will surpass real data in AI models by 2030 <cit.>. This paradigm shift challenges the traditional practice, which exclusively relies on raw data. Synthetic data, mirroring real-world scenarios, presents a viable alternative to the challenges posed by data collection, sharing, and analysis within limited data environments. Synthetic data confers two primary advantages on data analytics <cit.>. First, it alleviates data scarcity and addresses privacy concerns <cit.>. When crafted to emulate the raw data distribution <cit.>, sharing synthetic data comes with minimal risk of exposing sensitive raw data. The significance of such data is accentuated in downstream or subsequent analyses, as exemplified by a COVID-19 study <cit.>.
Second, synthetic data enables training in real-world scenarios like autonomous driving <cit.>, negating the necessity for expensive experiments. Moreover, it proves invaluable for approximating the distributions of test statistics using Monte Carlo methods through repeated numerical experiments <cit.>. This paper introduces the Synthetic Data Generation for Analytics (Syn) framework, designed to bolster the precision of any statistical method using high-fidelity synthetic data that closely mirrors raw data, thereby harnessing the anticipated advantages. Generative models produce synthetic data by training on raw data, enriched with insights from related studies via knowledge transfer. The Syn framework employs an array of generative models suited for various domains: image diffusion <cit.>, text diffusion models <cit.>, text-to-image diffusion models <cit.>, time-series diffusion models <cit.>, spatio-temporal diffusion models <cit.>, and tabular diffusion models <cit.>. Moreover, Syn embraces advanced models such as the Reversible Generative Models <cit.>. These flow-based models capture the raw data distribution and can estimate both conditional and marginal distributions. Central to our Syn exploration is this pivotal issue: Can high-fidelity synthetic data enhance the efficacy of statistical methods solely reliant on raw data? If so, how may we implement such enhancements? Recent research offers diverging viewpoints on this issue. In some cases, synthetic X-ray images improve the accuracy of machine learning models <cit.>, whereas, in others, training on synthetic data may compromise performance for some machine learning models <cit.>. Synthetic data holds immense potential for enhancing data analytics. When generated accurately, this high-fidelity data can boost the accuracy of a statistical method by expanding the sample size of raw data. However, a significant caveat exists: low-fidelity synthetic data could yield unreliable outcomes. Often, a generational effect emerges, whereby as the size of synthetic data grows, the precision gain might diminish or even plateau. This phenomenon has been exemplified in our case study focusing on structured data prediction, as discussed in Section <ref>. This challenge arises from generation errors or discrepancies between the data-generation distributions of synthetic and raw data. Fundamentally, the generational effect underscores a key concern: regardless of the size of synthetic data, generation errors can compromise the accuracy of a statistical method. While evaluating predictive tasks is typically straightforward, hypothesis testing presents the challenge of regulating the Type-I error. To address this challenge and enhance the power of a test, we introduce “Syn-Test,” a test that augments the sample size of raw data by applying synthetic data. For clarity, “Syn-A” denotes Method A within the Syn framework throughout this article. Syn-Test determines the ideal size of synthetic data required to manage the empirical Type-I error while performing a test for finite inference samples using Monte Carlo methods. Our research indicates that the ideal size of synthetic data can heighten accuracy. Moreover, our theoretical investigation sheds light on the generational effect, precision, and the size of synthetic data. Additionally, we introduce a streamlined approach called Syn-Slm to improve Syn's usability in applications. This approach forgoes actual data generation when one knows the synthetic data distribution.
Using sentiment analysis as an illustration, we demonstrate that Syn-Slm is competitive with some alternatives under the Syn framework. To showcase the capabilities of the Syn framework, we delve into three key domains: sentiment analysis of texts, predictive modeling for structured data, and inference on tabular data. Across these domains, all statistical methods leveraging high-fidelity synthetic data surpass their counterparts employing raw data. The superior performance is due to high-fidelity synthetic data generated by diffusion models. Initially trained on raw data, these models are further improved by fine-tuning pre-trained models via knowledge transfer, resulting in enhanced statistical accuracy and larger sample sizes. In the first domain, we contrasted three models: OpenAI's Generative Pre-trained Transformer (GPT)-3.5 within the Syn framework, Distilling BERT (DistilBERT, <cit.>) in the Syn-Slm framework, and the Long Short-Term Memory network (LSTM) in the traditional framework for analyzing consumer reviews from the IMDB movie dataset. In this context, Syn's generative capability using GPT-3.5 significantly outperforms the LSTM approach. Yet, the Syn-Slm framework using DistilBERT, although trailing, demonstrated notable competitiveness compared to GPT-3.5. In the second domain, we introduced Syn-Boost, a version of CatBoost <cit.>—a gradient boosting algorithm <cit.>—trained on synthetic data. Syn-Boost bolsters the precision of CatBoost for both regression and classification tasks across eight real-world datasets, utilizing a refined tabular diffusion model <cit.>. Statistically, the error trajectory of Syn-Boost exhibits either a U-shaped or L-shaped pattern, determined by the generation errors and the volume of synthetic data used. Moreover, when employing the same knowledge transfer methods, Syn-Boost surpasses traditional feed-forward networks trained on raw data in six of eight cases. These observations highlight that the generative approach of Syn offers a predictive advantage even against a top predictive methodology given the same data input. In the third domain, we explore the pivotal role of significance tests in discerning feature relevance in regression and classification using CatBoost. This exploration within black-box models unveils largely uncharted territory. Recently, <cit.> introduced a nonparametric asymptotic test through sample splitting. To augment its statistical prowess, we employ the Syn-Test, capitalizing on pre-trained generative models and knowledge transfer, as demonstrated in two distinct scenarios. In one scenario, we leverage a pre-trained model to ensure smooth knowledge transfer from the male data to the female data while improving generation fidelity and test accuracy for female data, especially when male and female data distributions present distinct characteristics. These observations emphasize the importance of knowledge transfer in mitigating disparities in domains such as healthcare and social science, particularly when data for specific subgroups, like minorities, are limited. Moreover, we shed light on the “generational effect”, accentuating the invaluable interplay of synthetic data generation and knowledge transfer in hypothesis testing—a domain rarely explored in current literature. This article consists of six sections. Section <ref> explores Syn's role in enhancing the accuracy of statistical methods and emphasizes the importance of knowledge transfer in synthetic data generation.
Section <ref> discusses the streamlined approach Syn-Slm and its impact on generative techniques in statistical practice. Section <ref> provides illustrative examples, exploring the generational effect in predictive modeling and inference. Section <ref> focuses on the privacy issue of synthetic data. In Section <ref>, we discuss the implications of generating synthetic data for data science. The Appendix contains technical details. § ENHANCING STATISTICAL ACCURACY §.§ Synthetic Data The Syn framework empowers data analytics by applying statistical methods to a synthetic sample Z̃^(m) = (Z̃_i)_{i=1}^m. This sample is generated by a generative model trained on raw data Z^(n) = (Z_i)_{i=1}^n through fine-tuning a pre-trained model, leveraging insights from various similar studies. These models include GPT <cit.>, diffusion models <cit.>, normalizing flows <cit.>, and GANs <cit.>. For an in-depth understanding of the generation processes for diffusion models and flows, readers may refer to <cit.>. In this framework, the cumulative distribution function (CDF) F̃ of Z̃^(m) estimates the CDF F of Z^(n). To produce high-fidelity synthetic data, fine-tuning a pre-trained generative model is recommended, which involves transferring knowledge from previous studies. If pre-trained models are unsuitable, constructing a generative model from scratch is a viable, though less preferred, alternative. The quality of the generated data hinges on the choice of generative model and the effectiveness of knowledge transfer from similar studies. For an illustration, we detail the synthetic data generation process using a diffusion model in Figure <ref>. Furthermore, to demonstrate the importance of knowledge transfer, we fine-tune a pre-trained tabular diffusion model <cit.> on the Adult-Male dataset to apply this knowledge to the Adult-Female dataset in Section <ref>, where male and female distributions exhibit distinct differences. For a detailed explanation of the impact of knowledge transfer, readers may refer to Section <ref>. It is imperative to underscore the pivotal role of knowledge encapsulated in pre-trained generative models for improving generation accuracy through fine-tuning. Nevertheless, directly accessing the pre-training data for pre-trained models is frequently infeasible, impeded by privacy concerns, extensive storage needs, and data inconsistencies <cit.>, as seen in models like GPT-4. Furthermore, the distributions of pre-training datasets, especially from different sources, might not always mirror that of the raw data, as illustrated by the Adult-Male and Adult-Female examples in Section <ref>. Considering these challenges and the nuances of real-world scenarios, we omit pre-training data from our raw data composition throughout this article. To yield F̃ directly, one can utilize reversible generative models such as normalizing flows and Roundtrip GAN <cit.>, acting as a nonparametric estimate of F. For other generative models, such as diffusion models and GPT, one can typically obtain F̃ from synthetic data employing Monte Carlo methods, which we elaborate on in Section <ref>. In the numerical examples presented in this paper, we utilize a tabular diffusion model (TDM) <cit.> and GPT for synthetic data generation. Subsequently, we explore the precision advantages offered by the Syn framework. §.§ Optimal Synthetic Size for Estimation and Prediction In estimation and prediction, leveraging a statistical method on a synthetic sample gives rise to an estimate, denoted as θ̂(Z̃^(m)), of a parameter vector θ.
The effectiveness of this method gauges through a specific error metric (θ(^(m)))= L(θ(^(m)), θ), which would theoretically improve the statistical accuracy of θ(^(m)) with an infinite amount of synthetic data, provided F̃ perfectly replicates F. Here,symbolizes the expectation under F, while L(·,·) represents a loss function quantifying the discrepancies between θ and θ. However, numerical insights from Section <ref> reveal the existence of a reflection point, denoted as m_0=min_m ≥ 1(θ(^(m))), which delineates a relationship between the synthetic sample size m and the accuracy augmentation for this method. This point m_0 is governed by the generation error measured by metrics such as the total variation distance between F̃ and F, defined as TV(F̃, F)= sup_B|P_F(B)-P_F̃(B)|, where P_F and P_F̃ are probabilities measures induced by F and F̃ and B is any event.To estimate m_0, we optimize its empirical risk measure across m on an independent cross-validation sample from the original resources, which yields an optimizer m̂ as an estimate of m_0. For instance, in scenarios where the risk measure is the generalization error in binary classification, its empirical risk is the test error on a test dataset obtained from the original resources, approximating a classifier's generalizability.To investigate the theoretical aspects of the generational effect concerning the accuracy of θ(^(m)), we clarify the notation: Let ^(m)=(θ(^(m))) - (θ(^(m))) represent the discrepancy between the synthetic and raw errors for a sample size m, where (θ(^(m)))= L(θ(^(m))) denotes the risk incurred when employing a raw sample ^(m) of size m. Suppose (θ(^(m))) can be expressed as C_θ m^-α for some constantα > 0. Moreover, assume that ^(m)≥ m LTV if m ≤ m^* and ^(m)≥^(m^*) if m > m^* for some finite index m^*, where LTV is the learning error induced by the generation error. Then, the minimum of (θ(^(m))) occurs at a finite m_0 provided that LTV is larger than(θ(^(m_0)))/ m^*. As stated in Theorem <ref>, a sizable generation error leads to the synthetic risk (θ(^(m))) minimizing at a finite m_0, after which the synthetic error starts to deteriorate. This result suggests that augmented synthetic sample size may not enhance estimation or prediction given substantial generation error, a phenomenon further exemplified in Section <ref>. Conversely, with a small generation error, the synthetic risk remains controlled, and the optimal m_0 tends to be significantly large or infinite, as substantiated by Theorem <ref>. Next, we establish a bound on the synthetic risk to offer insight into the conditions under which accuracy improvements may arise. Assume that ^(n) is independently and identically distributed according to F while ^(m) is independently and identically distributed according to a conditional distribution F̃≡ F_|^(n) given Z^(n).Let L be a nonnegative loss function upper-bounded by U>0. For any m ≥ 1,(θ(^(m))) ≤(θ(^(m))) + 2 U m TV(F̃,F).Moreover, if (θ(^(m))) = C_θ m^-α, then(θ(^(m_0))) ≤(θ(^(s_0))) ≤(α^-α/1 + α + α^1/1 + α) (2U)^α/1 + α C_θ^1/1 + α·TV( F̃,F) ^α/1 + α,when s_0 =(2U/α C_θ )^α/1 + α·TV(F̃,F)^-1/1 + α. Hence, (θ(^(m_0))) ≤(θ(^(n))) achieves an accuracy gain under Syn whenthe total variation TV(F̃,F) is sufficiently small. For example, this occurs when TV(F̃,F) ≤ C_θ (2U)^-1( α^-α/1+α + α^1/1 + α)^-1 + α/α· n ^-(1 + α).Theorem <ref> posits that training a method trained on synthetic data can lead to an accuracy gain, provided that the generation error that governs the synthetic risk (θ(^(m))) is small. 
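In practice, the reflection point can be located by the cross-validated sweep described earlier in this subsection. A minimal sketch follows, in which a Gaussian mixture again stands in for the fitted generative model and the grid of synthetic sizes is an illustrative choice rather than the authors' exact configuration.

```python
# Sketch: estimate the reflection point m_0 by sweeping the synthetic size m and
# evaluating the empirical risk of the downstream method on a held-out validation
# sample drawn from the original resources.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

def make_raw(n):                                   # toy data-generating process
    X = rng.uniform(size=(n, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
    return np.column_stack([X, y])

raw, val = make_raw(300), make_raw(200)            # raw training and validation samples
generator = GaussianMixture(n_components=5, random_state=0).fit(raw)

risks = {}
for ratio in range(1, 21):                         # candidate synthetic-to-raw ratios
    m = ratio * len(raw)
    syn, _ = generator.sample(m)
    model = Ridge().fit(syn[:, :3], syn[:, 3])     # statistical method trained on synthetic data
    risks[m] = mean_squared_error(val[:, 3], model.predict(val[:, :3]))

m_hat = min(risks, key=risks.get)                  # empirical minimiser, an estimate of m_0
print("estimated reflection point m_hat =", m_hat)
```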
Hence, high-fidelity data can mitigate the synthetic risk in a method, thereby enabling a large optimal synthetic size m_0 for further amplifying the precision.The existing literature provides insights into the magnitude of generation error as represented by TV(F̃, F). For instance, Theorem 5.1 in <cit.> specifies bounds for a diffusion model, given the data-generating distribution is a member of a Besov space. It's worth noting that the boundary defined in <cit.> pertains to TV(F̃, F), withsymbolizing the expectation relative to F. However, one may extend this to the in-probability convergence rate for the random quantity TV(F̃, F) by leveraging Markov's inequality, which provides the convergence rates for TV(F̃, F) based on the raw sample size n and/or the pre-training sample size.§.§ Optimal Synthetic Size for Hypothesis TestingWe now introduce Syn-Test, a novel inference tool using high-fidelity synthetic data to boost any test's power by expandingthe sample size of raw data. Syn-Test yields two distinct advantages. First, it employs synthetic data to gauge the null distribution of any test statistic by Monte Carlo methods as in the bootstrap approach <cit.>, circumventing analytical derivations. This methodology proves particularly powerful for unstructured data inferences, including texts and images <cit.>. Second, Syn-Test identifies the optimal synthetic data size, optimizing a test's power while maintaining a suitable control of Type-I errors. For illustration, we refer to Section <ref>. Given a raw sample, Syn-Test employs two nearly equal-sized subsamples 𝒮_1 and 𝒮_2, partitioned from a training sample for fine-tuning a pre-trained generative model. It also uses a separate inference sample 𝒮_3 of size n for validating model training. One generative model generates synthetic data using 𝒮_1 for null distribution estimation, while the other uses 𝒮_2 for computing the test statistic. Figure <ref> illustrates the splitting scheme and Syn-Test process. Syn-Test also empirically determines the optimal synthetic data size m_0 to control the Type-I error. By swapping the roles of 𝒮_1 and 𝒮_2, Syn-Test can de-randomize the partition, transitioning the original inference sample size n to a synthetic inference sample size of m. This swapping mechanism proves especially advantageous in scenarios with low generation error. Crucially, abundant synthetic data can enhance the size of the raw data,even when sample splitting results in a reduced inference sample size <cit.>. Syn-Test encompasses four steps, using a significance levelα, a tolerance error ε,and a Monte Carlo sizeD. Syn-Test goes as follows:Step 1: Controlling Type-I Error. Generate D distinct synthetic samples (T(_1^(m,d)))_d=1^D of size m by refining a pre-trained generative model with 𝒮_1 under H_0. Compute the empirical distribution of the test statistic T using (T(_1^(m,d)))_d=1^D. Define a rejection region C_m at a significance level where α>0.Step 2: Optimizing Synthetic Size through Tuning. Execute Step 1, but use 𝒮_2 instead of 𝒮_1 to produce ^(m,d)_2. Utilize the empirical distribution from (T(_2^(m,d)))_d=1^D to find the empirical Type-Ierror, denoted P̃(C_m), for the C_m created in Step 1.To effectively control the Type-I error, we propose two distinct strategies for identifying m̂:an aggressive and a conservative approach. The aggressive approach selects the largest m that maintains the estimated Type-I error within the desired limit. 
In contrast, the conservative one chooses the smallest m about failing to control the estimated Type-I error. Mathematically,* Aggressive: m̂ =max{m: P̃(C_m) ≤α + ε}.* Conservative: m̂ = min{m: P̃(C_m) ≤α + εand P̃(C_m + 1) > α + ε}. In practice, we recommend adopting a conservative approach, as it more effectively manages Type-I errors, although it may beless powerful compared to a more aggressive strategy. Step 3: Calculating the P-value. With the determined m̂, produce synthetic data _1^(m̂) by fine-tuning thepre-trained generative model using 𝒮_1. Calculate the test statistic for T(_1^(m̂)) and determine the P-value, P^1, leveraging the null CDF based on 𝒮_2. Step 4: Combining the P-values.Repeat Step 3 by interchanging the roles of 𝒮_1 and 𝒮_2to compute the P-value P^2. Combine P-values via Hommel's weighted average <cit.>:P̅ = min(C min_1 ≤ q ≤ 22/q P^(q),1),where C=∑_q=1^2 1/q=3/2 and P^(q) is the q-th order statistic of {P^1,P^2}. Hommel's method excels in controlling the Type-I error relative to many of its peers, ensuring that ℙ(P̅≤α) ≤α under H_0. While effective, there are also alternative strategies such as the Cauchy combination <cit.>. To expedite the search of an estimated m_0 m̂, we may consider techniques such as Bisection <cit.> or Fibonacci Search <cit.>. Assume that ^(m, d) is an i.i.d. sample of size m following F̃=F_|^(n) given ^(n). Let _T and F_T be the synthetic and raw distributions of T calculated on a sample of size m. Then,the estimation error of the null distribution is governed by the Monte Carlo error and the generation error:sup_xF_T̃(x) - F_T(x)≤√(log2/δ/2D) +TV(^(m), ^(m)).As a result, Syn-Test offers a valid test as long as TV(^(m), ^(m)) = m ·TV(F̃, F)→ 0 in probability and D →∞. Moreover, let the power function ϕ_m, α be P(T(^(m)) ∈ R_α | H_a) for rejection region R_α at significance level α, and ϕ̃_m, α analogously with ^(m). If for some m > n, Δ = ϕ_m, α - ϕ_n, α > 0, then ϕ̃_m, α > ϕ_n, α when TV(F̃, F) < Δ / m, indicating that Syn-Test enhances power if the generation error is small.Syn-Test enables valid inference without requiring many model assumptions, specific datadistributions, and an infinitely large inference sample.Instead, its validity and power hinge on the generation error, which is generally satisfied when using generative models trained on adequately large datasets.Syn-Test is adept at debiasing test statistics. In this context, the bias estimation is achieved through a Monte Carlo (MC) approach, utilizing synthetic samples generated by a refined generative model. This estimated bias is then subtracted from the estimated null distribution, leading to a debiased version of the test statistic. The effectiveness of this aspect of Syn-Test is demonstrated through numerical examples, as detailed in Section <ref>. §.§ Generative Model and Knowledge Transfer Knowledge transfer elevates generation accuracy by infusing task-specific generative models with pre-trained knowledge from relevant studies. From the perspective of dimension reduction, we dissect knowledge transfer in two scenarios. In the first situation, consider a generative model g_θ parametrized by θ. Originally trained on an extensive dataset for a generation task, the model undergoes subsequent fine-tuning on a smaller but similar dataset to account for distribution shift, resulting in model g_θ^', where the architecture remains consistent across both models, with the essence of knowledge transmission occurring via the transition from θ to θ^' amid the fine-tuning. 
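A minimal sketch of this first scenario is given below: the generative model keeps its architecture, and only its parameters move from θ to θ' during fine-tuning on the smaller dataset. The tiny network, training objective, and optimiser settings are illustrative assumptions only.

```python
# Sketch of the first knowledge-transfer scenario: the pre-trained generator g_theta
# keeps its architecture and is fine-tuned on a smaller, related dataset, moving its
# parameters from theta to theta'. The network and objective below are placeholders.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8))
# generator.load_state_dict(torch.load("pretrained_generator.pt"))  # hypothetical checkpoint

small_data = torch.rand(200, 8)                     # smaller raw dataset from the new study
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)   # small learning rate for fine-tuning

for epoch in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(generator(small_data), small_data)  # placeholder training loss
    loss.backward()
    optimizer.step()                                # theta -> theta' through fine-tuning
```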
In the other situation, a robust pre-trained model undergoes training across multiple tasks, characterized as (f_1, …, f_t) ∘ h. Here, f_i defines the output function tied to the i-th task, h is the shared representation function, and ∘ denotes functional composition. Given a learned representation h, one only fine-tune f_0 during its optimization phase for f_0 <cit.>. As the generative model refines its precursor, it absorbs the precisely calibrated representation h. This knowledge transfer thus can augment the generation precision through fine-tuning with a heightened accuracy of the learned f_0, facilitating its dimension reduction. It is pivotal to acknowledge that within this configuration, f_0 ∘ h and f_i ∘ h; i = 1, …, t, only share the same architecture in h. An alternate strategy entails concurrent fine-tuning off_0 and h to derive a representation explicitly forf_0. The Syn framework capitalizes on knowledge transfer to bolster its overall efficacy, streamlining the synthetic data generation process. In Section <ref>, we illustrate knowledge transfer in generative models using a pre-trained model based on adult male data <cit.>, subsequently fine-tuned with adult female data for downstream analysis. As demonstrated in Figures<ref>, <ref>, and Table <ref>, the fine-tuned model adeptly captures the data distribution, even with a limited size of raw samples.§ SYN-SLM: STREAMLINED APPROACH The Syn Framework enables unsupervised synthetic data generation, mirroring raw data distributions. It is adept at tackling statistical challenges in both unsupervised and supervised realms. It can also derive F̃ directly from some generative models such as normalizing flows <cit.>. Subsequently, we will explore a streamlined method, termed Syn-Slm, which bypasses synthetic data generation. §.§ Synthetic Data Distribution The synthetic data distribution F̃ may degenerate when supported on a low-dimensional manifold, particularly if some components ofexhibitfunctional dependence. AccessingF̃ can be challenging,even when it is non-degenerate, especially for some generative models. However, when we derive F̃ from a generative model, it becomes a nonparametric estimate of the raw data distribution F. For a parameter of interest, expressed as θ = ϕ(F), our streamlined method Syn-Slm gives rise to a plug-in estimate θ̂ = ϕ(F̃). As an example, if θ = denotes the mean, then ϕ = ∫ z d F( z), yielding the Syn-Slm estimate θ̂ = ∫ z d F̃( z). Section <ref> illustrates this method. §.§ Statistical Methodologies Supervised learning aims to predict an outcome based on a predictor vector . Consider a one-dimensional outcome variable Y and define =(Y,). In this scenario, generative models, such as diffusion models <cit.> and the Reversible Generative Models <cit.>, not only can generate synthetic data conditioning onbut also provide the conditional CDF of synthetic data F̃_Y| as an estimate of the conditional cumulative distribution function (CDF) F_Y|. From this, one can deduce an optimal prediction function f^o by minimizing the expected loss: f^o=inf_f L(Y,f()). Here, L(·,·) symbolizes a loss function. For instance, if l is the 0-1 loss in binary classification, then f^o= F_Y|-1/2, and its Syn-Slm estimate is given by f̃^o= F̃_Y|-1/2. An example is available in Section <ref>. § CASE STUDIES §.§ Sentiment Analysis This subsection presents sentiment classification applied to the benchmark dataset, IMDB <cit.>. 
This task involves assigning emotions expressed in the text into positive or negative sentiments based on the opinions reflected in each text. The dataset comprises 50,000 polarized movie reviews,categorized as “positive” or “negative” sentiments. These labels correspond to movie scores below four or above seven out of ten, where no movie has more than 30 reviews to prevent significant class imbalance. Here, we use 49,000 of these reviews as our training data, reserving 1,000 reviews for testing. We compare Syn's generative approach against its conventional counterpart in a downstreamprediction task, utilizing three state-of-the-art models, GPT-3.5, DistilBERT, and LSTM models <cit.>.GPT-3.5 functions primarily as a text completion model, predicting the succeeding token as the sentiment label. Although essentially a completion model, GPT-3.5 is a conditional generative model that aligns with Syn, which we adapt for predictive tasks. We fine-tune GPT-3.5 with the text-embedding-Ada-002 configuration, adhering to OpenAI's recommended procedures[OpenAI GPT fine-tuning: https://platform.openai.com/docs/guides/fine-tuninghttps://platform.openai.com/docs/guides/fine-tuning].In contrast, DistilBERT generates a fixed-size embedding of a review, which is then relayed to an appended classification head to deduce sentiment likelihood. Unlike GPT-3.5's token generation approach, DistilBERT's technique aligns more closely with Syn-Slm for supervised tasks. We fine-tune DistilBERT using the Distilbert-base-Uncased model from HuggingFace[HuggingFace: https://huggingface.co/distilbert-base-uncasedhttps://huggingface.co/distilbert-base-uncased]. Our tuning regimen includes a batch size of 16, a span of 10 epochs, and the Adam optimizer with standard decay parameters, setting the learning rate to 1× 10^-5.Additionally, we train a traditional LSTM model from scratch, eschewing prior knowledge. Like DistilBERT, the LSTM processes an embedding and feeds it into a classification head, rendering it a predictive model. For our LSTM configuration, we modified code from a pertinent Kaggle notebook[Kaggle: https://www.kaggle.com/code/pawan2905/imbd-sentiment-analysis-using-pytorch-lstmhttps://www.kaggle.com/code/pawan2905/imbd-sentiment-analysis-using-pytorch-lstm]. We employ identical model structures and hyperparameters across our custom-split datasets. Table <ref> compares GPT-3.5, DistilBERT, and LSTM in seven performance metrics. The extensive collection of pre-trained models likely contributes to GPT-3.5's outstanding performance. On the other hand, LSTM's underperformance stems from its inability to transfer knowledge. Knowledge transfer plays a crucial role in model performance.§.§ Prediction for Structured Data This subsection investigates the generational phenomenon and challenges associated with enhancing the precision of gradient-boosting for regression and classification <cit.>.It also focuses on the implications for the quality of synthetic data generation. Within the Syn framework, we designate the boosting method tailored for synthetic data as Syn-Boost. Despite the surge of diverse predictive models, the capabilities of the Syn framework in predictive modeling remain largely untapped. 
To highlight this potential, we draw contrasts between Syn-Boost and its traditional supervised counterparts: specifically, the boosting algorithm — CatBoost <cit.> and FNN — a fully connected neural network that leverages insights from a pre-trained model.Syn-Boost presents a strategy to harness knowledge transfer in boosting, effectively addressing the transfer learning challenge inherent to the boosting method. §.§.§ Real-Benchmark Examples To closely emulate real-world scenarios, we investigate situations where available pre-trained models have incorporated insights from relevant studies. For this study, we utilize five classification and three regression benchmark datasets <cit.>, each encompassing three subsets: pre-training, raw, and test data. We train pre-trained models using the pre-training data, facilitating effective knowledge transfer. For Syn-Boost and FNN, we exclude the pre-training data from our raw set, which is then used for training and fine-tuning both methods. The test data, on the other hand, is reserved exclusively for performance evaluation. A detailed description of these datasets can be found in Table <ref>.In the Syn-Boost framework, we utilize a tabular diffusion model, TDM <cit.>, to generate synthetic data of mixed types that closely match the distribution of the original data. TDM employs multinomial and Gaussian diffusion processes to simulate categorical and continuous attributes. The procedure starts by training a TDM model[We train TDMs using a single TITAN RTX GPU.] on pre-existing data and then fine-tuning it with raw data. Subsequently, we use CatBoost on the synthetic data of size m, created by TDM for classification and regression tasks. To identify the best m for Syn-Boost's synthetic data, we evaluate the error relative to the synthetic-to-raw data ratio,ranging from 1 to 30, with a step size of 1.For FNN, we engage in transfer learning, starting with pre-existing data and later fine-tuning with raw data. This technique is consistent with TDM's training approach, ensuring that both models have a harmonized foundation for effective knowledge transfer.Figure <ref> underscores the significant contribution of Syn in bolstering CatBoost's efficacy in classification and regression. While there is a minor boost in “FB comments”, the extent of improvement via Syn-Boost is diverse. Classification and regression enhancements span 0.6% to 17.4% and 0.03% to 12.3%, respectively, against CatBoost's baseline rather than the Bayes error. Should the Bayes error have been accessible, we anticipate even higher percentage improvements. The magnitude of these enhancements varies by scenario. For instance, datasets like “Gesture” and “House”, which have a larger predictor count, show more significant leaps. In contrast, datasets like “Adult” and “California” with fewer predictors demonstrate modest gains. Figure <ref> also highlights the generational effect as the size of synthetic data increases.Accuracy gains plateau after reaching the estimated reflection point m̂, an estimated optimal size of synthetic data.This point represents the peak of statistical accuracy and is consistently greater than raw sample sizes, often by at least five-fold. In scenarios like “Gesture”, “Adult”, “California”, “House”, and “Insurance”, Syn-Boost surpasses CatBoost even when raw and synthetic data sizes are equal (m=n). 
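A sketch of the Syn-Boost sweep described above is given below, with a Gaussian mixture standing in for the fine-tuned TDM and simulated data in place of the benchmark datasets, so the numbers are illustrative only.

```python
# Sketch of Syn-Boost: train CatBoost on synthetic data of size m = ratio * n and
# keep the ratio minimising the validation error. A Gaussian mixture stands in for
# the fine-tuned tabular diffusion model (TDM); the data here are simulated.
import numpy as np
from catboost import CatBoostRegressor
from sklearn.metrics import mean_squared_error
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 5))
y = X[:, 0] ** 2 + np.cos(X[:, 1]) + 0.1 * rng.normal(size=1000)
data = np.column_stack([X, y])
raw, val = data[:800], data[800:]

tdm_stand_in = GaussianMixture(n_components=10, random_state=0).fit(raw)

errors = {}
for ratio in range(1, 31):                              # synthetic-to-raw ratios 1, ..., 30
    syn, _ = tdm_stand_in.sample(ratio * len(raw))
    booster = CatBoostRegressor(iterations=200, verbose=0, random_seed=0)
    booster.fit(syn[:, :-1], syn[:, -1])                # Syn-Boost: CatBoost on synthetic data
    errors[ratio] = mean_squared_error(val[:, -1], booster.predict(val[:, :-1]))

best_ratio = min(errors, key=errors.get)                # estimated reflection point, in ratio units
print("best synthetic-to-raw ratio:", best_ratio)
```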
This observation suggests thatthe efficacy of Syn-Boost is rooted in the increased sample size m, as evidenced by the “California”and “House” datasets that utilized an optimized synthetic-to-raw data ratio of 25:1 or higher. Typically, error curves form a U-shape around a moderate m̂ but shift to an L-shape when m̂ is exceptionally large. Syn's robust performance primarily stems from the generative capabilities of diffusion models coupled with the application of knowledge transfer. These elements enhance the generative model's generation accuracy by accurately estimating the distribution of raw data over low-dimensional manifolds <cit.>. However, it is crucial to recognize that the Syn framework's success also hinges on thoughtful modeling and predictor selection. In a supervised setting, Syn-Boost, which employs CatBoost on synthetic data, typically outperforms FNN, except in the “FB comments” and “Abalone” datasets. When the generation errors are modest, the performance edge of Syn-Boost over FNN spans from 11.1% to 14.6% in classification and 7.2% to 16.3% in regression. The reduced performance on the “FB comments” and “Abalone” datasets, with a decline ranging from 1.4% to 6.6%, is chiefly attributed to significant generation errors from the pre-trained TDM <cit.>. In both scenarios, Syn-Boost may not surpass CatBoost when m=n. A similar phenomenon also occurs for Decision Tree, Random Forest, and Logistic Regression without knowledge transfer <cit.>. We speculate that the generation error in the “FB comments” dataset arises from the model architecture's inability to handle large pre-training instances, while the “Abalone” dataset's underperformance could be due to insufficient pre-training size. Notably, while FNN outperforms CatBoost in the “Abalone”, “FB comments”, and “Gesture” datasets, it lags in others. These findings highlight Syn-Boost as a strong competitor against the well-established predictive model FNN. This case study highlights the effectiveness of the Syn framework in enhancing statistical accuracy through synthetic data generation. It also explains the results of <cit.> regarding the potential pitfalls in a prediction task when employing synthetic data to train machine learning models, a concern mentioned in the Introduction. Low-fidelity synthetic data, resulting from substantial generation errors, can negatively impact statistical accuracy. The study suggests that employing knowledge transfer from relevant studies is a strategy to reduce generation errors.§.§.§ SimulationTo investigate how generation errors impact the efficacy of Syn-Boost, we conducted simulations with access to ground truth data. We consider a model that closely mimics real benchmark examples:Y = 8 +X_1^2 +X_2X_3 + cos( X_4) + exp( X_5X_6) + 0.1X_7 + ϵ,where X = ( X_1, …,X_7) is uniformly distributed over [0,1]^7 (Uniform(0, 1)^7) and ϵ follows a normal distribution with mean zero and standard deviation 0.2, N(0, .2^2). In(<ref>), we generate a dataset of 700 samples, dividing it into 500 for training and 200 for validation. To demonstrate the impact of effective versus ineffective generators on downstream tasks, we pre-train a tabular diffusion model (TDM, <cit.>) with two sizes, 1000 and 5000. It is noteworthy that pre-trained models typically use considerably larger training sizes. 
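The simulated design above can be generated directly as follows; only the data-generating equation and the 500/200 split are taken from the text, while the random seed is arbitrary.

```python
# Generate data from the simulation model: Y = 8 + X1^2 + X2*X3 + cos(X4)
# + exp(X5*X6) + 0.1*X7 + eps, with X ~ Uniform(0,1)^7 and eps ~ N(0, 0.2^2),
# then split the 700 samples into 500 for training and 200 for validation.
import numpy as np

rng = np.random.default_rng(0)                      # arbitrary seed

def simulate(n):
    X = rng.uniform(size=(n, 7))
    eps = rng.normal(scale=0.2, size=n)
    y = (8 + X[:, 0] ** 2 + X[:, 1] * X[:, 2] + np.cos(X[:, 3])
         + np.exp(X[:, 4] * X[:, 5]) + 0.1 * X[:, 6] + eps)
    return X, y

X, y = simulate(700)
X_train, y_train = X[:500], y[:500]                 # training split
X_val, y_val = X[500:], y[500:]                     # validation split
```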
To evaluate the distribution discrepancy between raw and synthetic samples, we employ the 2-Wasserstein distance, defined as their distributional distance and determined by solving an optimal transport problem using appropriate metrics[<https://pythonot.github.io/quickstart.html#computing-wasserstein-distance>], as detailed in Table <ref> for reference. We evaluate Syn-Boost's root mean square error (RMSE) on the prediction performance of synthetic data generated from a pre-trained model, both with and without fine-tuning on raw training data. These scenarios represent the outcomes with ineffective and effective generators, respectively. For comparative purposes, we also assess the RMSE of CatBoost, trained on raw data, and provide the square root of Bayes error, which is 0.2 by design. As depicted in Figure <ref>, Syn-Boost attains an RMSE closer to the Bayes error when employing an effective pre-trained generator, thus outperforming CatBoost trained on raw data. In contrast, with an ineffective pre-trained generator, the RMSE of Syn-Boost is similar to the CatBoost error, far from the Bayes error. In practice, we recommend fine-tuning a pre-trained model rather than using it directly. As illustrated in Figure <ref>, fine-tuning pre-trained generators can further improve the performance of Syn-Boost. Table <ref> supports the observed generation effects, which supplements Figure <ref>. §.§ Feature Relevance for Tabular Data This subsection concerns the relevance of features for predicting the outcome of the response variable Y by a machine learner using a candidate feature vector X. Define the subvector _S by _S = (X_j: j ∈ S), where S is a subset of the features. Our objective issignificance testing of _S in its functional relevance to Y. To assess the influence of _S, we use the differenced risk R(f^*)- R(f_S^c^*). Here, f_S^c=f(_S^c), andf^* and f^*_S^c are the optimal prediction functions in the population, defined as f^*=_f R(f) and f^*_S^c=_f_S^c R(f_S^c). The risks, R(f) and R(f_S^c), are given by R(f) =(L(f(), Y) ) andR(f_S^c) = (L(f(_S^c), Y) ), whererepresents the expectation over randomness. Now, we introduce the null and its alternative hypotheses H_0 and H_a:H_0: R(f^*)- R(f^*_S^c)=0,versus H_a: R(f^*) - R(f^*_S^c)<0.Rejecting H_0 at a significance level implies feature relevance of _S for predicting Y.It is worth mentioning that we target the population-level functions f^* and f^*_S^c in (<ref>).In (<ref>), <cit.> developed an asymptotic test tailored to black-box learning models. Building upon this foundation, we illustrate how Syn-Test can bolster the power of this traditional test on raw samples by enlarging the synthetic data size while circumventing the necessity to derive the asymptotic distribution of a test statistic. For Syn-Test, we adhere to Steps 1-4, as delineated in Section <ref>, to examine the relevance of feature set _S to outcome Y, employing CatBoost <cit.> as the learning algorithm. Here, we engage a diffusion model, TDM <cit.>, to engender synthetic data. Initially, we adapt the original test statistic from <cit.> to suit synthetic data as follows: T = R_m(f̃_S^c) - R_m(f̃)/SE(R_m(f̃_S^c) - R_m(f̃)),where R_m(·) denotes the empirical risk, evaluated on an inference sample _1^(m,d) in Step 1 and _1^(m) with m=m̂ in Step 3 of Syn-Test, f̃ and f̃_S^c denotethe estimated predictive function function with and without S, and SE denotes the standard error. To mitigate the bias in T due to CatBoost, we propose a Monte Carlo debiasing technique. 
This method refines an estimated null distribution from one sub-inference sample by utilizing the corresponding test statistic values from the other sub-inference sample. Specifically, we consider test statistic values (T(_1^(m,d)))_d = 1^D, as per (<ref>), based on synthetic samples _1^(m,d) in Step 1, derived from the null generative model to approximate the null distribution, and the corresponding (T(_2^(m,d)))_d = 1^D to evaluate the Type-I error in Step 2, with _2^(m,d) being synthetic samples contingent on 𝒮_2. The empirical CDF of (T(_1^(m,d)))_d = 1^D, denoted as F̃_0, renders an approximated null distribution. To rectify the bias in F̃_0, we deploy the empirical CDF of (T(_1^(m,d))-D^-1∑_j = 1^D T(_2^(m,d)))_d = 1^D, which centers at zero under H_0. A visual depiction can be found in Figure <ref> (middle row).In (<ref>), we reject H_0 if T manifestsas large.To compute the test statistic values in Steps 1 and 3, we generate an additional synthetic sample of size 2Nand split it evenly into two subsamples. Using the first subsample, we train a CatBoost model (Y | )=f() to forecast Y employing all features , resulting in f̃. In parallel, we train another CatBoost model (Y | _S^c)=f(_S^c) using features _S^c, yielding f̃_S^c. By employing the synthetic sample to compute full and null predictive models, yielding f̃ and f̃_S^c, we can mitigate the intrinsic bias and asymptotics highlighted in <cit.> stemming from a limitedinference size. This behavior is demonstrated in Figure <ref> (middle row).To refine a generative model under the null hypothesis, researchers often employ permutation by replacing redundant predictor vectors _S with irrelevant values <cit.>. However, this approach may not preserve the correlation structures between X_S and _S^c. Addressing this issue, we introduce a novel method that maintains these correlation structures while ensuring the feature irrelevance of _S on Y given the rest of the features. Our procedure involves two steps: * We first train a predictive model to estimate (_S | _S^c) = g(_S^c). * We then generate synthetic data tuples (Y, _S, _S^c) using this model. Then modify these tuplesby replacing _Swith the predicted values g(_S^c), resulting in new tuples (Y, g(_S^c), _S^c). This process ensures compliance with a specific subclass under the risk invariance of H_0 by creating conditionally independent tuples, as described by <cit.>. Consequently, we obtain modified tuples Z̃^(m)=(Ỹ_i,X̃_iS, X̃_iS^c)_i=1^m, which adhere to the feature irrelevance hypothesis H_0. Finally, we designate an MC size of D=1,000 for estimating both the null distribution and the Type-I error in Step 1 of Syn-Test. The parameters are set as α = 0.05, ε = 0.01, and the optimal m̂ will be tuned based onthe ratios m/n ∈1, 2, …, 20. §.§.§ Real-Benchmark ExamplesKnowledge transfer profoundly impacts the behavior of synthetic data, affecting critical downstream tasks, including inference. To illuminate this relationship, we employ Syn-Test to assess feature relevance using the gradient boosting method, CatBoost <cit.>. We explore this in a regression context with the California dataset <cit.> and a classification context using the Adult dataset <cit.>, maintaining the experimental setup detailed in Section <ref>. Within these frameworks, we examine the influence of knowledge transfer on synthetic data generation. Concurrently, we evaluate the efficacy of Syn-Test in identical scenarios and those that are distinct yet closely related. 
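A condensed sketch of these ingredients is given below: the null-compliant tuples obtained by substituting the fitted g(X_{S^c}) for X_S, the full and reduced CatBoost fits, and the statistic in (<ref>). The "synthetic" tuples here are simulated stand-ins for draws from the fine-tuned TDM, and the sample sizes and target feature are illustrative.

```python
# Sketch of the feature-relevance testing ingredients described above.
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(3)
m, S = 2000, [7]                                        # test the relevance of X_8 (column index 7)
X = rng.uniform(size=(m, 8))
y = 8 + X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.2 * rng.normal(size=m)   # X_8 irrelevant by design
Sc = [j for j in range(X.shape[1]) if j not in S]

# (i) Null-compliant tuples (Y, g(X_Sc), X_Sc): predict X_S from X_Sc and substitute,
#     preserving correlations; such tuples would refine the generator under H_0.
g = CatBoostRegressor(iterations=200, verbose=0).fit(X[:, Sc], X[:, S].ravel())
X_null = X.copy()
X_null[:, S] = g.predict(X[:, Sc]).reshape(-1, 1)

# (ii) Full and reduced predictive models, trained on one half of the synthetic sample.
half = m // 2
f_full = CatBoostRegressor(iterations=200, verbose=0).fit(X[:half], y[:half])
f_reduced = CatBoostRegressor(iterations=200, verbose=0).fit(X[:half, Sc], y[:half])

# (iii) Test statistic T on the other (inference) half, as in (<ref>).
loss_full = (y[half:] - f_full.predict(X[half:])) ** 2
loss_reduced = (y[half:] - f_reduced.predict(X[half:, Sc])) ** 2
diff = loss_reduced - loss_full
T = diff.mean() / (diff.std(ddof=1) / np.sqrt(diff.size))
print(f"T = {T:.3f}")
```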
To contrast Syn-Test with its traditional counterpart, consider the significance test in (<ref>). When the finite-sample null distribution of T is unknown, the asymptotic distribution of the test statistic may require stringent assumptions <cit.>. To circumvent this, we use synthetic samples generated from TDM to approximate the null distribution, as in <cit.>. Contrary to that approach, we refrain from using data perturbation, thus eliminating the requirement to maintain the rank property for privacy protection.Knowledge Transfer from Identical Distributions. Drawing from the 1990 U.S. census, the California dataset offers a glimpse into median house values through eight specific attributes. These include the longitude and latitude of the property, its median age, the total room count, bedroom count, block population, household count within the block, and the median household income. To facilitate knowledge transfer, we initially pre-train a TDM using a random subset of 13,209 observations. For significance testing in (<ref>), we adapt the one-split black-box test statistic in <cit.> with a training sample of size 6,605 and an inference sample of size 826. To perform the Syn-Test, we follow the splitting scheme illustrated in Figure <ref>. In detail, we divide the training sample equally into two subsets, S_1 and S_2, for fine-tuning purposes. Additionally, we utilize the inference sample S_3 for both model training and fine-tuning.As depicted in Figure <ref> (top left), the empirical Type-I error initially descends, then rises, ultimately surpassing the α=0.05 level as the ratio of synthetic-to-raw size m̂/n grows. The estimated maximum ratio m̂/n=6 preserves the Type-I error control. In other words, we can augment the sample size to sixfold the raw sample size n. This observation aligns with the generational effect we observed in predictive modeling for classification and regression. Consequently, m̂= 6n notably enhances the power of Syn-Test, as indicated inFigure <ref> (bottom left), where the test statistic distribution shifts to the right, increasing the power to reject H_0 when it is false.Knowledge Transfer Across Distinct Distributions. The Adult dataset concerns adult income for 32,650 males and 16,192 females based on the 1994 census, including six numerical and eight nominal features, for example, age, work class, final weight, the number of years in education, marital status, working hours per week, and native country; see <cit.> for more details. Our objective is to test if age, the number of years in education, and working hours per week are relevant to predicting if an adult female's income surpasses the threshold of 50K per year. As Figure <ref> depicts, the pronounced differences across genders exist in the distributions of categories like income, age, marital status, occupation, and relationship. A pertinent question arises: can adult male data augment the synthetic data generation and subsequent female analysis? To harness this knowledge transfer, we pre-train a TDM solely on the adult-male data with 32,650 observations.For hypothesis testing in (<ref>), with a focus on adult females, we utilize a training sample of 2,700, divided evenly into S_1 and S_2. Additionally, we use an inference sample S_3 consisting of 300. 
Following the Syn-Test approach as described in Figure <ref>, we use S_1 and S_2 for fine-tuning the pre-trained TDM and while using S_3 to avoid overfitting in model training and fine-tuning.Figures <ref> and <ref> demonstrate that the synthetic female data produced by the diffusion model aligns more with the adult-female dataset than with the adult-male dataset. This alignment is evident by the Fréchet Inception Distance (FID), which measures the distributional differences between the generated and raw data vectors under the Gaussian assumption. The 1- and 2-Wasserstein distances between the empirical distributions of the two datasets provide further evidence (note that FID is 2-Wasserstein distance under Gaussian assumption). Notably, as shown in Table <ref>, the male-focused pre-trained TDM, once fine-tuned with a smaller female dataset, crafts synthetic female data with a diminished margin of error compared to models pre-trained on the adult-male data and adult-female data with 2700 individuals. This result confirms that leveraging pre-trained adult-male data with a somewhat distinct distribution can enhance the TDM's generation precision for females. This empirical validation emphasizes the imperative of refining a pre-trained model to maximize knowledge transfer and achieve unparalleled accuracy.Figure <ref> highlights three key observations. First, the top two figures demonstrate that m̂ = 6n consistently controls the Type-I error of the Syn-Test across both scenarios. Second, the middle two figures suggest successful debiasing through synthetic data generation. Lastly, the bottom two plots exhibit a pronounced rightward shift in the test statistic distribution when comparing synthetic data to those derived from raw data. This shift signifies an enhancement in the test's power, directly attributable to the augmentation of the sample size. §.§.§ SimulationIn this section, we evaluate the effectiveness of Syn-Test in controlling Type-I errors through simulation studies. We use the model in(<ref>) with a modification: an additional feature, X_8, is included but does not contribute to the model. Here, X = ( X_1, …,X_7,X_8) is distributed uniformly on [0,1]^8(Uniform(0, 1)^8), and ϵ follows a normal distribution N(0, 0.2^2). We assess the relevance of the feature X_8 in prediction, thereby examining the control capability of Type-I error by Syn-Test.We will use the one-split test statistic proposed by <cit.> in Syn-Test, although we don't rely on their asymptotic theory for the test statistic.For our experiments, we split a raw sample into a training set and an inference set with 1000 and 200 samples, respectively.Additionally, we use a distinct pre-training sample of 10000 to train the TDM on ( Y,X). This pre-trained model is subsequently fine-tuned on the raw training set according to the Syn-Test's procedure. The inference sample, consisting of 200 instances, is utilized to validate the training of f̃ and f̃_S^c for the test statistic in (<ref>).Moreover, we employ an MC size of D = 1000 with parameters α = 0.05 and ϵ = 0.01, and explore synthetic-to-raw ratios from 1, 2, …, 20 to fine-tune the synthetic size m.As shown in Figure <ref>, the tuning curve of Syn-Test with synthetic data demonstrates a comparable performance to that of the same test when using independent raw data, particularly in terms of generational effect,while exhibiting similar patterns of variation. 
This finding indicates that Syn-Test effectively controls the Type-I error with synthetic data, aligning with observations from raw data. Finally, we adopt the conservative approach in selecting m, choosing m̂ / n = 18, in accordance with the guidelinesin Section <ref>. As illustrated in Figure <ref>, the estimated null distribution curve derived from synthetic data with m̂ / n = 18 closely resembles that based on independent raw data, albeit with a slight shift. This observation suggests minor generation errors by our generators. Furthermore, the distribution of P-values from Syn-Test using synthetic data under the null hypothesis H_0 with m̂ / n = 18 aligns well with that based on raw data. These plots demonstrate the effectiveness of Syn-Test in controlling Type-I error.§ DATA PRIVACY The Syn framework can address privacy concerns using synthetic data generated by generative models trained to mimic the distribution of raw data. Unlike raw data, synthetic data imposes fewer privacy risks. However, it's not entirely immune to reverse engineering attacks. These vulnerabilities mainly stem from the model's parameters, which attackers could potentially infer from the synthetic data. In real-world applications like healthcare and finance, where data sensitivity is paramount, ensuring robust privacy measures is crucial.To bolster data privacy, one may consider a privacy protection standard known as (ε,δ)-differential privacy <cit.>, recognized as a gold standard in data privacy. Notably, it was implemented in the 2020 U.S. decennial census, demonstrating its practical applicability and effectiveness. This approach effectively safeguards against various privacy threats, including reverse engineering, re-identification, and inference attacks.The definition of (ε,δ)-differential privacy considers an adjacent realization ' differing from a realization of an original sample =( Z_i)_i=1^n by just one observation. It revolves around a privatization mechanism , mapping from original dataset =(_i)_i=1^n into a privatized version ^(m)=(_i)_i=1^m. Forto be (ε,δ)-differential private <cit.>, it satisfies: For any small ε≥ 0 and δ>0 and any measurable set B:P(() ∈ B|=) ≤ e^ε P(() ∈ B|=')+δ,where ε>0 is the privacy budget, controlling the strength of privacy protection. Smaller ε values indicate stronger privacy, while δ is a small probability allowance for the privacy guarantee, acknowledging minimal inherent risk. This definition accommodates ε-differential privacy with δ=0.To generate synthetic data satisfying (<ref>), we may employ techniques like Differentially Private Stochastic Gradient Descent (DP-SGD). DP-SGD injects a certain amount of Gaussian noise into the gradient updates during model training, ensuring that the model satisfies the desired privacy guarantees <cit.>. Differentially-private diffusion models <cit.> is an example of a point. This method is particularly effective in generating differentially private synthetic data while maintaining the utility of the data.Unlike traditional differentially private samples, differentially private synthetic variants through these methods preserve the distributional characteristics of the original data. This preservation is advantageous as it can boost statistical analysis through sample size argumentation while enhancing privacy. 
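As an illustration of the mechanism, the sketch below performs one DP-SGD-style update by clipping each per-example gradient to a norm bound C and adding Gaussian noise calibrated to C. The model, noise multiplier, and the absence of a privacy accountant make this a toy example rather than a certified (ε, δ) implementation.

```python
# Illustrative DP-SGD step: clip each per-example gradient to norm C, then add Gaussian
# noise proportional to C before updating. A full implementation would also track the
# privacy budget (epsilon, delta) with a privacy accountant.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                       # stand-in for a generative model's trainable layer
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
C, sigma = 1.0, 1.0                            # clipping norm and noise multiplier (illustrative)

data, target = torch.randn(32, 10), torch.randn(32, 1)

clipped_grads = [torch.zeros_like(p) for p in model.parameters()]
for x, t in zip(data, target):                 # per-example gradients
    model.zero_grad()
    loss = nn.functional.mse_loss(model(x.unsqueeze(0)), t.unsqueeze(0))
    loss.backward()
    norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
    scale = min(1.0, C / (norm.item() + 1e-12))   # clip to norm at most C
    for acc, p in zip(clipped_grads, model.parameters()):
        acc += p.grad * scale

with torch.no_grad():
    for p, acc in zip(model.parameters(), clipped_grads):
        p.grad = (acc + sigma * C * torch.randn_like(acc)) / len(data)  # calibrated Gaussian noise
optimizer.step()                               # one differentially private update
```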
However, this approach often requires more extensive model training due to the added noise, which can be a computational challenge.§ DISCUSSION: FUTURE OF DATA SCIENCE This article unveils the Syn paradigm—a novel approach to data analytics using high-fidelity syntheticdata derived from real-world insights. By addressing challenges in traditional data analytics, such asdata scarcity and privacy issues, this paradigm underscores that high-fidelity synthetic data can amplify the precision and efficiency of data analytics with sample size augmentation,as evidenced by the case studies herein. However, it is crucial to acknowledge the generational effect inherent to the Syn framework. Consequently, a fusion of reality and simulation is essential for unlocking the potential of synthetic data, offering fresh perspectives for both scientific and engineering domains.The future of data science may pivot on our capability to harness raw and synthetic data. Large pre-trained generative models, equipped with extensive knowledge, offer a promising pathway. These frameworks distill domain knowledge, a testament being the successes of Generative Pre-trained Transformers in text and imagery contexts. As developing domain-centric generative models is gaining momentum, these generative models promise significant enhancements in synthetic data generation, paving the way for breakthroughs across a wide range of disciplines.A systematic evaluation of the Syn framework across diverse applications is essential for advancing data science.§ PROOFS§.§ Proof of Theorem <ref> Let g(m) = C_θ m^-α + m LTV. The assumption that LTV > (θ(^(m_0)))/m^* implies that g(m^*)-C_θ (m^*)^-α > (θ(^(m_0))). Moreover, note that ^(m)≥^(m^*) if m > m^*. Hence, when m>m^*,(θ(^(m)))≥ C_θ (m^*)^-α + ^(m) + C_θ (m^-α - (m^*)^-α)≥ C_θ (m^*)^-α + ^(m^*) + C_θ (m^-α - (m^*)^-α)≥ g(m^*) - C_θ (m^*)^-α > (θ(^(m_0))).Thus, by the definition of m_0, that m_0 ≤ m^* < +∞. This completes the proof.§.§ Proof of Theorem <ref> By definition, |_m|=| ∫ L(θ_m( t),θ)   (d F( t) - d F̃( t))|≤ 2UTV(F_ Z^(m), F_Z̃^(m)) ≤ 2mUTV(F̃, F),yielding (<ref>). This, together with the assumption that (θ(^(m))) = C_θ m^-αyields that(θ(^(m_0))) ≤(θ(^(s_0))) ≤(θ(^(m_0))) ≤(θ(^(s_0))) when s_0 =(2U/α C_θ )^α/1 + α·TV(F̃,F)^-1/1 + α, yielding the right hand of (<ref>). This gives an optimal upper bound of (θ(^(m_0))) in (<ref>). Moreover, (<ref>) implies that (θ(^(m_0))) ≤(θ(^(n))) provided that TV(F̃,F) is sufficiently small, sufficiently, TV(F̃,F) ≤ C_θ, α, U· n ^-1 - α where C_θ, α, U = C_θ (2U)^-1 (α^-α/1+ α + α^1/1 + α)^-1 + α/α.This completes the proof.§.§ Proof of Theorem <ref>Let Z̃ be a random vector following F̃ = F_Z̃| Z^(n) and Z be one following F. First, we bound the empirical distribution F^D_T̃(x)=D^-1∑_d = 1^D X^(m,d) in (<ref>), where X^(m,d)=I(T(^(m,d)) ≤ x) ∈ [0, 1] for any x ∈ R. Let F_T̃(x)=_^(m,1)|^(n) X^(m,1). Note that ^(m,d) is a conditionallyindependent sampleof size m given ^(n) following F̃=F_|^(n). By Hoeffding's Lemma, _^(m,d)|^(n)exp(s(X^(m,d) - _^(m,d)|^(n) X^(m,d))) ≤exp(s^2/8) a.s. for any s>0, where _^(m,d)|^(n) is the conditional expectation with respect to ^(m,d) given ^(n). 
By Markov's inequality and the conditional independence between ^(m,1), …, ^(m,d) given ^(n), for any t>0 and s=4 t,(F^D_T̃(x) - F_T̃(x)≥ t)= (|D^-1∑_d = 1^D (X^(m,d) - _^(m,d)|^(n) X^(m,d))| ≥ t)≤ 2 exp(-s D t)_^(n)∏_d=1^D _^(m,d)|^(n)( exp(s(X^(m,d) - _^(m,d)|^(n) X^(m,d))) ) ≤ 2 exp(-s D t) exp(D s^2 / 8) ≤ 2 exp(-2 D t^2), wheredenotes the probability, taking into account all sources of randomness.For any δ∈ (0, 1), by choosing t = √(log2/δ/2D),we obtain that F^D_T̃(x) - F_T̃(x)≤√(log2/δ/2D), with probability at least 1 - δ. On the other hand, F_T̃(x) - F_T(x)≤TV(^(m), ^(m)) given Z^(n) for any x ∈ R, where Z^(m) is a sample of size m from F. Using the union bound to combine these results, we obtain thatsup_ xF_T̃( x) - F_T( x)≤√(log2/δ/2D) + TV(^(m), ^(m)),with probability at least 1 - δ. Note that TV(^(m), )= m ·TV(F̃, F). Consequently, as m ·TV(F̃, F) → 0 in probability and D →∞, sup_ x_T( x) - F_T( x)→ 0 in probability. Using Syn-Test, the control of the empirical Type-I error at the α level can be obtained as if we were using the true null distribution.Concerning the power gain, note that ϕ̃_m, α - ϕ_n, α = ϕ̃_m, α - ϕ_m, α + Δ. By definition and assumption, |ϕ̃_m, α - ϕ_m, α | ≤TV(^(m), ^(m)) =m TV(F̃, F) < Δ, indicating that ϕ̃_m, α - ϕ_n, α> 0. This completes the proof. This work was supported in part by NSF grant DMS-1952539 and NIH grants R01AG069895, R01AG065636, R01AG074858, U01AG073079 (corresponding author: Xiaotong Shen, mailto:[email protected]@umn.edu).DatasetsInformation concerning datasets in Section <ref>: Case studies. imsart-nameyear
Reduction of Necessary Conditions for the Variational Collision Avoidance Problem

January 14, 2024

^†Correspondence to: 〈[email protected]〉, 〈[email protected]

Large language models (LLMs) demonstrate their promise in tackling complicated practical challenges by combining action-based policies with chain of thought (CoT) reasoning. Having high-quality prompts on hand, however, is vital to the framework's effectiveness. Currently, these prompts are handcrafted utilising extensive human labor, resulting in CoT policies that frequently fail to generalise. Human intervention is also required to develop grounding functions that ensure low-level controllers appropriately process CoT reasoning. In this paper, we take the first step towards a fully integrated end-to-end framework for task-solving in real settings employing complicated reasoning. To that purpose, we offer a new leader-follower bilevel framework capable of learning to ask relevant questions (prompts) and subsequently undertaking reasoning to guide the learning of actions to be performed in an environment. A good prompt should make introspective revisions based on historical findings, leading the CoT to consider the anticipated goals. A prompt-generator policy has its own aim in our system, allowing it to adapt to the action policy and automatically root the CoT process towards outputs that lead to decisive, high-performing actions. Meanwhile, the action policy learns how to use the CoT outputs to take specific actions. Our empirical data reveal that our system outperforms leading methods in agent learning benchmarks such as Overcooked and FourRoom.

§ INTRODUCTION

Large language models (LLMs) with Chain-of-thought (CoT) prompts <cit.> have achieved impressive performance improvements for solving complex natural language processing (NLP) tasks. Moreover, techniques such as reward incentives <cit.> have been shown to enhance the quality of Chain-of-Thought prompts for addressing intricate tasks. Two notable approaches, Tree-of-Thought (ToT) <cit.> and Reasoning via Planning (RAP) <cit.>, have emerged as useful techniques that leverage LLM-generated reward functions for guiding the step-by-step problem-solving process. With the increasing reasoning capabilities of CoT, the reasoning outputs of LLMs can be used to provide useful `thought' inputs to policies that perform tasks in practical environments. This involvement of CoT reasoning has given rise to the promise of unlocking the power of LLMs to assist in performing complex automated reasoning and acting in real-world environments. While LLMs such as ChatGPT possess a wealth of human knowledge, in general, current methods <cit.> depend heavily on meticulously crafted prompts designed by humans for each specific task. Moreover, the performance of CoT reasoning can be sensitive to the quality of the prompt input: poor prompts provided even to powerful LLMs are unlikely to generate useful CoT outputs.
Additionally, despite the obvious potential of using CoT reasoning for guiding a low-level control policy, human-intelligible CoT reasoning can often be ambiguous for a downstream control policy, such as a rule-based planning method <cit.> and an action policy implemented by a reinforcement learning (RL) algorithm <cit.>.As such, a natural consideration is for the need to generate CoT outputs that are interpretable to the action policy and, provably reduce the uncertainty of the action policy. Therefore, the ambition of embedding CoT reasoning within a generalist artifical intelligence (AI) framework has produced a series of critical challenges that have yet to be fully resolved. In this paper, we take the first step towards a fully unified LLM framework that learns to perform complex tasks. In order to achieve this goal, both the prompt design and the policy that outputs actions to be executed have to be sufficiently flexible and useful so as to adapt to the current task at hand. Tackling this challenge necessitates learning both to generate appropriate questions (a.k.a. prompts) given environment observations as well as learning how to perform actions that enable the task to be solved. To this end, we introduce a decision-making framework whichlearns to ask pertinent questions or perform introspection, performs CoT reasoning and then learns to take the best actions in the environment.The first component of the framework is enacted by a prompt-generation policy that learns a suitable prompt question given the current challenge and overall task and given its observations of the environment. These prompts serve as inputs to a CoT process; this then allows the framework to perform desired and complex reasoning given the prompt. The CoT thoughts are then inserted into the action-policy that learns to find solutions to tasks that may require both interaction experience and human knowledge embedded in CoT reasoning to solve.Learning how to generate in-demand prompts for the CoT process produces formidable challenges. One such challenge is to ensure that the resulting CoT thoughts enhance the performance of an action policy. Departing from a fixed set of pre-selected, human-crafted prompts and learning to find useful prompts to be fed into the CoT process presents an important challenge. Specifically, ensuring that the resulting CoT thoughts improve the performance of an action-policy that can solve the task. We resolve this challenge by designing a leader-follower Bilevel structure that generates mutually adaptive policies.Each policy is endowed with its own objective — the prompt-generation policy observes the effect of its prompt on the action policy and learns to generate useful prompts, and subsequent CoT outputs that are correctly interpreted. In particular, the prompts and CoT output are chosen so as to minimise the uncertainty of the action policy i.e. the prompt-generation policy chooses prompts that minimise the entropy of the action-policy. The action policy, on the other hand, learns to maximise the environmental reward while taking into account the outputs of the CoT process. Ultimately, the generated thoughts serve to learn a more effective action policy, providing additional information beyond the observation of the environment. These natural language insights embody human knowledge, reducing the need for redundant exploration compared to traditional RL algorithms, which typically require extensive exploration of specific environments to train a competent agent. 
In numerous task environments, expert prompt data for the task is available, such as a well-defined set of subtasks <cit.>. Making use of this in decision-making tasks requires prompts that induce CoT reasoning for performing desirable actions at each state. Nevertheless, often, the information in expert prompt sets is not refined to capture useful specifics at the state level producing a challenge of how to select the appropriate prompt at a given state. In environments where such prompt candidates are not available, the challenge becomes autonomously generating useful prompts using only the environment observations. In Sec. <ref>, we demonstrate  is capable of tackling each of these challenges. First, we demonstrate that  successfully learns to select, from a global set of candidate prompts, the best prompt for each state. We then demonstrate that in problem settings where prompt candidates are not available, successfully generates desirable prompts at each state entirely from state observations.The contributions of this paper can be summarised as follows: ∙ A new framework for auto-generation of prompts for decision-making tasks. An integral component is a prompt-policy or prompt-generator which is trained by our framework to generate prompts that induce low uncertainty in the action-policy which receives thoughts generated by CoT reasoning triggered by the prompts from the prompt-generator. Therefore,the prompt-generator (and hence CoT process) behaves adaptively toward the needs of the action-policy. ∙ A chain-of-thought generation framework in which the thought output of the CoT process are used to guide a policy that takes actions within an environment in order to solve practical tasks. This leverages the benefits of natural language models and CoT reasoning that encapsulate worldly experience and the capacity for deductive reasoning while efficiently tuning the thought pipeline process by tuning the prompt generation policy. ∙ Prompt-tuning plus learning of LLM input-based policy that acts in environment (dual framework).∙ A new bilevel learning algorithm that uses natural language to guide what actions and finds prompts for this desired textual guidance.§ PROBLEM FORMULATIONIn this setting, an agent aims to solve some task by performing a sequence of actions in an environment. Formally, the problem is described by a partially observable Markov decision process (POMDP), which is defined by the following tuple ⟨𝒮,𝒜, P, 𝒪,T,R,γ⟩ where 𝒮 is the set of environment states,𝒜 is the set of actions for the agent,P:𝒮×𝒜→Δ(𝒮) is the the state transition kernel for the environment, 𝒪 is the set of observations. The function ℛ:𝒮×𝒜→Δ(ℝ) is the reward function, which returns a scalar reward conditioned on a state-action pair whose realisation at time step t we denote by r_t∼ R and lastly, γ∈ [0,1] is the discount factor. We introduce an additional variable x_t∈ X contained inthe situation space X. The variable x_t represents extra observed information that can be encoded as text that can help an action-policy to determine its optimal action. Lastly, the observation function is T:𝒮×𝒜× X →𝒪 which is amapping from the environment state, action and situation to the observation set of the agent.In challenging problems, standard methods such as RL struggle to solve these tasks in a sample efficient way. In order to solve complex decision-problems, an agent may be challenged with needing to perform deductive reasoning in order to resolve the challenge of finding an optimal policy. 
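The tuple above can be rendered as a minimal interface as follows; the class and attribute names are purely illustrative and do not correspond to any particular benchmark implementation.

```python
# Illustrative rendering of the POMDP tuple <S, A, P, O, T, R, gamma> augmented with
# the text-encoded situation x_t. Names and types are purely illustrative.
from dataclasses import dataclass

@dataclass
class TextPOMDPStep:
    observation: dict        # o_t = T(s_t, a_t, x_t), the agent-side observation
    situation: str           # x_t, extra information encoded as text
    reward: float            # r_t ~ R(s_t, a_t)
    done: bool

class TextPOMDPEnv:
    gamma: float = 0.99      # discount factor

    def reset(self) -> TextPOMDPStep:
        ...                  # sample an initial state s_0 and emit the first observation

    def step(self, action: int) -> TextPOMDPStep:
        ...                  # apply P(.|s_t, a_t), emit reward and the next observation/situation
```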
To tackle these challenges, we equip the agent with both a dual LLM structure that enables the agent to first, generate its own pertinent prompts from its observations of the current. Then, using these prompts, perform CoT reasoning to perform complex reasoning about the best course of action. Lastly, an action is taken in the environment. The framework can therefore be split into three components: a prompt-generating policy π_ϕ: (𝒪)^j<∞→𝒯. This policy learns to generate prompts after observing (a window of) j<∞ observations and outputs a thought in textual thought space[The Markov property is ensured by setting j=0.]. Second,a thought reasoning policy π^ re: 𝒮→𝒯 — an LLM that reasons about the task at that particular state by performing CoT to generate a thought output. Denote 𝒱 is the vocabulary (with finite words in it). Each thought t∈𝒯∈𝒱^M is described as a sentence with M<∞ tokens in it where 𝒯 is the set of thoughts. The thought reasoning policy π^ re does step-by-step thought reasoning, e.g. Opening the box requires finding the key and then unlocking it in natural language space. The CoT reasoning is performed by an LLM, since  is a plug & play framework, any choice of LLM can be used to perform the CoT reasoning (in our experiments we use GPT 3.5). Lastly, an action-policy π_θ: 𝒪×𝒯→Δ(𝒜). The action-policy makes an observation of the environment and takes the CoT thought as an input then executes actions in the environment. Therefore, at times t=0,1,…, a prompt p_t is generated bythe prompt generation policy i.e. p_t∼π_ϕ(·|o_t,…,o_t-j∧ 0). The prompt is then used by an LLM to trigger a CoT process whose output is a thought υ_t^+∈𝒯. Last, the action-agent samples an action from its policy a_t^+∼π_θ(·|o_t^+ ,υ_t^+), where t^+ is the time immediately after querying the thought reasoning policy π^re at time step t. Therefore, sequence of events proceeds as follows:1. At time t=0,1,… the system is at an environment state s_t∈𝒮.2. A prompt p_t is produced by the prompt generation policy i.e. p_t∼π_ϕ(·|o_t,…,o_t-j∧ 0).3. An action a_t^+∼π_θ(·|o_t,υ_t^+) is taken given the output of the CoT process υ_t^+∼π^ re(p_t).4. The environment state transitions according to s_t+1∼ P(·|s_t,a_t^+). To tackle the problem of learning how to generate prompts while learning the action-policy, we structure the problem as a leader-follower bilevel optimisation <cit.>. This allows the prompt-generator policy to learn how its actions affect the action-policy while action-policy and prompt-generator policy learn concurrently. In this way, the prompt-generator policy alters its output to produce desirable actions from the action-policy while the action-policy learns both how to interpret the CoT outputs and take desirable actions. Since LLMs already contain a vast amount of world knowledge, we here fix the LLM that performs the CoT reasoning, that is we assume that π^ re is pretrained and fixed. We update the prompt-generator policy and action-policy. The aim of the prompt-generator policy is to generate prompts minimise the uncertainty of the action policy action.The optimisation objective can be expressed as a bilevel optimisation problem:(π^*_θ,π^*_ϕ)∈_(π_θ,π_ϕ)∈Π_θ×Π_ϕ_π_θ,π_ϕ,π^ re[-∑_t≥ 0γ^tℋ^π_θ(y_t^+)|y_t^+=(o_t,υ_t^+), υ_t^+∼π^ re(p_t)]s.t. 
π^*_θ ∈argmax_π_θ∈Π_θ 𝔼_π_θ,π_ϕ,π^ re[∑_t≥ 0γ_I^tr_t^+ |p_t∼π_ϕ], ∀π_ϕ∈Π_ϕ,∀π^ re∈Π^ re, where ℋ^π_θ(y_t^+)=-∑_a_t∈𝒜π_θ(a_t|y_t^+)logπ_θ(a_t|y_t^+), y_t^+=(o_t,υ_t^+), is the entropy of the policy π_θ, γ_I,γ∈ [0,1) are the discount factors, and r_t∼ℛ. Note that the bilevel aspect incorporates the nested nature of the optimisation <cit.> — in order to find the optimal prompt, the prompt-generator policy must take into account the anticipated behaviour of both the LLM π^ re and the action policy π_θ and thereafter make its choice accordingly.
§ METHODOLOGY
In this section we describe the training procedure of the proposed Bilevel framework. The prompt generation policy is optimised via the policy gradient, with the behaviour of the action policy as a reward signal. The action policy is served by an LLM with a PPO updater, which avoids hand-crafted engineering by grounding the CoT reasoning to executable actions. In the bilevel framework, the prompt generation policy and the action policy are concurrently optimised until convergence. CoT reasoning with LLMs has proven to be effective in aiding decision-making when well-designed prompts are used <cit.>. However, the quality of CoT reasoning heavily depends on the quality of prompts, which are typically manually designed by humans <cit.>. In traditional Natural Language Processing (NLP) tasks such as sentiment classification <cit.> and news classification <cit.>, prompts are usually provided through sets of input-output pairs. Unlike these NLP tasks with clearly defined input-output examples, the desired format of prompts varies across different decision-making tasks, often requiring substantial manual engineering.
Prompt generation policy training via policy gradient. In this work, we provided task descriptions to GPT-3.5 to curate a set of prompt candidates. For example, in the Overcooked environment, where the goal is to prepare a lettuce-tomato salad, prompt candidates might include queries like “how to slice lettuce," “how to slice tomato," and “how to deliver a lettuce-tomato salad." While human assistance is involved in generating prompt candidates, our work focuses solely on generating prompts about the critical subtasks, similar to the approach in <cit.>, but less extensive than in <cit.>, where human-designed prompt formats are required for the entire decision-making process, encompassing ensuring a subgoal, thinking and acting. Due to the difficulty of training a model that automatically generates reasonable prompts from scratch, we instead use pre-defined prompt candidates, which can be deliberately written by humans or generated by GPT-3.5. We conducted an experiment on using GPT-3.5 to generate prompts, where the task description, state, and abstracted state situation are fed into GPT-3.5 to produce simple prompt questions about how to achieve the goal. An example of letting GPT-3.5 generate prompt candidates is illustrated in Appendix <ref>. With a prompt candidate set 𝒫={p_1, p_2, ⋯ p_K}, we train a prompt generation policy π_ϕ(·|o_t,…,o_t-j∧ 0) over the prompt candidates according to the historical observations o_t,…,o_t-j∧ 0. Each of these natural language prompt candidates can be represented as a high-dimensional vector using a pre-trained and frozen BERT <cit.> model. Denote the embedding of prompt candidate p_i as e_i and the embedding of the historical observations (o_t,…,o_t-j∧ 0) as e_o=ℰ(o_t,…,o_t-j∧ 0) with the encoder ℰ(·).
The prompts' embeddings and the observations' embedding are projected into the same vector space. Denote the mapped embeddings as ê_i=ℳ_p(e_i), ∀ i=1⋯ K, and ê_o=ℳ_o(e_o), where ℳ_p and ℳ_o are projectors for the prompts and the observation sequence, respectively. During the decision-making process, the prompt policy estimates the probability of selecting a prompt candidate p_i based on the similarity between the prompt candidate embedding ê_i and the historical observation sequence's embedding ê_o. The prompt policy is updated via the policy gradient with the negative entropy of the action policy as a reward incentive, and the parameters of the observation encoder ℰ and the projectors ℳ_p, ℳ_o are trainable. The detailed procedure is described below:
∙ For a given decision-making task, we employ GPT-3.5, along with the provided task description, to generate appropriate prompt candidate sets. As a second case, we used human-crafted assistance to generate valuable prompt candidates.
∙ With these K prompts, the prompt generation policy is updated with the objective of maximising the negative action-policy entropy. The objective function is given by: J_ϕ(y|π_θ,π_ϕ,π^ re) =𝔼_π_θ,π_ϕ,π^ re[-∑_t≥ 0γ^tℋ^π_θ(y_t^+)|y_t^+=(o_t,υ_t^+), υ_t^+∼π^ re(p_t), y_0=y]=∑_τ∼π_θ,π_ϕ,π^reρ(τ) [-∑_t≥ 0γ^tℋ^π_θ(y_t)]=∑_τ∼π_θ,π_ϕ,π^reρ(τ) R^o(τ), where ρ(τ) denotes the probability of sampling a whole trajectory under the policies π_ϕ, π_θ, π^re. The cumulative discounted reward related to the action entropy for the outer loop is defined as: R^o (τ)= -∑_t≥ 0[γ^tℋ^π_θ(y_t)|y_t=(o_t,υ_t^+), υ_t^+∼π^ re(p_t)].
∙ We use a policy gradient <cit.> to optimise the prompt generation policy, which obeys the following expression: ∇_ϕJ(y|π_θ,π_ϕ,π^re) ≈1/N∑_t≥0∇_ϕlogπ_ϕ(p_t|o_t,…,o_t-j∧ 0) R̂^o_t(τ). The prompt generation policy π_ϕ is updated according to N sampled trajectories from the policies π_θ, π_ϕ, and π^re. We denote by R̂^o_t(τ)= -∑_i≥ t[γ^i-tℋ^π_θ(y_i^+)|y_i^+=(o_i,υ_i^+), υ_i^+∼π^ re(p_i)] the return-to-go from step t to the end for the outer loop.
CoT reasoning with Prompts. With the selected prompt p_t sampled from the prompt candidate set, the CoT reasoning information is obtained by υ_t^+∼π^re(·|o_t,p_t), where the CoT reasoning policy π^re is served by an LLM such as GPT-3.5. The motivation for integrating CoT reasoning into our bilevel framework is to use the prior knowledge of human experts to provide a high-level guideline for how to solve complicated decision-making tasks. For example, as shown in <ref>, in the Overcooked game, the CoT LLM can generate a sequence of intermediate steps that need to be done, given a prompt about the subtask “how to slice lettuce". To carry out these intermediate steps, previous studies <cit.> rely on hand-crafted, rule-based strategies to interpret the CoT reasoning and perform actions. In this work, we feed the CoT reasoning into the action policy, served by a small LM, to automatically interpret CoT outputs. To reduce the time and cost associated with frequent queries to GPT-3.5, we abstract situations to represent states and store CoT reasoning outputs for the same situations. For example, in the case of two distinct states, even though the agent may be in different positions and neither state involves holding lettuce, they are considered part of the same situation because the steps to slice lettuce remain the same: picking up a lettuce, placing it on the cutting board, and then proceeding to slice it.
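The following minimal sketch illustrates the prompt-selection policy and its policy-gradient update described above. It is a simplified illustration, not the actual training code: the projector dimensions and upstream observation encoder are placeholders, prompt embeddings are assumed to be precomputed by a frozen BERT-like model, and the per-step reward is the negative action-policy entropy as in R^o(τ).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class PromptPolicy(nn.Module):
    """pi_phi: scores the K prompt candidates by similarity to the observation history."""

    def __init__(self, prompt_embs: torch.Tensor, obs_dim: int, proj_dim: int = 128):
        super().__init__()
        # e_1..e_K, assumed precomputed by a frozen BERT-like encoder and kept fixed.
        self.register_buffer("prompt_embs", prompt_embs)
        self.proj_prompt = nn.Linear(prompt_embs.size(1), proj_dim)  # projector M_p
        self.proj_obs = nn.Linear(obs_dim, proj_dim)                 # projector M_o

    def forward(self, obs_emb: torch.Tensor) -> Categorical:
        e_hat = F.normalize(self.proj_prompt(self.prompt_embs), dim=-1)  # mapped prompt embeddings
        o_hat = F.normalize(self.proj_obs(obs_emb), dim=-1)              # mapped observation embedding
        logits = o_hat @ e_hat.T                                         # similarity scores over candidates
        return Categorical(logits=logits)

def action_entropy(action_probs: torch.Tensor) -> torch.Tensor:
    """H^{pi_theta}(y) = -sum_a pi_theta(a|y) log pi_theta(a|y)."""
    return -(action_probs * action_probs.clamp_min(1e-12).log()).sum(dim=-1)

def reinforce_step(optimizer, prompt_log_probs, step_entropies, gamma=0.99):
    """One REINFORCE update on a sampled trajectory; the per-step reward is -H^{pi_theta}."""
    returns, g = [], torch.tensor(0.0)
    for h in reversed(step_entropies):      # build the return-to-go for the outer loop
        g = -h + gamma * g
        returns.insert(0, g)
    returns = torch.stack(returns).detach()
    loss = -(torch.stack(prompt_log_probs) * returns).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the full framework this outer-loop update is interleaved with PPO fine-tuning of the action policy, as described next.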
Action policy training via PPO with LLM. Existing works <cit.> utilise LLMs as the action policy and fine-tune these LLMs to adapt to decision-making tasks, taking advantage of the comprehensive capabilities of LLMs. In our work, we also utilise an LLM as the action policy. Within our framework, in addition to considering the textual observations provided by the environment, we also incorporate the additional CoT reasoning from GPT-3.5 when performing actions. To ground the outputs of the action LLM into executable actions, we fine-tune the action LLM, denoted as π_θ, using Proximal Policy Optimisation (PPO) <cit.>. The objective of the action policy is to maximise the environment return: max_θ𝔼_π_θ,π_ϕ, π^re[∑_t≥0γ_I^t r_t^+ |a_t^+∼π_θ,υ_t^+∼π^re, p_t∼π_ϕ]. During the training phase of the action policy π_θ, we freeze the prompt generation and CoT reasoning policies and fine-tune the action policy with collected trajectories. Additionally, we use the pre-trained LLM Flan-T5 small <cit.>, with fewer than one billion parameters, as the action policy.
Bilevel Optimisation. In our leader-follower Bilevel LLM framework, the prompt generation policy and the action policy are trained alternately, with the other policy kept frozen. On the one hand, the prompt generation policy selects a prompt for the CoT reasoning LLM, whose outputs are expected to be interpreted by the action policy. Thus, the goal of the prompt generation policy is to reduce the uncertainty of the action policy when it encounters challenging decisions; in practical terms, the objective is to minimise the entropy of the action policy. On the other hand, the action policy is trained to effectively solve specific decision-making tasks while benefiting from the CoT reasoning and the experience gathered during exploration. The overall training process of the Bilevel framework is detailed in <ref>.
§ EXPERIMENTS
In this section, we verify the effectiveness of the Bilevel framework on several environments. Further details on experimental settings and ablation studies can be found in the Appendix. We perform our empirical examinations on the following environments:
ChainWorld. The ChainWorld game contains a linear sequence of states, and the available actions for the agent are to go left or go right. The agent gains a reward of 100 at a random end of the chain and -5 at the other end, with a -1 penalty for each move. In each episode, the reward of 100 randomly appears at the left or right end, and the initial position of the agent is randomized, excluding the two ends. There are two situations, corresponding to which side carries the high reward. We consider two settings: ChainWorld(Full), where full observation of the situation and position information is provided, and ChainWorld(Partial), where only a partial observation of the agent's position is available. In the case of ChainWorld(Partial), since the position with a reward of 100 is randomized, the agent must learn to make decisions based on historical trajectory information.
FourRoom. In this game, four rooms are circularly interconnected by four hallways, and an agent needs to reach a goal located in one of these rooms. The agent's position and the goal position are randomly initialized within the four rooms at the start of each game. The objective for the agent is to reach the goal as fast as possible. At each step, the agent receives two types of information: a global observation of the goal's position and its own current position, and a partial observation of the hallways within its current room. Based on this information, the agent decides on and moves one cell.
Overcooked. We validate the bilevel optimisation framework with LLMs on the Overcooked environment <cit.>. Overcooked has a discrete action space with four movement directions: North, East, South, and West. The game contains the following items: a tomato, a lettuce, two cutting boards, and two plates. The agent can pick up food items and chop them on a cutting board, or place the chopped food on a plate. The goal is to make and deliver the desired meal. We designed candidate prompts, with which we also obtain CoT examples from GPT-3.5. We consider two different recipes in Overcooked: delivering a chopped tomato and delivering a tomato-lettuce salad.
BabyAI-text. BabyAI is a collection of navigation and interaction tasks for evaluating reinforcement learning agents. BabyAI was introduced to bridge the gap between current algorithm capabilities and human-level intelligence by facilitating the training of agents that understand language instructions. In this work, we use BabyAI-text <cit.>, which extends the framework with natural language observations. Using BabyAI-text, the prompt policy, which has been trained in natural language, can make immediate use of the current context and produce valuable output prompts. In this environment, the observation consists of a natural language description of the gridworld (e.g. "You see a yellow box on the left"), and the action set contains three movement actions that allow the agent to move forward and turn left/right, and another action for picking up objects. Finally, each specific task contains a description of the goal that needs to be achieved (e.g. "Open the red door"). We compare our framework with two trainable baselines and two baselines that directly prompt GPT-3.5 to perform actions, namely:
GFlan <cit.>. GFlan adopts the LLM Flan-T5 large as the foundation of the action policy and optimises it via the PPO algorithm. GFlan relies solely on textual observations as input and employs this information to estimate the conditional probabilities of the action tokens.
Vanilla PPO <cit.>. Unlike GFlan, which leverages LLMs, Vanilla PPO employs conventional neural networks such as MLPs as the backbone architecture and trains the action policy from scratch. We use the symbolic embedding of states as the input of the action policy.
GPT-3.5. Previous studies <cit.> show that LLMs have impressive reasoning capability on natural language. We test the zero-shot decision-making capability of GPT-3.5, providing task descriptions, textual context, and executable action candidates as the input prompt, and let GPT-3.5 infer the action at the current state.
GPT-3.5 with CoT prompt. CoT prompts have the potential to substantially enhance the performance of GPT-3.5 on complex reasoning tasks. Besides the inputs used in the GPT-3.5 setting, we further incorporate examples of human interactions with the environment or human-established task decompositions as part of the input prompt and instruct GPT-3.5 to think step by step.. We propose the Bilevel LLM framework that integrates prompt generation, CoT reasoning, and action policies. Compared with GFlan, it leverages the additional prompt generation policy to select a suitable question for the CoT reasoning LLM, relying on historical observations. With the selected question, the CoT LLM can reason out a human-like high-level solution to the question from the human expert knowledge contained in LLMs. The CoT reasoning, i.e., the high-level solution, can assist the action policy in solving more complicated tasks.
Comparison with baselines. The results of the comparisons with baselines are shown in <ref>.
Our framework outperforms the other baselines in all environments and also exhibits a smaller standard error than the second-best baseline, GFlan. In most environments, GFlan also outperforms Vanilla PPO. This suggests that using a pre-trained LLM as the backbone of the action policy improves performance and training efficiency, due to the rich prior knowledge contained in the pre-trained LLM. In addition, in most environments, except for ChainWorld (Full), GPT-3.5 and GPT-3.5 (CoT) struggle to solve the decision-making tasks effectively. This indicates that although GPT-3.5 is powerful in generating useful high-level task solutions (thoughts), it faces challenges in long-term decision-making processes due to the complexity of the world model and the rules of the environment. Additionally, grounding the output of GPT-3.5 into executable actions proves to be challenging.
Does the framework learn to automatically generate prompts? We tested the case where prompt information is not available, which requires our method to generate its own prompts. -Auto displays the performance of the framework when the prompt candidates are automatically generated by GPT-3.5 using only the observation and situation descriptions (which may be limited for a task). As shown in <ref>, the framework (combined with GPT-3.5) learns to automatically generate prompts that successfully induce desirable CoT reasoning and actions, solving the task well in ChainWorld(Full). Examples of our automatically generated prompts, which rely on the state and situation descriptions, can be found in the Appendix.
§.§ Ablation Studies
We conducted a series of ablation studies to confirm the usefulness of the components of the framework. In the following, we modified components of the framework in order to validate the following claims:
Does the prompt policy with policy gradient improve performance? In order to validate the claim that the prompts generated by our prompt policy lead to improved performance, we tested the framework against the baseline (Random), which is the framework but with the prompt policy replaced so that a prompt is randomly selected from the candidate set at each time step. In addition, (UCB) views prompt selection from a candidate set as a multi-armed bandit problem and uses the Upper Confidence Bound (UCB) algorithm to select the prompt. In this setting, the UCB algorithm does not consider the historical observations but relies only on the reward signal, i.e., the negative entropy of the action policy, to select a prompt. In addition, the UCB counts are reset at each episode. As shown in <ref>, our framework outperforms all other prompt-policy variants in all environments. The poor performance of (UCB) might be due to its ignoring the environment state when performing prompt selection.
(Figure: Ablation of the entropy objective on ChainWorld (Partial). Left: Normalized AUC reward. Right: Entropy of the action policy.)
Does the entropy objective improve performance? To validate the claim that the entropy objective leads to better performance, we tested the framework against the baseline (Env), which replaces the negative entropy with the reward from the environment. As shown in Fig. <ref>, the framework with the entropy objective outperforms (Env) and exhibits a lower entropy of the action policy.
Can the framework accommodate multimodal state representations? We design a baseline -Symbolic, where the action policy is replaced by that of Vanilla PPO, but taking both the embedding of the CoT output and the symbolic environment state as input.
As shown in <ref>, our framework outperforms GFlan and -Symbolic outperforms Vanilla-PPO, which indicates that the use of prompt questions and CoT reasoning helps improve the capability of action policies with both textual and symbolic state representations.
§ RELATED WORK
Reasoning with LLMs. Previous studies have confirmed that stage-by-stage reasoning significantly enhances the capability of LLMs to solve complex tasks such as mathematical and logical reasoning problems. Chain-of-Thought (CoT) <cit.> prompts contain a series of intermediate reasoning steps, which have been shown to improve the inference ability of LLMs. Self-consistency <cit.> marginalizes over several independent CoT reasoning paths and then selects the most consistent answer. PAL <cit.> integrates executable programs into the CoT reasoning, addressing computation-related problems. Besides using the prior world knowledge contained in LLMs, ReAct <cit.>, Tree-of-Thought (ToT) <cit.> and RAP <cit.> make use of external or internal LLM feedback to produce reasoning traces. ToT <cit.> and RAP <cit.> explore more extensively than CoT: both engage in multiple reasoning paths and construct a reasoning tree to determine the next crucial reasoning action through self-evaluation. In this work, LLMs are applied to address natural multi-step decision-making problems, such as the game of Overcooked, where rational reasoning is essential for each action.
LLMs for RL. Due to the impressive reasoning capabilities and the wealth of prior knowledge of LLMs, a series of studies have attempted to incorporate LLMs into planning algorithms to address decision-making tasks. ICPI <cit.> solves a number of simple interactive RL tasks (such as Maze) without the need for expert demonstrations or gradient computations, which is achieved by using LLMs as the world model and the rollout policy, with historical interactions as in-context examples. <cit.> leverage historical trajectories to prompt an LLM to generate next-step actions in the TextWorld game. GFlan <cit.> aims to ground the LLM Flan-T5 <cit.> in solving a textual interactive task named BabyAI-Text. In this approach, Flan-T5 serves as the action policy and is fine-tuned via online PPO <cit.>. LFG <cit.> utilises an LLM with a polling strategy to recommend and subsequently rank subgoals. In our work, we integrate complex CoT reasoning with LLMs into RL to enhance interpretability and the value of each action while eliminating the need for meticulous engineering to interpret LLM outputs.
Entropy in RL. Entropy has been used extensively in RL as a tool for regularisation <cit.>. The policy in actor-critic methods is often trained with an additional term that aims to maximise the entropy of the learned actions, with the goal of exploring the environment without having the policy collapse early onto suboptimal actions <cit.>. A more formal use of entropy is explored in maximum entropy reinforcement learning <cit.>, where the optimisation objective aims to learn the optimal policy that has the maximum entropy. In this work, we take a different approach, and look at finding prompts that minimise the entropy of the action policy.
Intuitively, this would push the CoT process to provide reasoning that makes the policy sure about its action. Such minimisation of the entropy has also been explored: <cit.> formulate a hierarchical approach to intrinsic options, where entropy is minimised to improve the option sub-trajectories, and <cit.> consider entropy for decision making in the exploration-exploitation trade-off.
Automated Prompt Engineering. The quality of prompts plays a crucial role in determining the output quality of LLMs. Many works hand-craft quality prompts, such as Generative Agents <cit.> and ProAgent <cit.>. Apart from using entirely human-crafted prompts, other studies adopt different degrees of automation when generating meaningful prompts. For example, APE <cit.> and DLN <cit.> generate prompts from multiple examples and utilise an LLM to rank the prompt candidates. PromptPG <cit.> trains an additional prompt selection network using the policy gradient technique, where a deep network generates a probability distribution over a predefined set of prompt examples. We also aim to minimise human labour in prompt engineering; we therefore adopt the PromptPG approach, presetting a group of prompts and letting the algorithm choose among them depending on the environment state.
§ CONCLUSION
We introduce , a bilevel framework that is capable of learning introspective questions (in the form of prompts) and then performing complex reasoning to guide the actions executed by an action-policy. The bilevel nature of the framework accommodates separate objectives for the two learning components: the prompt-generation policy uses an action-policy entropy-minimisation objective, which enables it to induce unambiguous and useful prompts to be fed to the action-policy, while the action-policy learns how to perform actions in the environment while making use of the CoT thoughts, which it learns to interpret. We showed that this leads to a powerful framework that outperforms leading baselines in complex benchmark environments. We believe our framework takes an important step towards generalist artificial intelligence that is capable of introspection and complex decision-making.
http://arxiv.org/abs/2310.18127v1
{ "authors": [ "Xue Yan", "Yan Song", "Xinyu Cui", "Filippos Christianos", "Haifeng Zhang", "David Henry Mguni", "Jun Wang" ], "categories": [ "cs.LG", "cs.AI", "cs.CL" ], "primary_category": "cs.LG", "published": "20231027131919", "title": "Ask more, know better: Reinforce-Learned Prompt Questions for Decision Making with Large Language Models" }
INR-TH-2023-019 K-inflation: the legitimacy of classical treatment Y. Ageeva^a,b,[email: [email protected]], P. Petrov^a,[email: [email protected]] ^a Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Prospect, 7a, 117312 Moscow, Russia ^b Institute for Theoretical and Mathematical Physics, M.V. Lomonosov Moscow State University, Leninskie Gory 1, 119991 Moscow, Russia In this paper we consider the general theory of k-inflation and find that it may be in the strong coupling regime. We derive accurate conditions for the validity of the classical description using unitarity bounds for this model. Next, we choose a simple toy model of k-inflation and obtain the explicit condition which guarantees that the generation of perturbations is performed in a controllable way, i.e. the exit from the effective horizon occurs in the weak coupling regime. However, for the same toy model the corresponding experimental bounds on the non-linear parameter f^equil_NL associated with non-Gaussianities of the curvature perturbation provide a much stronger constraint than the strong coupling absence condition. Nevertheless, for other known models of inflation this may not be the case. Generally, one should always check whether the classical description is legitimate for the chosen model of inflation.
§ INTRODUCTION
Nowadays, inflation <cit.> is a very successful paradigm for understanding the properties of the early Universe. Among many models of inflation, we choose the k-inflation model <cit.> for our purposes. In such models the lagrangian involves a non-canonical kinetic term, which drives the cosmological evolution. Although k-inflation theories are known to be free from obvious pathologies, we address the examination of the strong coupling problem in k-inflation. The energy scale of strong coupling is an important parameter in an effective QFT: it is the maximum energy below which the effective QFT description is valid. The strong coupling energy scale can often be qualitatively estimated by naive dimensional analysis, see for example <cit.>. However, in <cit.> it was shown that more accurate estimates using unitarity bounds, which follow from general unitarity relations, must be used in order to proceed to the correct analysis of the mentioned problem. We show - firstly by the preliminary analysis of the cubic order action for the scalar perturbation - that the strong coupling problem indeed arises in the most general setup of k-inflation. The simple estimation of the s-channel matrix element for the 2→2 scattering processes and the applied unitarity bound provide some non-trivial conditions on the model functions and parameters. Next, in order to improve our estimation, we turn to the explicit calculation of all channels: the s-, t-, and u-matrix elements. The structure of the k-inflation lagrangian and of the cubic order action for the scalar perturbation in this model leads to non-trivial cancellations in the final answer for the matrix element. It also turns out that the t- and u-elements are suppressed compared to the s-channel, and the factor of suppression is the slow roll parameter ϵ, which is usually a small quantity during inflation, ϵ≪ 1. Using the more accurate result for the s-channel element, the unitarity bound provides the final constraint on the parameters of the model. Our next step is to choose a simple toy model of k-inflation in order to show how to apply the unitarity constraints. It turns out that the latter gives a lower bound on the slow-roll parameter ϵ.
As we mentioned above, the analysis of strong coupling involves the cubic order action for the scalar perturbation. The same expansion is used in the calculations of the non-Gaussianity of the curvature perturbation. Thus, it is interesting to compare the conditions on the parameters of the model that come from the observational bound on non-Gaussianity <cit.> and from the validity of the classical description. Note that these bounds have a different nature: the conditions from non-Gaussianity are experimental constraints, while the strong coupling absence guarantees that our classical description is legitimate during the considered times. Again, working with a specific toy model of k-inflation, the non-Gaussianities also lead to a lower bound on the slow roll parameter ϵ. However, the condition from non-Gaussianities turns out to be much stronger than the bound from the strong coupling analysis for the chosen toy model of k-inflation. We emphasize that this result is obtained for the specific model of k-inflation: the situation may differ in other models of inflation. In other words, one should check whether the classical description is legitimate for the chosen theory. For example, other models of inflation may not lead to cancellations between the leading terms in the cubic order action, which can make the conditions of strong coupling absence more restrictive. This paper is organized as follows: a brief review of the general k-inflation model is given in Sec.<ref>. Then the analysis of the strong coupling problem is addressed in Sec.<ref>: simple estimations of the s-channel matrix element and the naive condition from the unitarity bound are given in subsection <ref>. This allows us to highlight the terms which provide the strongest constraints. More accurate calculations of the s-, t-, u-channel matrix elements and the final condition from the unitarity bound are given in subsection <ref>. Sec. <ref> is dedicated to a short discussion of the formulas for the non-linear parameter f^equil_NL associated with non-Gaussianities of the curvature perturbation. Finally, in Sec.<ref> we stick to the specific simple model of k-inflation and find the corresponding constraints on the model parameter from the strong coupling analysis as well as from the bounds on non-Gaussianities. The paper ends with the conclusion in Sec.<ref>. There are two Appendices: Appendix <ref> collects the full expressions of the couplings from the cubic order action for the scalar perturbation, while Appendix <ref> contains the discussion of one interesting subtlety which arises in the calculation of the s-channel matrix element from subsection <ref>.
§ GENERALITIES
In this paper we consider a class of k-inflation models in the framework of the following action:𝒮 = ∫ d^3x dt√(-g)ℒ,where √(-g)≡√(γ) with the three-dimensional metric tensor with determinant γ≡det(^(3)γ_ij), andL = G_2(ϕ, X)+ M_Pl^2/2R,X=-1/2g^μν∂_μϕ∂_νϕ,where G_2(ϕ,X) is an arbitrary function of the scalar field and its kinetic term, and R is the Ricci scalar. Here we also note that we work in the Einstein frame throughout the whole paper. Further, we will use the metric signature (-,+,+,+). We consider the flat FLRW space-time with a scale factor a(t), where t is the cosmic time, so the background equations read <cit.> 3M_Pl^2H^2+G_2-2XG_2X=0,3M_Pl^2H^2+2M_Pl^2Ḣ+G_2=0, where H = ȧ/a is the Hubble parameter. As it was pointed out in <cit.>, one can obtain k-inflation cosmology by solving these equations for a specific form of the G_2 function. The inflation occurs in the slow roll regime, i.e.
ϵ≪ 1 <cit.>, where ϵ is a standard slow-roll parameter which is given by:ϵ≡ -Ḣ/H^2 = X G_2X/M_Pl^2H^2 .The condition ϵ≪ 1 can be satisfied with some specific choice of G_2 form. For instance, one can choose G_2 as <cit.>G_2(ϕ,X) = K(ϕ)X + L(ϕ)X^2 ,where the dimensions of the functions K(ϕ), L(ϕ), and X are as follows [K] = 2, [L] = 0, and [X] = 2; moreover, we note that in our setup we have [ϕ] = 0. This form of G_2 indeed admits the slow-roll inflation solution, and a necessary condition for the accelerated expansion in this case reads <cit.>:X(K+2XL)/M_Pl^2H^2≪ 1.To obtain the latter we also use an expression for the Hubble parameter during inflation (up to the leading order by ϵ)<cit.>:H^2 ≈ -G_2/3M_Pl^2. In order to explore the stability of the model, the strong coupling problem as well as to calculate the primordial scalar non-Gaussianities we need to expand the action (<ref>) up to the second and the third order in the perturbations. In this paper we concentrate on the scalar sector of perturbations only, sincethis sector usually provides the strongest conditions; for instance, see Ref. <cit.>. Later, when we turn to the concrete model of k-inflation, we prove that scalar sector indeed gives the strongest constraints. To this end, considering the perturbations about some background solution, we choose the following form of the metric <cit.>: ds^2 = -[(1+α)^2 - a^-2e^-2ℛ(∂β)^2]dt^2+ 2∂_iβ dt dx^i + a^2 e^2ℛ d x^2 ,whereα and β are non-dynamical scalar perturbations, while ℛ is a physical one. We also note, that we work with the unitary gauge, i.e. δϕ = 0, which fixes the time-component of a gauge-transformation vector, see <cit.> for details. Solving the constraints for α and β, we writethe unconstrained action for scalar perturbation ℛ <cit.>𝒮^(2)_ℛℛ = ∫ dta^3 d^3x𝒢_S (ℛ̇^2 - c_S^2/a^2(∇⃗ℛ)^2), where𝒢_S = XG_2X+2X^2G_2XX/H^2= Σ/H^2,where Σ≡ XG_2X+2X^2 G_2XX; nextc_S^2 = M_Pl^2H^2ϵ/XG_2X+2X^2G_2XX = M_Pl^2H^2ϵ/Σ.Using the expression (<ref>), we can rewrite formula (<ref>) as𝒢_S= M_Pl^2ϵ/c_S^2,where the ratio ϵ/c_S^2 generally is not small. Briefly turning to the stability analysis, we require that 𝒢_S>0, c_S^2 >0,to avoid ghost and gradient instabilities as well as we require that the speed of perturbationsdoesnot exceed thespeed of light,c_S^2≤ 1 .The latter condition isnecessary for the existence of the UVcompletion, see<cit.> for the details.§ STRONG COUPLING REGIME IN K-INFLATION MODELThis Section is dedicated to the computation of the unitarity bounds and corresponding constraints on the parameters of the model. We remind, that we consider pure scalar sector and we take into account only cubic order expansion of the action (<ref>) by the scalar perturbation ℛ. This Section consists of two parts: in the first part we estimate which terms from cubic order action provide the leading contributions to unitarity bound, while in the second part we use these leading terms in order to proceed to the accurate calculation of the corresponding matrix elements and final conditions for the validity of the classical description. §.§ Preliminary analysis In order to show, that we indeed face the strong coupling regime in the considered class of k-inflation model (<ref>), let us firstly carry out the simple dimensional analysis of noted problem. To this end we write the full unconstrained cubic order action for scalar perturbation ℛ <cit.>: 𝒮^(3)_ℛℛℛ = ∫ dta^3d^3x {Λ_1 ℛ̇^3+ Λ_2 ℛ̇^2ℛ+ Λ_3 ℛ̇^2 ∂^2 ℛ/a^2 +Λ_4 ℛ̇ℛ∂^2 ℛ/a^2. 
+ Λ_5 ℛ̇(∂_i ℛ)^2/a^2+Λ_6 ℛ(∂_i ℛ)^2/a^2+ Λ_7 ℛ̇(∂^2 ℛ)^2/a^4+ Λ_8 ℛ(∂^2 ℛ)^2/a^4+ Λ_9 ∂^2 ℛ(∂_i ℛ)^2/a^4+ Λ_10ℛ̇(∂_i ∂_j ℛ)^2/a^4+ Λ_11ℛ(∂_i ∂_j ℛ)^2/a^4 + Λ_12ℛ̇∂_i ℛ∂_i ψ+ Λ_13∂^2 ℛ∂_i ℛ∂_i ψ/a^2+Λ_14ℛ̇(∂_i ∂_j ψ)^2+ Λ_15ℛ(∂_i ∂_j ψ)^2 + . Λ_16ℛ̇∂_i ∂_j ℛ∂_i ∂_j ψ/a^2 + Λ_17ℛ∂_i ∂_j ℛ∂_i ∂_j ψ/a^2},where ∂^2 = ∂_i∂_i andψ=∂^-2ℛ̇.Actually, there are non-trivial cancellations in the models with the lagrangian (<ref>) among Λ_7, …,Λ_11 terms from the action (<ref>). Indeed, substituting the lagrangian (<ref>), as well as expressions (<ref>) and (<ref>) into the general formulas for these coefficients, which are listed in Appendix <ref>[All other couplings Λ_1-Λ_6, Λ_12-Λ_17 expressions are listed in Appendix <ref> as well.],we arrive toΛ_7 =M_Pl^2/2H^3, Λ_8= -3M_Pl^2/2H^2, Λ_9= -2M_Pl^2/H^2,Λ_10= -M_Pl^2/2H^3,Λ_11=3M_Pl^2/2H^2,and after quite simple integration by parts <cit.> this part of cubic action significantly simplifies as follows𝒮^(3)_7,8,9,10,11=∫dtd^3x1/a {Λ_9 ∂^2 ℛ(∂_i ℛ)^2+ (Λ_10ℛ̇+Λ_11ℛ) ((∂_i ∂_j ℛ)^2 - (∂^2 ℛ)^2) }= ∫dtd^3x{d/dt(Λ_10/3a)-Λ_11/a-2/3aΛ_9}ℛ( (∂^2 ℛ)^2-(∂_i ∂_j ℛ)^2) ,where curly brackets read-1/a(H(Λ_10/3)-d/dt(Λ_10/3)+Λ_11+2/3Λ_9) = -M_Pl^2ϵ/2aH^2,where non-zero contribution comes from the second term with the time derivative, i.e. from d/adt(Λ_10/3), while the combination of other three terms with Λ_9,Λ_10,Λ_11 give zero. After that, we will denote this contribution from formula (<ref>) asΛ_*≡-M_Pl^2ϵ/2H^2.To find the conditions of the validity of the classical description, we turn to the generalizedunitarity bound and use the method which was described in <cit.>. According to this method, we need to rewrite the quadratic action (<ref>) as:𝒮^(2)_ℛℛ =1/2∫ d^3x dη[ℛ̃^'2 -c_S^2 (∇⃗ℛ̃)^2], so for that oneintroduces new field ℛ̃ = z ℛ with z = a√(2𝒢_S). Here also dη = adt is a conformal time, which we will use in the calculations below; the prime means the derivative with respect to conformal time ' ≡d/dη. In Appendix <ref> we write down the cubic order action (<ref>) in terms of ℛ̃, formula (<ref>). Having the latter we can proceed tothe analysis of the potential strong coupling problem. To this end, making use of all terms in the cubic action (<ref>), with Λ_i replaced by Λ_i,(j)∝Λ_i/𝒢_S^3/2, [New index (j) can be explained by the replacement of ℛ to ℛ̃/z, where z depends on conformal time, so taking the derivative with respect to conformal time provide several terms with different Λ_i,(j), see Appendix <ref> for details.] it is straightforward to estimate 2→2 scattering amplitude, while in the subsection <ref> we calculate this amplitude accurately. Firstly, the dimensional analysis leads us to the schematic formula for the tree 2→2 matrix element[For this kind of estimations weconsider the s-channel matrix element only.] <cit.> M_i,(j)∼1/E^2·{Λ_i,(j)·E^a·(E/c_S)^b}^2,where a and b are the number of time and spatial derivatives for each term in (<ref>). We consider the center-of-mass frame for our purposes. The conservation laws for the latter are as followsp⃗_1 + p⃗_2 = p⃗_3 + p⃗_4 = 0,E_1 + E_2 = E_3 + E_4 = E,|p⃗_1| = |p⃗_2|, |p⃗_3| = |p⃗_4|,where p⃗_1,2, E_1,2 and p⃗_3,4, E_3,4 are the incoming and outcoming particles momenta and energies, respectively. Next, we findE_1,2,3,4 = E/2,where E is the center-of-mass energy. Due to (<ref>), the dispersion relation readsE_1,2,3,4 = c_S p_1,2,3,4,thusp_1,2,3,4 = E/2c_S. Coming back to the formula (<ref>), the factor 1/E^2 presents the s-channel propagator. 
Next, since the energy and momentum of the scalar are related by ω = c_S p (note, that we reserve the notation E for the center-of-mass energy), spatial momentum of an incoming or outgoing scalar is of order p ∼ E/c_S. This clarifies the factor (E/c_S)^b, coming from the Fourier of spatial derivative. Moreover, in the case of center-of-mass frame the energies of incoming (noted as ω_1,2) and outgoing (noted as ω_3,4) scalars are ω_1,2,3,4∼ E, thus we count the possible factor E^a from the Fourier of time derivative. We square the expression in curve brackets in eq. (<ref>) since for our naive estimations we consider the easier case when both vertices are the same. The corresponding partial wave amplitude (PWA) is given by <cit.>ã_l = 1/2c_S^31/32π∫ d(cosx)P_l(cosx) M,so, omitting all numerical coefficients we can write for l=0 and for each M_i,(j)(ã_0)_i,(j)∼M_i,(j)/c_S^3. It is known from Refs. <cit.> that the amplitudes at classical energy scales saturate the unitarity bound |ã_0| ≤ 1/2. The classical energy scale is given by Hubble parameter H, and thelatter was obtained in cosmic time t, see eqs. (<ref>). However, the amplitudes (<ref>) are given in conformal time η, so one should substitute conformal energies E at which unitarity bound saturates as E = E_class = a H.Finally, bound |ã_0| ≤ 1/2 provides the set of constraints from each matrix element M_i,(j)[Some of the amplitudes provide the same constraint.]: 1/ϵ^3/2≤H^3M_Pl^7/Σ^5/2, ϵ≤Σ^3/2/H^4 M_Pl^2, 1/ϵ^3/2≤M_Pl^3 Σ^3/2/H λ_1^2, 1/ϵ^3/2≤M_Pl^3/HΣ^1/2, 1/ϵ^7/2≤H^3 M_Pl^7/Σ^5/2, 1/ϵ^7/2≤ M_Pl^3/HΣ^1/2, where λ_1 ≡ X^2 G_2XX + X^3 G_2XXX/3and we put all related calculations in Appendix A. Since ϵ^-1 is an enhancement factor, the strongest conditions are (<ref>) and (<ref>), coming from Λ_3-Λ_6, Λ_*, Λ_13, Λ_16, Λ_17 terms. We will not consider the terms with other couplings in our more accurate analysis of the amplitudes since they provide suppressed contribution.§.§ Strong coupling absence: accurate analysis In this subsection we go ahead to precisely calculate tree matrix elements - s-, t-, and u-channels - and find more accurate constraints on model parameters from the strong coupling problem analysis. We mention once again, that we work with the center-of-mass frame. We start with s-channel, corresponding diagram is shown in Fig. <ref> (left one) and corresponding conservation laws are given by eqs. (<ref>). Thus, the s-channel matrix element isiM_s = - i (E^6 Σ^2 +E^4 Σ (-8 H^2 M_Pl^2+5Σ)a^2H^2 + 4 E^2(-2H^2 M_Pl^2+Σ)^2 a^4H^4)/128ϵ^2Σ M_Pl^4a^6H^6.Next, the expression for the t-channel matrix element (corresponding diagram is given on Fig. <ref>, central one) readsiM_t = - i {E^3ϵ(x^2-1) + 8Ea^2H^2(3+2x-4x^2+ 2H^2 M_Pl^2(x-2)/Σ)}^2/1024 ϵ M_Pl^2a^6H^4(x-1), where x ≡cos θ. Changing x → - x, one obtains the u-channel amplitude:iM_u =i {E^3ϵ(x^2-1) + 8Ea^2H^2(3-2x-4x^2- 2H^2 M_Pl^2(x+2)/Σ)}^2/1024 ϵ M_Pl^2a^6H^4(1-x), and the diagram for this process is the right one on Fig. <ref>. The matrix elements for t- and u-channels can be obtained straightforwardly (though the calculations are quite cumbersome), while the s-channel element calculation involves some subtlety, which is related to the terms with ψ factor (<ref>) in the cubic order action (<ref>). 
We discuss how to deal with such a subtlety in Appendix <ref>.Before turning to the partial wave amplitude, we note that M_t and M_u are suppressed by ϵ as compared to M_s, so we will use M ≈ M_s, where initially M is the full matrix element, given by the sum of all channels amplitudes. Finally, we find the PWA (<ref>) with l = 0, which provides the lowest bound on the amplitudeã_0 = (-8H^4 M_Pl^4 +12H^2M_Pl^2Σ-5Σ^2)√(Σ)/4096πϵ^7/2M_Pl^7H^3,where we substitute classical E = aH which saturates the unitarity bound|ã_0| ≤1/2.In Section <ref> we will choose a specific model of k-inflation and obtain the concrete constraint on model parameters. If the parameters of the model satisfy these constraint then the classical description is valid.§ PRIMORDIAL NON-GAUSSIANITIES Another conditions on the parameters of the model of k-inflation with the lagrangian (<ref>) comes from the observational constraints onprimordial scalar non-Gaussianities.The extent of non-Gaussianity can be quantified by evaluating the bispectrum of curvature perturbations ℛ, as⟨ℛ(k⃗_1) ℛ(k⃗_2) ℛ(k⃗_3)⟩=(2 π)^3δ^(3)(k⃗_1+k⃗_2+k⃗_3) B_ℛ(k_1, k_2, k_3),where ℛ(k⃗) is a Fourier component of ℛ with a wave number k⃗ and the bispectrum isB_ℛ(k_1, k_2, k_3)=(2 π)^4(𝒫_ℛ)^2/∏_i=1^3 k_i^3𝒜_ℛ(k_1, k_2, k_3),which translates into a non-linear parameter f_NL asf_NL = 10/3𝒜_ℛ/∑^3_i=1k_i^3,where 𝒫_ℛ is a power spectrum and 𝒜_ℛ being its amplitude. The bispectrum can be of different forms depending on the relation between the k⃗_1, k⃗_2, k⃗_3. In this paper, we stick to the well-known equilateral configuration f_NL^equil with k_1 = k_2 =k_3. The corresponding calculations of the scalar non-Gaussianities for the k-inflation with the lagrangian (<ref>) are given in Ref. <cit.>. The non-linear parameter f^equil_NLfor the equilateral form is given by <cit.> f_NL^equil = 85/324(1-1/c_S^2) - 10/81λ/Σ + 55/36ϵ/c_S^2 + 5/12η/c_S^2-85/54s/c_S^2,where the following notations were used:η≡ϵ̇/(Hϵ),s ≡ċ_S/(Hc_S),λ≡ X^2 G_2XX + 2 X^3 G_2XXX/3, and η≪ 1, s≪ 1, while λ is generally not small.In the next Section we choose a concrete model of k-inflation and find the specific form of conditions of model parameters coming from primordial scalar non-Gaussianities. § CONSTRAINTS ON THE MODEL PARAMETERS FROM STRONG COUPLING PROBLEM AND SCALAR NON-GAUSSIANITIES In this Section we choose a concrete model of k-inflation and show that some non-trivial condition on the parameter of the model indeed arises from the requirement of the classical description validity.To this end, we take the lagrangian (<ref>) withG_2(ϕ,X) =- 16M_Pl^2/9γ^2ϕ^2X + 16 M_Pl^2/9γ^2ϕ^2M^2 X^2,where γ is a parameter with [γ] =0, and M is another dimensional parameter, [M]=1; the setup with eq. (<ref>) is similar to the one from Ref. <cit.>.For this model equations of motion (<ref>) provide H = 2M/3√(3)γϕ,X = M^2/2,and for the scalar field we obtainϕ = Mt+c,choosing ϕ>0 during 0<t<+∞, without the loss of generality.Here c is a dimensionless constant. After that we find all other functions and they read:Σ = 16M^2M_Pl^2/9γ^2ϕ^2, 𝒢_S = 12 M^2_Pl, c_S^2 = √(3)/8γ,so γ >0 due to the stability requirement (<ref>). Also, the slow roll parameter (<ref>) for the model (<ref>) isϵ = 3·√(3)·γ/2≪ 1,which provides that γ≪ 1 as well as c_S^2≪ 1. 
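As a purely numerical illustration (not part of the original text), the expression above for f_NL^equil can be evaluated directly once c_S^2, ϵ, η, s and λ/Σ are specified; the values used below are placeholders rather than model predictions.

```python
def f_nl_equil(cs2, eps, eta, s, lam_over_sigma):
    """Equilateral non-linear parameter, following the expression for f_NL^equil above."""
    return (85.0 / 324.0 * (1.0 - 1.0 / cs2)
            - 10.0 / 81.0 * lam_over_sigma
            + 55.0 / 36.0 * eps / cs2
            + 5.0 / 12.0 * eta / cs2
            - 85.0 / 54.0 * s / cs2)

# For small c_S^2 and slow-roll suppressed eps, eta, s, the first term dominates,
# approaching f_NL^equil ~ -85/(324 c_S^2), the limit used later in the text.
print(f_nl_equil(cs2=0.01, eps=1e-3, eta=0.0, s=0.0, lam_over_sigma=0.0))
```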
This situation is similar to Ref.<cit.>, so this justifies that scalar sector provides the strongest conditions of classical description validity.In the considered model of k-inflation the cosmological perturbations with a slightly red-tilted power spectrum may be generated <cit.>. The power spectrum of ℛ perturbations is given by <cit.>:𝒫_ℛ = 𝒜_ℛ(k/k_*)^n_S-1=H^2/8π^2𝒢_Sc_S^3 ,where 𝒜_ℛ is an amplitude, k_* is a pivot momentum, n_S is a spectral tilt. Surely, we require that the exit beyond effective horizon must occur in the weak coupling regime. To this end we turn to unitarity bound (<ref>) to see whether this condition can be satisfied at the times when the relevant modes of perturbations exit the effective horizon. The corresponding PWA (<ref>) in the chosen model (<ref>) readsã_0 = -73 M^2/8748· 3^3/4·√(2)·π·γ^11/2M_Pl^2(Mt + c)^2,where we substitute eqs. (<ref>), (<ref>), and (<ref>). To obtain a rough estimate, we find the exit time t_f at k = k_*, keeping in mind the smallness of |n_S-1|:(M t_f + c)^2 = h_0^2/8π^2 𝒢_Sc_S^3𝒜_ℛ, h_0 = 2M/3√(3)γ,where we use eqs. (<ref>), (<ref>), (<ref>), and (<ref>).At t_f eq. (<ref>) takes quite simple formã_0 = -73 π𝒜_ℛ/432 γ^2,and counting all numerical factors together as well as substituting observational value of 𝒜_ℛ=2· 10^-9 <cit.> we arrive toã_0 = -1.1 · 10^-9/γ^2,and, finally, the unitarity bound (<ref>) provides γ≥ 4.6· 10^-5. Next, one can obtain an additional condition on the k-inflation model parameter based on the current experimental bounds for scalar non-Gaussianities, i.e. f_NL^equil = -26± 47 <cit.>. The leading term from eq. (<ref>) isf_NL^equil≈ -85/324c_S^2,and in the model with (<ref>) and c_S^2 given by eq. (<ref>) we obtainf_NL^equil≈ -1.2/γ,where the behaviour ∼ 1/γ coincides with Ref. <cit.>. The observational value of f_NL^equil has an error larger than the value itself, i.e. f_NL^equil = -26± 47, for 68 % CL<cit.>. Thus, let us choose the biggest confidence region, for example, 99.7 % CL from Ref. <cit.> (see Fig. 19 therein). Roughly, this confidence region provides the constraint |f_NL^equil|<180, so:γ > 0.0067.The result is as follows: in the considered model of k-inflation the absence of strong coupling problem (<ref>) is guaranteed in the presence of the observational bound for scalar non-Gaussianities (<ref>). Finally, let us find other constraints coming from the calculations of the spectral tilt n_S and r-ratio. We start with the spectral tilt, which reads <cit.>n_S - 1 ≈ -2 ϵ = -3√(3)γ,where for the second equality we have substituted ϵ from eq. (<ref>). For the observational values n_S = 0.9649± 0.0042 <cit.> the corresponding γ satisfies 0.0060<γ < 0.0075.Next and final constraint comes from the observational upper bound on r-ratio <cit.>:r ≡𝒫_T/𝒫_ℛ = 4 𝒢_Sc_S^3/𝒢_Tc_T^3 ,where 𝒢_T is a coupling from second order action for tensor perturbations𝒮_T = ∑_λ∫ dt d^3x a^3 𝒢_T [ḣ_λ - c_T^2/a^2(∂ h_λ)^2],and c_T is a tensor perturbation sound speed; λ means two polarization of tensor perturbation. The correspondingexperimental upper bound is <cit.>r < 0.032.For the model (<ref>) we have𝒢_T = 1/4 M_Pl^2,and so r-ratio (<ref>) isr = 6 · 3^3/4·√(2)·γ^3/2,where we also substitute eq. (<ref>). Finally, applying eq. (<ref>) we arrive toγ < 0.014. Thus we conclude, that the strongest conditions coming both from the observational bounds onn_S and f_NL^equil are0.0067<γ<0.0075.However, if one takes another confidence region when calculating f_NL^equil, for example 68 % CL, see Ref. 
<cit.>, then the model of k-inflation with (<ref>) will be ruled out due to the inconsistency among the conditions fromn_S, r-ratio and non-Gaussianities. § CONCLUSIONThis paper demonstrates that the specific model of k-inflation (<ref>) with G_2 given by eq. (<ref>) meets strong coupling problem. However, it is possible to find such parameters of the model that the approach of classic field theory is legitimate during considered k-inflation. We prove this statement with the proceeding to the accurate analysis of 2→2 processes and corresponding matrix elements, and then apply unitarity bound in order to obtain a non-trivial condition of the model parameter γ. Another constraint comes from the recent observational data for scalar non-Gaussianities from Planck <cit.>. We find out that the latter is much stronger than the condition from strong coupling absence. Thus, we conclude that the model of k-inflation with (<ref>) is healthy: choosing γ parameter from the permitted area, one obtains a stable theory, where the classical description is valid, and corresponding f_NL^equil is allowed by the current experimental bounds. We should also note, that forsimplicity we make our calculations for the scalar sector of primordial perturbations only, however, there are mixed and tensor sectors as well. We expect that, as usual (see, for instance Ref. <cit.>), these sectors give even weaker constraints than the scalar one. There remains another important question: does the same result hold for each known (and phenomenologically interesting) model of inflation or maybe it is not the case? For example,models of G-inflation contain higher order partial derivatives in the cubic action for scalars, thus, it potentially can strengthen strong coupling absence condition.§ ACKNOWLEDGMENTSThe authors are thankful to Valery Rubakov for valuable discussions and very useful comments at early stages of this work back in the days. The authors are grateful to Mikhail Shaposhnikov, Eugeny Babichev, Pavel Demidov, Sergei Demidov, Sergei Mironov, Petr Satunin, Vladislav Barinov, Andrei Kataev and Victoria Volkova for fruitful discussions and careful reading of this manuscript. This work has beensupported by Russian Science Foundation Grant No. 19-12-00393.§ EXPRESSIONS FOR Λ_I,(J) equationsection The purpose of this Appendix is to list the cubic order action coefficients from eq. (<ref>). The general expressions for Λ_i, with i = 1,…,17 are given in Ref. <cit.>, and the formulas for the specific model (<ref>) read: [Λ_1 [ℛ̇^3] 2l = 3Σ^2-2M^2_PlH^2X(3G_2X+4X(3G_2XX+XG_2XXX))/6M^2_PlH^5,; Λ_2 [ℛ̇^2ℛ] 2l = -3Σ(-2M^2_PlH^2+Σ)/2M^2_PlH^4,;Λ_3 [(ℛ̇^2/a^2) ∂^2 ℛ]2l = -Σ/H^4,;Λ_4 [(ℛ̇/a^2)ℛ∂^2 ℛ]=-2M^2_PlH^2+3Σ/H^3,;Λ_5 [(ℛ̇/a^2) (∂_i ℛ)^2]= -M^2_PlH^2+2Σ/H^3,; Λ_6 [(ℛ/a^2) (∂_i ℛ)^2]=M^2_Pl,Λ_7 [(ℛ̇/a^4) (∂^2 ℛ)^2] =M^2_Pl/2H^3,;Λ_8[(ℛ/a^4) (∂^2 ℛ)^2]= -3M^2_Pl/2H^2,Λ_9[(∂^2 ℛ/a^4) (∂_i ℛ)^2] = -2M^2_Pl/H^2,; Λ_10[(ℛ̇/a^4) ( ∂_i ∂_j ℛ)^2] = -M^2_Pl/2H^3,Λ_11[(ℛ/a^4) ( ∂_i ∂_j ℛ)^2]=3M^2_Pl/2H^2,;Λ_12[ℛ̇∂_i ℛ∂^i ψ]= -2Σ^2/M^2_PlH^4,Λ_13[(∂^2 ℛ/a^2) ∂_i ℛ∂^i ψ] = 2Σ/H^3,;Λ_14[ℛ̇( ∂_i ∂_j ψ)^2]= -Σ^2/2M^2_PlH^5, Λ_15[ℛ( ∂_i ∂_j ψ)^2]= 3Σ^2/2M^2_PlH^4,; Λ_16[(ℛ̇/a^2) ∂_i ∂_j ℛ∂^i ∂^j ψ]= Σ/H^4,Λ_17[(ℛ/a^2) ∂_i ∂_j ℛ∂^i ∂^j ψ]= -3Σ/H^3. ]One can rewrite the expressions above, using eq. 
(<ref>) and introducingλ_1 ≡ X^2 G_2XX + X^3 G_2XXX/3,thus, we arrive to [Λ_1 [ℛ̇^3] 2l = (3M^2_Pl(H^2ϵ/c_S^2)^2-2H^2[3 ϵ M^2_Pl H^2+12λ_1])/6H^5,; Λ_2 [ℛ̇^2ℛ]2l = -3M_Pl^2ϵ/2c_S^2(-2+ϵ/c_S^2),;Λ_3 [(ℛ̇^2/a^2) ∂^2 ℛ] 2l = -M_Pl^2ϵ/c_S^2H^2,;Λ_4 [(ℛ̇/a^2)ℛ∂^2 ℛ] =-2M_Pl^2+3M_Pl^2ϵ/c_S^2/H,;Λ_5 [(ℛ̇/a^2) (∂_i ℛ)^2] = -M_Pl^2+2M_Pl^2ϵ/c_S^2/H,; Λ_6 [(ℛ/a^2) (∂_i ℛ)^2]=M_Pl^2,Λ_7 [(ℛ̇/a^4) (∂^2 ℛ)^2] =M_Pl^2/2H^3,;Λ_8[(ℛ/a^4) (∂^2 ℛ)^2]= -3M_Pl^2/2H^2,Λ_9[(∂^2 ℛ/a^4) (∂_i ℛ)^2] = -2M_Pl^2/H^2,; Λ_10[(ℛ̇/a^4) ( ∂_i ∂_j ℛ)^2] = -M_Pl^2/2H^3,Λ_11[(ℛ/a^4) ( ∂_i ∂_j ℛ)^2]=3M_Pl^2/2H^2,;Λ_12[ℛ̇∂_i ℛ∂^i ψ]= -2M_Pl^2(ϵ/c_S^2)^2,Λ_13[(∂^2 ℛ/a^2) ∂_i ℛ∂^i ψ]= 2M_Pl^2ϵ/c_S^2H,;Λ_14[ℛ̇( ∂_i ∂_j ψ)^2]= -M_Pl^2(ϵ/c_S^2)^2/2H, Λ_15[ℛ( ∂_i ∂_j ψ)^2] = 3M_Pl^2(ϵ/c_S^2)^2/2,; Λ_16[(ℛ̇/a^2) ∂_i ∂_j ℛ∂^i ∂^j ψ]= M_Pl^2(ϵ/c_S^2)/H^2,Λ_17[(ℛ/a^2) ∂_i ∂_j ℛ∂^i ∂^j ψ]= -3M_Pl^2(ϵ/c_S^2)/H. ] Using these expressions, as well as keeping in mind the discussion about Λ_7-Λ_11, see eqs. (<ref>)-(<ref>), we substitute field ℛ = ℛ̃/z into eq. (<ref>) and obtain:𝒮^(3)_ℛℛℛ = ∫ dηd^3x {Λ_1(1)ℛ̃^' 3+ Λ_1(2)ℛ̃ℛ̃^' 2+Λ_1(3)ℛ̃^2ℛ̃^'+Λ_1(4)ℛ̃^3 +Λ_2(1)ℛ̃ℛ̃^' 2+Λ_2(2)ℛ̃^2ℛ̃^' + Λ_2(3)ℛ̃^3+Λ_3(1)ℛ̃^2 ∂^2 ℛ̃ +Λ_3(2)ℛ̃ℛ̃^'∂^2 ℛ̃ + Λ_3(3)ℛ̃^' 2∂^2 ℛ̃+Λ_4(1)ℛ̃^2∂^2 ℛ̃ + Λ_4(2)ℛ̃ℛ̃^'∂^2 ℛ̃+Λ_5(1)ℛ̃(∂_i ℛ̃)^2 + Λ_5(2)ℛ̃^'(∂_i ℛ̃)^2+Λ_6(1)ℛ̃(∂_i ℛ̃)^2 + Λ_*(1)ℛ̃( (∂^2 ℛ̃ )^2-(∂_i ∂_j ℛ̃ )^2) +Λ_12(1)∂_iℛ̃ℛ̃^'∂_i ∂^-2ℛ̃^'+ Λ_12(2)ℛ̃^'∂_iℛ̃∂_i ∂^-2ℛ̃+ Λ_12(3)ℛ̃∂_iℛ̃∂_i ∂^-2ℛ̃^'+ Λ_12(4)ℛ̃∂_iℛ̃∂_i ∂^-2ℛ̃+ Λ_13(1)∂^2 ℛ̃∂_i ℛ̃∂_i ∂^-2ℛ̃ + Λ_13(2)∂^2 ℛ̃∂_i ℛ̃∂_i ψ̃+Λ_14(1)ℛ̃^' (∂_i∂_j ∂^-2ℛ̃^')^2+ Λ_14(2)ℛ̃^'∂_i∂_j∂^-2ℛ̃∂_i∂_j ∂^-2ℛ̃^'+ Λ_14(3)ℛ̃(∂_i∂_j∂^-2ℛ̃^')^2 + Λ_14(4)ℛ̃^'(∂_i∂_j∂^-2ℛ̃)^2 + Λ_14(5)ℛ̃∂_i∂_j∂^-2ℛ̃∂_i∂_j∂^-2ℛ̃^' + Λ_14(6)ℛ̃(∂_i∂_j∂^-2ℛ̃)^2 +Λ_15(1)ℛ̃(∂_i∂_j∂^-2ℛ̃^')^2+ Λ_15(2)ℛ̃∂_i∂_j∂^-2ℛ̃∂_i∂_j∂^-2ℛ̃^'+ Λ_15(3)ℛ̃(∂_i∂_j∂^-2ℛ̃)^2+Λ_16(1)ℛ̃∂_i∂_jℛ̃∂_i∂_j ∂^-2ℛ̃+ Λ_16(2)ℛ̃∂_i∂_jℛ̃∂_i∂_jψ̃+ Λ_16(3)ℛ̃^'∂_i∂_jℛ̃∂_i∂_j ∂^-2ℛ̃+ Λ_16(4)ℛ̃^'∂_i∂_jℛ̃∂_i∂_jψ̃+ Λ_17(1)ℛ̃∂_i∂_jℛ̃∂_i∂_j ∂^-2ℛ̃+ Λ_17(2)ℛ̃∂_i∂_jℛ̃∂_i∂_j ψ̃},whereΛ_1(1) =Λ_1/2√(2)𝒢_S^3/2a^2,Λ_1(2) = - 3Λ_1H/2√(2)𝒢_S^3/2a, Λ_1(3) =3Λ_1H^2/2√(2)𝒢_S^3/2, Λ_1(4) =-Λ_1aH^3/2√(2)𝒢_S^3/2, Λ_2(1) =Λ_2/2√(2)𝒢_S^3/2a,Λ_2(2) = - Λ_2H/√(2)𝒢_S^3/2, Λ_2(3) =Λ_2aH^2/2√(2)𝒢_S^3/2, Λ_3(1) =Λ_3H^2/2√(2)𝒢_S^3/2a,Λ_3(2) = - Λ_3H/√(2)𝒢_S^3/2a^2, Λ_3(3) =Λ_3/2√(2)𝒢_S^3/2a^3, Λ_4(1) =-Λ_4H/2√(2)𝒢_S^3/2a,Λ_4(2) =Λ_4/2√(2)𝒢_S^3/2a^2, Λ_5(1) =-Λ_5H/2√(2)𝒢_S^3/2a,Λ_5(2) =Λ_5/2√(2)𝒢_S^3/2a^2, Λ_6(1) =Λ_6/2√(2)𝒢_S^3/2a, Λ_*(1) =Λ_*/2√(2)𝒢_S^3/2a^3. Λ_12(1) =Λ_12/2√(2)𝒢_S^3/2a,Λ_12(2) = - Λ_12H/2√(2)𝒢_S^3/2,Λ_12(3) = - Λ_12H/2√(2)𝒢_S^3/2, Λ_12(4) =Λ_12aH^2/2√(2)𝒢_S^3/2, Λ_13(1) =-Λ_13H/2√(2)𝒢_S^3/2a,Λ_13(2) =Λ_13/2√(2)𝒢_S^3/2a^2, Λ_14(1) =Λ_14/2√(2)𝒢_S^3/2a^2, Λ_14(2) =-Λ_14H/√(2)𝒢_S^3/2a, Λ_14(3) =-Λ_14H/2√(2)𝒢_S^3/2a, Λ_14(4) =Λ_14H^2/2√(2)𝒢_S^3/2,Λ_14(5) =Λ_14H^2/√(2)𝒢_S^3/2, Λ_14(6) =-Λ_14aH^3/2√(2)𝒢_S^3/2 Λ_15(1) =Λ_15/2√(2)𝒢_S^3/2a,Λ_15(2) =-Λ_15H/√(2)𝒢_S^3/2, Λ_15(3) =Λ_15aH^2/2√(2)𝒢_S^3/2, Λ_16(1) =Λ_16H^2/2√(2)𝒢_S^3/2a,Λ_16(2) = - Λ_16H/2√(2)𝒢_S^3/2a^2,Λ_16(3) = - Λ_16H/2√(2)𝒢_S^3/2a^2, Λ_16(4) =Λ_16/2√(2)𝒢_S^3/2a^3, Λ_17(1) =-Λ_17H/2√(2)𝒢_S^3/2a,Λ_17(2) =Λ_17/2√(2)𝒢_S^3/2a^2.We use these Λ_i,(j) to naively estimate the matrix elements M_i,(j) (<ref>). Finally, unitarity bound |(ã_0)_i,(j)|≤1/2 for each (ã_0)_i,(j) (<ref>) provide the conditions on the slow roll parameter ϵ (<ref>).§ SUBTLETY IN THE CALCULATION OF S-MATRIX ELEMENT equationsectionIn this Appendix we discuss a subtlety, which arises in calculations for s-channel matrix element (<ref>). 
Formulas for t-and u-channels can be obtained in a quite straightforward way. We remind, that we consider only such vertices in the matrix element which involve only Λ_3-Λ_6, Λ_*, Λ_13, Λ_16, Λ_17 couplings, since these terms provide the strongest naive constraints (<ref>) and (<ref>), i.e. contributions from other terms are suppressed with ϵ. The mentioned subtlety in s-channel is related to the terms with Λ_13, Λ_16, and Λ_17 couplings which involve ψ = ∂^-2ℛ̇ in the cubic action (<ref>). Recalling that the momentum of propagator equals to zero for the s-channel (see conservation law for momenta (<ref>)), we consider the following terms firstlyΛ_13(1)∂^2 ℛ̃∂_i ℛ̃∂_i ∂^-2ℛ̃ + Λ_13(2)∂^2 ℛ̃∂_i ℛ̃∂_i ψ̃.One can easily see, that we get a 1/0^2 factor as ∂^-2 acting on the propagator. To deal with such contributions, let us introduce a new parameter η⃗→ 0, which satisfies p⃗_1,2⊥η⃗, so we change the center-of-mass frame to a new frame with p⃗_1 '+p⃗_2 ' = η⃗,wherep⃗ '_1,2→p⃗_1,2 + η⃗/2. Next, we find(p_1,2')^2 = p_1,2^2 +(η⃗ )^2/4, (p⃗_1 ',p⃗_2 ') = (p⃗_1,p⃗_2)+(η⃗ )^2/4,as well asE_1'=E_1√(1+c_S^2(η⃗ )^2/4E_1^2)≈ E_1(1+c_S^2(η⃗ )^2/8E_1^2). Thus, considering the left vertex from the diagram in Fig.<ref> we write a related expression for the vertex connected with (<ref>) terms iΛ_13(1) (ip⃗ '_1)^2 (ip⃗ '_2,-iη⃗ )/(-iη⃗ )^2 + iΛ_13(1) (ip⃗ '_2)^2 (ip⃗ '_1,-iη⃗ )/(-iη⃗ )^2+iΛ_13(2) (ip⃗ '_1)^2 (ip⃗ '_2,-iη⃗ )/(-iη⃗ )^2 (iE ') + iΛ_13(2) (ip⃗ '_2)^2 (ip⃗ '_1,-iη⃗ )/(-iη⃗ )^2 (iE').The same “trick” should be done for Λ_16 and Λ_17 terms. The final result with all contributions from Λ_13, Λ_16, and Λ_17 for the left vertex in Fig.<ref> readsV_1 = 1/2(i Λ_13(1)p⃗_1 ^2 - Λ_13(2)p⃗_1 ^2 E + i Λ_13(1)p⃗_2 ^2 - Λ_13(2)p⃗_2 ^2 E )+|η⃗ |^2/2(-iΛ_16 (1) + Λ_16 (2)E - 1/2Λ_16 (3)E_1 -1/2Λ_16 (3)E_2 -i/2Λ_16 (4)E_1 E - i/2Λ_16 (4)E_2 E -i Λ_17 (1) + Λ_17 (2) E) = 1/2(i Λ_13(1)p⃗_1 ^2 - Λ_13(2)p⃗_1 ^2 E + i Λ_13(1)p⃗_2 ^2 - Λ_13(2)p⃗_2 ^2 E ),where in the last equality we take η⃗ = 0 and also we substitute (<ref>). Here we use (i p⃗ '_1, -i η⃗ ) = (p⃗_1 + η⃗/2,η⃗ ) = |η⃗ |^2/2 as well. Using the same logic one can consider the second right vertex with outcoming particles in Fig. <ref> and find out that similar contribution proportional to η⃗ vanishes in the same way as in (<ref>). This concludes our discussion related to a subtlety coming from ψ factor. We note once again, that t- and u-channels do not suffer from this problem, since the propagator's momentum is not zero in these cases. 99 Starobinsky:1980te A. A. Starobinsky,Phys. Lett. B 91, 99-102 (1980) doi:10.1016/0370-2693(80)90670-XGuth:1980zm A. H. Guth,Phys. Rev. D 23, 347-356 (1981) doi:10.1103/PhysRevD.23.347Sato:1980yn K. Sato,Mon. Not. Roy. Astron. Soc. 195, 467-479 (1981) NORDITA-80-29.Linde:1981mu A. D. Linde,Phys. Lett. B 108, 389-393 (1982) doi:10.1016/0370-2693(82)91219-9Armendariz-Picon:1999hyi C. Armendariz-Picon, T. Damour and V. F. Mukhanov,Phys. Lett. B 458, 209-218 (1999) doi:10.1016/S0370-2693(99)00603-6 [arXiv:hep-th/9904075 [hep-th]].Garriga:1999vw J. Garriga and V. F. Mukhanov,Phys. Lett. B 458, 219-225 (1999) doi:10.1016/S0370-2693(99)00602-4 [arXiv:hep-th/9904176 [hep-th]].Ageeva:2018lko Y. A. Ageeva, O. A. Evseev, O. I. Melichev and V. A. Rubakov,EPJ Web Conf. 191, 07010 (2018) doi:10.1051/epjconf/201819107010 [arXiv:1810.00465 [hep-th]].Ageeva:2020gti Y. Ageeva, O. Evseev, O. Melichev and V. Rubakov,Phys. Rev. D 102, no.2, 023519 (2020) doi:10.1103/PhysRevD.102.023519 [arXiv:2003.01202 [hep-th]].Ageeva:2020buc Y. Ageeva, P. 
Petrov and V. Rubakov,JHEP 12, 107 (2020) doi:10.1007/JHEP12(2020)107 [arXiv:2009.05071 [hep-th]].Ageeva:2021yik Y. Ageeva, P. Petrov and V. Rubakov,Phys. Rev. D 104, no.6, 063530 (2021) doi:10.1103/PhysRevD.104.063530 [arXiv:2104.13412 [hep-th]].Ageeva:2022fyq Y. Ageeva and P. Petrov,Mod. Phys. Lett. A 37, no.26, 2250171 (2022) doi:10.1142/S0217732322501711 [arXiv:2206.10646 [gr-qc]].Ageeva:2022nbw Y. Ageeva and P. Petrov,[arXiv:2206.03516 [hep-th]].Planck:2019kim Y. Akrami et al. [Planck],Astron. Astrophys. 641, A9 (2020) doi:10.1051/0004-6361/201935891 [arXiv:1905.05697 [astro-ph.CO]].DeFelice:2011zh A. De Felice and S. Tsujikawa,JCAP 04, 029 (2011) doi:10.1088/1475-7516/2011/04/029 [arXiv:1103.1172 [astro-ph.CO]].Seery:2005wm D. Seery and J. E. Lidsey,JCAP 06, 003 (2005) doi:10.1088/1475-7516/2005/06/003 [arXiv:astro-ph/0503692 [astro-ph]].Chen:2006nt X. Chen, M. x. Huang, S. Kachru and G. Shiu,JCAP 01, 002 (2007) doi:10.1088/1475-7516/2007/01/002 [arXiv:hep-th/0605045 [hep-th]].Weinberg:2008zzc S. Weinberg, “Cosmology”, Oxford, UK: Oxford Univ. Pr. (2008) 593 p.Peng:2016yvb Z. P. Peng, J. N. Yu, X. M. Zhang and J. Y. Zhu,Phys. Rev. D 94, no.10, 103531 (2016) doi:10.1103/PhysRevD.94.103531 [arXiv:1611.02789 [gr-qc]].Ageeva:2022asq Y. Ageeva, P. Petrov and V. Rubakov,JHEP 01, 026 (2023) doi:10.1007/JHEP01(2023)026 [arXiv:2207.04071 [hep-th]].Kobayashi:2019hrl T. Kobayashi,Rept. Prog. Phys. 82, no.8, 086901 (2019) doi:10.1088/1361-6633/ab2429 [arXiv:1901.07183 [gr-qc]].Kobayashi:2015gga T. Kobayashi, M. Yamaguchi and J. Yokoyama,JCAP 07, 017 (2015) doi:10.1088/1475-7516/2015/07/017 [arXiv:1504.05710 [hep-th]].Adams:2006sv A. Adams, N. Arkani-Hamed, S. Dubovsky, A. Nicolis and R. Rattazzi,JHEP 10, 014 (2006) doi:10.1088/1126-6708/2006/10/014 [arXiv:hep-th/0602178 [hep-th]].deRham:2013hsa C. de Rham, M. Fasiello and A. J. Tolley,Phys. Lett. B 733, 46-51 (2014) doi:10.1016/j.physletb.2014.03.061 [arXiv:1308.2702 [hep-th]].Grojean:2007zz C. Grojean,Phys. Usp. 50, 1-35 (2007) doi:10.1070/PU2007v050n01ABEH006157Planck:2018vyg N. Aghanim et al. [Planck],Astron. Astrophys. 641, A6 (2020) [erratum: Astron. Astrophys. 652, C4 (2021)] doi:10.1051/0004-6361/201833910 [arXiv:1807.06209 [astro-ph.CO]].BICEP:2021xfz P. A. R. Ade et al. [BICEP and Keck],Phys. Rev. Lett. 127, no.15, 151301 (2021) doi:10.1103/PhysRevLett.127.151301 [arXiv:2110.00483 [astro-ph.CO]].Tristram:2021tvh M. Tristram, A. J. Banday, K. M. Górski, R. Keskitalo, C. R. Lawrence, K. J. Andersen, R. B. Barreiro, J. Borrill, L. P. L. Colombo and H. K. Eriksen, et al.Phys. Rev. D 105, no.8, 083524 (2022) doi:10.1103/PhysRevD.105.083524 [arXiv:2112.07961 [astro-ph.CO]].
http://arxiv.org/abs/2310.18402v1
{ "authors": [ "Y. Ageeva", "P. Petrov" ], "categories": [ "hep-th", "gr-qc" ], "primary_category": "hep-th", "published": "20231027180006", "title": "K-inflation: the legitimacy of classical treatment" }
[email protected] Institute of Technology [email protected] of Georgia and Georgia Institute of Technology [email protected] Institute of Technology USADomain fronting is a network communication technique that involves leveraging (or abusing) content delivery networks (CDNs) to disguise the final destination of network packets by presenting them as if they were intended for a different domain than their actual endpoint. This technique can be used for both benign and malicious purposes, such as circumventing censorship or hiding malware-related communications from network security systems. Since domain fronting has been known for a few years, some popular CDN providers have implemented traffic filtering approaches to curb its use at their CDN infrastructure. However, it remains unclear to what extent domain fronting has been mitigated.To better understand whether domain fronting can still be effectively used, we propose a systematic approach to discover CDNs that are still prone to domain fronting. To this end, we leverage passive and active DNS traffic analysis to pinpoint domain names served by CDNs and build an automated tool that can be used to discover CDNs that allow domain fronting in their infrastructure. Our results reveal that domain fronting is feasible in 22 out of 30 CDNs that we tested, including some major CDN providers like Akamai and Fastly. This indicates that domain fronting remains widely available and can be easily abused for malicious purposes.Measuring CDNs susceptible to Domain Fronting Pierros Skafidas January 14, 2024v2.2 - notao G_p^n=============================================§ INTRODUCTION Domain Fronting, a technique designed to mask the true endpoints in network communications, works by leveraging (or abusing) shared hosting infrastructure provided by widespread services such as Content Delivery Networks (CDNs). By leveraging a CDN's shared infrastructure, applications may appear to connect to a domain A served by the CDN while, in reality, the traffic is intended for a different destination domain B that is served by the same CDN as well. As a result, a network traffic monitor (e.g., an intrusion detection system or a censorship enforcement device) may believe a client is connecting to domain A, rather than B. In countries with stringent internet restrictions, such as China and Iran, domain fronting has been instrumental for activists and ordinary citizens alike to bypass digital barriers and access platforms like Signal and Telegram <cit.>. However, the same technique has found favor among malicious actors. For instance, APT29, also known as Cozy Bear, reportedly used domain fronting to camouflage their malware command-and-control (C2) infrastructure, complicating detection and attribution <cit.>. Furthermore, according to a recent study <cit.>, about 3.5% of all Cobalt Strike Beacons were configured to use domain fronting to effectively evade detection for a prolonged period of time. The growing reliance on Content Delivery Networks (CDNs) for efficient content distribution has inadvertently opened doors for malicious activities <cit.>. Among other types of attacks, CDNs are leveraged via techniques such as Domain Fronting to hide malware command-and-control(C2) communications<cit.>and to host malicious content such as phishing sites, malicious software downloads, etc. 
In order to detect or defend against domain fronting, censors and network operators are compelled to adopt drastic CDN traffic blocking measures, often with considerable collateral damage, in an attempt to mitigate the associated risks <cit.>. Rather than blocking CDN traffic altogether, a more effective approach to counter this threat lies within the infrastructure of CDNs themselves. To prevent unintended consequences from nationwide censorship, a few popular CDNs have taken measures to prevent domain fronting on their platforms. For example, Google and Amazon disabled domain fronting in their services in 2018 <cit.>, while Microsoft Azure only disabled it recently, in November 2022, following its use by Meek, a Tor plugin for traffic tunneling <cit.>. Irrespective of these measures, there exists evidence that domain fronting may still be leveraged for both benign <cit.> and malicious purposes <cit.>. However, it remains unclear to what extent domain fronting can still be successfully used and on what CDN infrastructure.

In this paper, we present a comprehensive measurement of CDNs that are still prone to domain fronting, offering valuable insights for CDN customers, researchers, and security administrators. To this end, we develop an automated system capable of measuring the potential for domain fronting in a variety of real-world CDNs. Previous work by Fifield et al. <cit.> exposed and tested domain fronting on a limited number of popular web services and major CDNs, using mostly manual effort <cit.>. However, that approach is costly, does not scale, and is insufficient for a comprehensive test of CDN infrastructure to determine which parts of the infrastructure are prone to domain fronting. Unlike <cit.>, our proposed measurement system leverages readily available DNS data to discover domains linked with CDNs and automatically performs domain fronting testing at a large scale, without the need to register any new domain names or host any new services behind each CDN, thus largely reducing the associated manual effort and monetary cost.

Using our proposed measurement system, we first collect domain names served by 38 different CDNs. We found that, contrary to the belief that popular domains are associated only with popular CDNs (e.g., Akamai, Cloudflare, etc.), popular domains within the top 10k of the Tranco <cit.> ranking list also use less popular CDNs. Using our automated measurement system, we then performed domain fronting testing on 30 of the 38 CDNs and found that domain fronting remains possible in 22 of them. Contrary to results reported in a previous study <cit.>, our findings reveal that domain fronting is currently still possible for popular CDN services such as Fastly and Akamai, as well as for a variety of less popular CDNs. 
This finding is also corroborated by third-party evidence suggesting that Fastly is being used as an alternate service in Tor plugins like Meek and SnowFlake <cit.>.In summary, we make the following contributions: * We design a new measurement system that leverages DNS analysis to find domain names related to web content served via CDNs and that can automatically test whether a CDN is prone to domain fronting.* Unlike previous work, our system can automatically test for domain fronting vulnerabilities without the need for registering new domain names or subscribing new web services with a CDN, thus eliminating manual efforts and allowing us to continuously test for domain fronting vulnerabilities at scale.* Using our measurement system, we tested 30 different CDNs, and found that 22 of them are still currently vulnerable to domain fronting, including popular CDNs such as Fastly and Akamai. § BACKGROUND AND MOTIVATION§.§ Domain FrontingDomain fronting exploits the discrepancy between the TLS server name indication (SNI) and the Host header in HTTPS requests related to web content served via Content Delivery Networks (CDNs). CDNs typically rely on the Host header to identify the origin web server[<https://www.cloudflare.com/learning/cdn/glossary/origin-server/>] responsible for satisfying an HTTP request, while the SNI is used for correctly establishing a TLS session (e.g., identify and deliver the correct SSL certificate to the client). Because the SNI is visible to network traffic monitors but the Host header is not (since it is encrypted via TLS), the true endpoint of the communication (the domain in the Host field) can be hidden “behind” the front domain expressed in the SNI.The inherent ability to conceal the true destination makes domain fronting an ideal choice for different use cases.For instance, domain fronting has proved to be a valuable tool in internet censorship circumvention. This is demonstrated by its adoption in a number of widely used applications such as Telegram <cit.> and Signal <cit.>, which are otherwise restricted as part of nation-wide censorship enforcement. At the same time, domain fronting is viewed as a threat by authorities that implement censorship restrictions. Unfortunately, domain fronting has also been adopted by malware developers to hide the communications from malware compromised machines to their command-and-control (C2) server <cit.>. By abusing a legitimate popular domain name as a front domain, they can hide C2 communications from network security and traffic analysis systems. This allows the malware to evade detection and to maintain control over a compromised system for longer periods of time, enabling stealthy data ex-filtration, malware updates, etc. Figure <ref> provides an overview of the steps involved in the use of domain fronting. As an example, we consider the case of a compromised machine that uses domain fronting to hide malware C2 communications, though the steps are similar in other applications (e.g., for censorship circumvention). We assume that the attacker already knows that a benign domain name legitsite.com is served by a given CDN. Before infecting the victim, the attacker registers a new domain evilsite.com and subscribes it to the same CDN used by legitsite.com. Afterwards, the attacker infects victim machines with malware that uses domain fronting to connect to evilsite.com. As a first step, the victim machine issues a DNS query for legitsite.com, to find the IP address of a CDN server. 
Then, the malware initiates a TLS session with the CDN server, sets the SNI to legitsite.com, and sends an HTTP request to the server in which the Host header is set to evilsite.com. Upon reaching the CDN server, the web request is processed based on the information provided in the Host header (i.e., evilsite.com). If the related web content is not cached at the CDN's edge server, the CDN will forward the request to the evilsite.com origin server, obtains a response, and forwards the response back to the malware.Notice that it is common for enterprise networks to use DNS and SNI monitoring to detect and block malicious communications. However, since both the DNS query (step 1) and SNI set by the victim (step 3) indicate a legitimate domain name, both the DNS request and the HTTPS connection will not be blocked.§.§ Motivations for this StudyDomain Fronting has undeniably opened doors for attackers to continue their malicious operations in a covert manner. As a result, it poses significant challenges for network security and content filtering systems while allowing attackers to evade censorship, distribute malware, and establish covert communication channels. Irrespective of its application, a number of steps have been taken to thwart Domain Fronting.To enforce censorship in the presence of domain fronting, some nations, knowingly or unknowingly, have taken drastic measures to block CDN traffic, which resulted in blocking access to popular services such as Google and Amazon <cit.>, thus affecting millions of users. While this may be a potentially effective censorship strategy, this type of extreme countermeasure cannot be easily used to block malware communications in non-censored countries. For instance, consider Exfiltrator-22 <cit.>, a malware that uses domain fronting and abuses Akamai's CDN infrastructure to hide its C2 communications. Suppose, an enterprise network has been compromised by such a malware, and that the malware uses a set of popular legitimate domains as front domains. First, detecting the malware infection via network traffic analysis can be very challenging without sophisticated analysis of encrypted traffic <cit.>, which may also be prone to false positives. Second, if the malware infection is identified and is found to abuse a set of popular domains, the network operator would need to block all traffic to those domains, which may include significant amounts of legitimate traffic. Alternatively, a defender may attempt to block all IP addresses related to the abused CDN, but this would further increase the collateral damage. Furthermore, if traffic is blocked for a given front domain and CDN, the malware could automatically switch to a secondary front domain hosted on a different CDN that allows domain fronting.Some popular CDNs have started to proactively mitigate domain fronting in their platforms by ensuring consistency between SNI and the Host header on all incoming web requests. While this is an effective countermeasure for preventing abuse, it is unclear to what extent domain fronting has actually been mitigated and the related challenges. For instance, * Do popular CDNs block domain fronting throughout their entire infrastructure? * Are there other (perhaps less popular) CDNs that do not block domain fronting at all?* Do these CDNs serve content from popular legitimate domains that can be abused as front domains?These are some of the research questions we aim to investigate in this study. 
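To make the request pattern from the domain fronting overview above concrete, the following minimal Python sketch issues a single fronted request. It is an illustration only: legitsite.com and evilsite.com are the placeholder domains from the example in Figure <ref>, the path /beacon is hypothetical, and the use of the requests library is our own choice rather than part of any tool discussed in this paper. The hostname in the URL drives the DNS lookup, the TLS SNI, and certificate validation, while the overridden Host header, which travels inside the encrypted request, names the true destination.

import requests

FRONT_DOMAIN = "legitsite.com"   # benign domain served by the CDN (placeholder)
TARGET_HOST = "evilsite.com"     # domain actually subscribed to the same CDN (placeholder)
PATH = "/beacon"                 # hypothetical resource on the target's origin server

# DNS, SNI and certificate checks all refer to FRONT_DOMAIN (taken from the URL),
# while the CDN routes the request according to the encrypted Host header.
resp = requests.get(
    f"https://{FRONT_DOMAIN}{PATH}",
    headers={"Host": TARGET_HOST},
    timeout=10,
)
print(resp.status_code, len(resp.content))

A monitor that only sees the DNS query and the SNI would attribute this connection to legitsite.com, which is precisely the evasion scenario outlined above.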
During our initial stages of research, we observed that popular domains do not exclusively use services of popular CDNs and optfor less popular CDNs as well(refer section XX). This is further plausible when domains opt for services such as Multi-CDN support to improve reliable content distribution <cit.>. As a result, this allows attackers to rely on CDNs other than popular CDNs and makes switching to different CDNs easier. Based on these observations, Domain Fronting is still possible, provided if it hasn't been blocked by all CDN platforms. Therefore, in this research, we aim to measure the extent of Domain Fronting possibility using CDNs and devise tools that can be used by censored users and network operators alike to identify CDNs prone to Domain Fronting.§ MEASUREMENT METHODOLOGY In this section, we provide an overview of our proposed measurement system to automatically test whether a CDN is prone to domain fronting. Figure <ref> provides an overview of our system and its components.Our main goal is to facilitate automated testing of domain fronting across many CDNs while avoiding the cumbersome manual process required for registering domains, subscribing CDN services, and paying for any associated costs.Our approach is based on the idea that, rather than registering our own domains with each CDN under test, we can identify domains registered by third-parties that already use those CDN services and leverage them to detect whether a CDN is prone to domain fronting. To achieve this, we first perform DNS traffic analysis to discover a list of domain names served byeach CDN. Then, we test for domain fronting by selecting pairs of domains served by the same CDN to be assigned as either target domain or front domain. The target domain represents the actual destination of the HTTPS requests we will issue, while the front domain serves as the domain used to disguise the true destination of our HTTPS traffic. The underlying idea is that if our tests succeed using existing domain names and web content served by a CDN, any actor (benign or malicious) can register a new domain name and subscribe to the same CDN's services and then use a third-party legitimate domain as front domain. To further confirm that the abuse of CDNs using domain fronting is in fact a real and current threat, we also measured the presence of malicious domains among the domain names that we identified as being associated with each of the CDNs in our data set. Specifically, we leverage Virus Total <cit.> to determine the percentage of CDNs that serve content from malicious domains. Our findings revealed that approximately 31% of the domain fronting prone CDNs served content from one or more malicious domains flagged by at least 2 security vendors in Virus Total platform. While at a high-level this testing approach appears as quite straightforward, in practice, finding domain served by CDNs and testing CDN infrastructure for domain fronting is non-trivial. We discuss our approach in more details in the remainder of this section. §.§ System OverviewAs shown in Figure <ref>, our measurement system consists of three key components: (1) CDN Domain Discovery, (2) URL Discovery, and (3) Domain Fronting Tester. 
These components work together to (1) perform DNS analysis to discover website-related domain names whose content is served by a given CDN, (2) discover specific URLs under those domains that point to existing web content served via the CDN, and (3) use the information gathered from the two previous components to enable automated testing of domain fronting. We elaborate on the role of each system component below. §.§ Discovering Domain Names Served by CDNs Domain Fronting is possible if, and only if, the fronting domain and target domain are hosted on the same CDN. The first component of our measurement system focuses on finding the mapping between domain names and the CDNs that serve their content, by extracting relevant information from DNS records. When a web service w under domain d subscribes to a CDN, the CDN may assign it a custom subdomain s.c of a domain c owned by the CDN. This newly assigned subdomain can then be added as an alias of the subscribed domain in the DNS database for redirecting traffic to the CDN. Namely, a CNAME resource record can be registered for d (the resource record name) that points to s.c (the resource record data for the CNAME). To find a list of domains served by a CDN, we proceed as follows. Given a CDN C (e.g., Akamai, Fastly, etc.), we first compile a list L_C of effective second-level domains (SLDs) used by C to assign CNAMEs to its customers. We derive the list L_C from an openly available list of CDNs <cit.> and extract SLDs for each CDN via manual search. Notice that this initial “seeding” step is the only manual step in our system, which serves to bootstrap our automated measurements.Afterwards, to identify the list of domains related to websites that are served to a given CDN, we use passive and active DNS analysis. Specifically, we analyze DNS traffic passively collect at two large academic networks (with IRB approval) and openly available DNS data collected by the ActiveDNS project <cit.>. For every CDN's SLD, c ∈ L_C, we search the DNS datasets for CNAME records whose record data match c (we use suffix matching for this). We then extract all resource records of the type s.c. To obtain the corresponding domain related to the website hosted by the CDN, we inspect the DNS response that included the CNAME s.c and extract the query name q from the question section of the DNS response where the CNAME was found. We repeat the process for each CDN.Consider the following example to understand the steps involved. Let c be <edgekey.net>. In this case, we use DNS analysis to collect subdomains of c such as <www.microsoft.com-c-3 .edgekey.net>, denoted as s.c. To obtain the corresponding domain related to the website hosted by the CDN, we inspect the DNS response that included the CNAME s.c and extract the query name q from the question section of the DNS response where the CNAME was found. In this example, q = <www.microsoft.com>. By repeating this search for every CDN domain c ∈ L_C, we derive the list D_C of all domains visible from our DNS dataset that are related to websites served by CDN C.§.§ Discovering CDN-served URLs Once we have gathered the list of domains D_C served by a CDN, as explained above, we proceed to map URLs that point to actual web objects under those domains. 
Namely, given a domain q ∈ D_C such as <assets.example.com>, simply issuing a “GET /” HTTP(S) request for the “root” path under domain q (i.e., <https://assets.example.com/>) may not work (an HTTP 4xx message may be returned) without specifying a full path to an existing resource under q. Therefore, to find valid URLs we proceed as follows. Given a domain q ∈ D_C such as <assets.example.com>, we first compute its effective second-level domain (in this case, <example.com>). Then, to find valid URLs under domain q, we have developed a custom Chromium-based web crawler using Puppeteer <cit.>. Our crawler is designed to visit and crawl a generic domain and capture details of all network requests and responses issued by the browser during the browsing session. This includes all requests related to web objects located under the visited domain directly as well as any of its subdomains. Specifically, we point our crawler to the computed effective second-level domain, s, record all URLs requested by the browser while visiting s, and retain the list of all observed URLs whose domain was q. This allows us to discover a subset of full URLs U_C for web objects served by the CDN. In the example above, pointing our instrumented browser to <https://example.com> and crawling its content also allows us to discover web objects (i.e., their full URLs) hosted under domain <assets.example.com>, which is the domain served by the CDN that we are interested in. To enable consistent fronting test results, among the U_C URLs we only retain those that correspond to static web resources, such as images, .js and .css files, etc., whose content remains stable across multiple requests and can therefore be used by our domain fronting testing module. 
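The crawler itself is built on Puppeteer, as noted above. As a rough analogue only, and not the authors' implementation, the following Python sketch uses Playwright (a headless-browser library substituted here purely for illustration) to render a site and record the URLs of static objects whose host matches a CDN-served domain of interest. The 200-status check and the file-extension filter mirror the selection of stable static resources described above.

from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

STATIC_EXT = (".js", ".css", ".png", ".jpg", ".jpeg", ".gif", ".svg", ".woff", ".woff2")

def discover_urls(sld, cdn_fqdns):
    """Render https://<sld>/ and collect static-object URLs hosted on the given CDN-served FQDNs."""
    found = []

    def on_response(resp):
        host = urlparse(resp.url).hostname or ""
        if resp.status == 200 and host in cdn_fqdns and resp.url.lower().endswith(STATIC_EXT):
            found.append(resp.url)

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("response", on_response)  # log every network response seen while rendering
        page.goto(f"https://{sld}/", wait_until="networkidle", timeout=60000)
        browser.close()
    return found

# Example (hypothetical): discover_urls("example.com", {"assets.example.com"})

In our notation, the returned list is this site's contribution to U_C, restricted to the CDN-served FQDNs passed in as the second argument.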
§.§ Domain Fronting Tester Given a CDN C and the related domain names and URLs it serves, which we discover as explained in Sections <ref> and <ref>, the high-level domain fronting testing process is relatively straightforward: (1) select a domain name d ∈ D_C, (2) select a URL u ∈ U_C whose domain d_u is different from d, (3) establish a TLS session with SNI set to d and (4) issue an HTTPS request for URL u with the Host header set to d_u. If we are able to fetch a web object pointed by u with no error, while the SNI points to d, the test succeeds and the CDN is prone to domain fronting.Unfortunately, in practice, the process explained above is insufficient. The reason is that we also need to make sure that the object returned by the HTTPS request to u is the same as the web object that the CDN would serve in a normal transaction (i.e., one in which URL u is requested through the CDN without altering the SNI). Furthermore, we need to verify that this process works consistently for any paris of domains (d, d_u) that are served by the CDN, to check whether all or only part of the CDN infrastructure is prone to domain fronting.Therefore, we refine the testing process as follows. First, given a CDN C, we randomly select up to N chosen tuples consisting of (d_f, d_t, u_t), where d_f is the front domain, d_t ≠ d_f is the target domain, and u_t is a URL under d_t that is served by the CDN (notice that d_t and d_f belong to the set D_C, whereas u_t ∈ U_C). The number N depends on the cardinality of the sets D_C and U_C. For each selected tuple (d_f, d_t, u_t), we proceed as follows:* Step 1: Request Target URL with Target Host First, we craft a regular HTTPS request for URL u_t, so that both the Host header and SNI are set to the same domain d_t. We then store the response content (i.e., the requested web object), r_t, and use it as a reference to validate the result of the next test. * Step 2: Request Traget URL with Front Domain and Target Host In this step, we test domain fronting. Specifically, we issue an HTTPS request for URL u_t but set the TLS SNI to the front domain d_f (the Host header is set to d_t). On receiving a valid response, we store it as r_v and proceed with the next step. * Step 3: Request Target URL with Front Domain and Front Host In this step, we craft a regular HTTPS request for u_t but we replace d_t with d_f. Namely, we set both the Host header and the SNI to d_f. We perform this step to ensure that the requested URL is not available under the fronting domain as well, since this would make the success of fronting test invalid. We store the response as r_f. Test Tuple Validation: While selecting the (d_f, d_t, u_t) tuples for testing, we apply additional filtering to avoid cases that would lead to potential false positives. For instance, domain fronting may be explicitly allowed between domains d_t and d_f if they are related to one another, for instance because one is a subdomain of the other or because the domains are owned by the same organization. In these cases, our test may lead to successful fronting tests even if a CDN proactively blocks domain fronting in general, when unrelated domains are set in the SNI and Host field. In practice, we confirm that domains d_t and d_f are related if: (i) they share the same effective second-level domain, in which case we refer to them as “sibling” domains; or (ii) they are listed in a shared SSL certificate. 
To check for this latter condition, we analyze valid SSL certificates for each domain programmatically and check if d_t and d_f appear together in the Subject Alternative Name field (e.g., if <*.example-1.com> and<*.example-2.net> appear together in a valid SSL certificate, we consider them to be owned by the same organization). This further improves the confidence in the correctness of the results of our domain fronting tester. Fronting Test Validation: By analyzing the responses, we determine that a single test was successful if (i) r_t is a valid HTTP response (no HTTP or SSL error); (ii) r_v matches r_t; (iii) r_f is empty (i.e., no web object was retrieved) or is different from r_t. To compare the content of each response, we compute and compare their SHA1 hash. We repeat these tests up to N times per each CDN (each test is based on a different randomly chosen tuple (d_f, d_t, u_t), as explained earlier).Testing Domainless Fronting: We also measure another variant of Domain Fronting that uses IP shared by many possibly legitimate domains as the Front. To test domainless fronting, we perform an additional test similar to step 2 except we use the server IP collected in crawler logs to replace the front domain in front URL. § MEASUREMENT RESULTSIn this section, we present our results. Overall, our findings reveal that, despite domain fronting being known for a few years, there still exist many CDNs that are prone to it. Specifically, 22 out of 30 CDNs we tested are prone to domain fronting. Notably, we also observed successful domain fronting tests for popular CDNs such as Fastly and Akamai, which serve thousands of highly ranked domains (see Figure <ref>) that could be abused as fronting domains. Besides detailed results regarding domain fronting, we also present additional findings and insights related to domain names served via CDNs, which can help understand the extent to which domain fronting may be successfully abused in practice. §.§ Domain Analysis Results To build a list of domains served by different CDNs, we leverage 10 days of passive DNS traffic collected (with IRB approval) from two large academic networks and via the ActiveDNS <cit.> project, between March 20, 2023 and March 30, 2023.Specifically, we focus on CNAME resource records, which are typically used to direct web requests for domains served by CDNs to an edge CDN server (e.g., at the time of writing, querying for <www.microsoft.com> returns a CNAME chain pointing to an Akamai edge server). We inspect the CNAME resource records that match a large, manually curated list of second-level domains (SLDs) used by CDNs (see Section <ref>). To match the CNAME records against the list of CDN SLDs, we use suffix matching. We then keep only those CNAME records that match any of the SLDs in the curated list and discard the rest. Figure <ref> shows the distribution of CDNs observed from different DNS traffic. Overall, we were able to discover 47 CDNs from the three datasets. As can be observed, it is possible to collect wide range of CDN-related domain information from only using data from Active DNs even if someone doesn't have access to Passive DNS records. Now, let D_f be the set of fully qualified domain names (FQDNs) for which at least one CNAME matched (via suffix matching) a CDN-related SLD. Our next step is to discover full URLs (including the full path to a web object) under those domains that are served by a CDN. To facilitate this next step, we proceed as follows. 
First, let D_s be the set of all effective second level domains (SLDs) extracted from the FQDNs in D_f. We issue a “GET /” for each domain name d ∈ D_s and keep all domains for which the “GET /” request returned a “200 OK” response. We call this reduced set D'_s (domain names that return an error are filtered out). Overall, we found 38,567 domain names belonging to D'_s. We then consider all FQDNs f ∈ D_f whose effective SLD belongs to D'_s, and call this new reduced set D'_f. Namely, D'_f considers all subdomains of each domain in D'_s whose content we found to be served by a CDN. After this step, we found 124,585 distinct FQDNs that are served by 38 different CDNs. Figure <ref> shows the distribution of domains we identified per each CDN. As can be seen, most of the domains we collected are served by major CDN providers, with Cloudfront being responsible for serving 63% of the domains.Now, to discover full URLs served by CDNs, let C represent one of the 38 CDNs we discovered so far. We randomly select up to 100 SLDs from D'_s whose subdomains (at least one) or the SLD itself are served by C, and call this set of domains D^(C)_s. For each domain d ∈ D^(C)_s, we crawl the website pointed by d and logs all HTTP requests and responses issued by our instrumented browser while rendering each web pages under d. We then store all full URLs for which a “200 OK” response was recorded in a set U_C. Finally, we reduce the set U_C by only keeping a url u ∈ U_C if its corresponding domain name d_u belongs to D_f, thus forming a smaller set of URLs that we call U'_C. In summary, U'_C is a set of URLs that point to web objects that are served by CDN C. Thus, at this point we know that an HTTPS requests for a URL u' ∈ U'_C will go through C's CDN infrastructure. We repeat the above process for all 38 CDNs discovered so far. In the end, we were able to find valid URLs for 30 out of the initial 38 CDNs. Specifically, we found 52,998 URLs related to 1,310 distinct FQDNs served by those 30 different CDNs. For the remaining 8 CDNs, we were unable to find any full URL that we could use to issue HTTPS requests and fetch a valid web object via the CDN. It is possible that by crawling the web at large we could find URLs served by those remaining 8 CDNs as well. However, crawling the web in a non-targeted way can be quite time consuming and expensive in terms of resources (e.g., log storage). Therefore, we leave this enhancement step to future releases of our system. Further, a large portion of these URLS(i.e. 30,848) belonged to 739 subdomains.We also observed that we could only find potentially crawl-able top domains with rank<=100k linked to 15 CDNs. Therefore, by adopting crawling process to map URLs of subdomains, we were able to expand our experiments to more CDNs.However, we still were unable to discover these mappings for the rest of 8 CDNs. For instance, even though we found 15 distinctdomains directly related to CacheFly CDN, these were subdomains(e.g. images.overdrive.com) of different websites could not be found by visiting their top domain(e.g. overdrive.com). Popular Domains: CDNs that host popular domains play a crucial role in the success ofdomain fronting, primarily due tothe reduced riskof being blocked. When a popular domain is hosted by a CDN, there is a higher chance that the IP addresses associated with the CDN's edge servers will be considered benign and permitted by network security policies, even if those IP are shared by multiple domains. 
This proves advantageous to actors (malicious or benign) who leverage domain fronting as a means to mask their traffic and evade detection. Therefore, we also explored the distribution of popular (i.e., high rank) domains across the different CDNs.To compute a domain's popularity, we usethe Tranco <cit.> popularitylist, which has been widely used in other web measurement studies. Specifically, we compute two different rankings for each domain name, one based on its fully qualified domain name (FQDN) and the other based on its effective second level domain (SLD) suffix. Figure <ref> shows the distribution of popular domains, belonging to different ranking bands, served each CDN. Surprisingly, there are 26 CDNs that serve popular domains with rank<=10k, based on their SLD. Even if we consider the ranking of FQDN, we can find 22 different CDNs that serve content from popular domains with rank<=500k. This shows that, contrary to what one may have thought, highly popular domain names are not served only via the most popular CDNs (e.g., Akamai, Cloudflare, Fastly, etc.). Instead, the web content of some highly popular domains is served by less well known CDNs as well. which accounted for a substantial percentage, 30%(at least), of the total domains and 34 CDNs were found to host one or more popular domains of the same ranking.Further, this signifies the role of lesser known CDNs in hosting popular domains, despite well-known CDNs like Cloudfront hosting 63% of the overall encountered domains.Malicious Domains: As mentioned before, CDNs can be abused for malicious purposes. Therefore, we also wanted to measure how many domain names served by each CDN are known to be malicious.To this end, we check each domain against VirusTotal <cit.>, and flag domains that are labeled as malicious by different security vendors. We found 11 CDNs that served domains labeled as malicious by at least two different security vendors, and 27 CDNs serving one or more malicious domains flagged by at least one security vendor (see Figure <ref>).§.§.§ Additional CDN Insights To verify whether our method for discovering CDN-served URLs produced consistent results across different time windows, we conducted an additional experiment based on analyzing DNS traffic spanning multiple days. Our objective was to examine whether specific FQDNs were consistently associated with a single CDN throughout our DNS data collected over 10 days. The findings revealed that 99.64% of the FQDNs consistently mapped to (i.e., were served by) a single CDN, ensuring stable measurement results. The small fraction (0.36%) of domains that we found to be associated with different CDNs over time may be related to the use of Multi-CDN services <cit.>. A domain subscribed to such multi-CDN service could in real-time be associated with different CDNsbased on various metrics, such as latency, performance overhead, proximity, demand and other factors. Considering that that number of such cases was negligibly small, we discard these domains from our dataset before conducting domain fronting tests. §.§ Domain Fronting Test ResultsEquipped with a large set of URLs served by 30 different CDNs (derived as explained in Section <ref>), we conducted our domain fronting tests (see Section <ref>) on those 30 CDNs.Figure <ref> and Table <ref> summarizes our results. We found that 22 out of 30 CDNs were prone to domain fronting, including some of the most popular CDN networks, such as Akamai and Fastly. 
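Before turning to how individual tests were validated, the per-tuple check performed by the Domain Fronting Tester can be summarized by the simplified Python sketch below. It is a sketch under our own assumptions: it relies on the requests library, omits the tuple-validation filters (sibling domains and shared certificates) described earlier, and is not the exact code used for the measurements, but it reproduces Steps 1-3 and the SHA1-based comparison of the responses.

import hashlib
from urllib.parse import urlparse
import requests

def fetch_hash(sni_host, host_header, path):
    """SHA1 of the body returned when the SNI follows sni_host and the Host header is host_header."""
    try:
        r = requests.get(f"https://{sni_host}{path}", headers={"Host": host_header}, timeout=15)
        return hashlib.sha1(r.content).hexdigest() if r.status_code == 200 else None
    except requests.RequestException:
        return None

def fronting_test(front_domain, target_domain, target_url):
    path = urlparse(target_url).path or "/"
    r_t = fetch_hash(target_domain, target_domain, path)  # Step 1: reference response
    r_v = fetch_hash(front_domain, target_domain, path)   # Step 2: fronted request
    r_f = fetch_hash(front_domain, front_domain, path)    # Step 3: control with front Host
    # Success: valid reference, fronted response identical, control empty or different.
    return r_t is not None and r_v == r_t and r_f != r_t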
To ensure that the results of our automated tests are accurate, we rely on the results of multiple test cases with varying parameters for each given CDN. In practice, for each CDN we generated multiple tuples, (d_f, d_t, u_t), which we used for testing domain fronting as explained in Section <ref>. Because the number of all possible tuples that we could form for each CDN can be very large, to reduce the total time for the experiments and avoid causing any significant load on CDN infrastructure, we set an upper bound on the number of domain and URL combinations that we used for testing each CDN. Specifically, we randomly select up to 25 domains per CDN and up to 10 URLs per each domain. During the testing, we only retain test tuples that satisfies the condition for a valid test case without causing any potential false positives. Similarly, once the tests are executed, we validate if the successful tests meet all 3 required conditions. At the end, we tested over 46,653 distinct tuples, 513 distinct domains and 2,963 URLs. The tests were successful for 70% of the tested domains.Figure <ref> shows the number of domains used for testing each CDN and the number of domains that were involved in successful domain fronting tuples. Overall, we found domain fronting to still work in 73% (22 out of 30) of the CDNs we tested. Among these, there were 16 CDNs for which domain fronting tests were successful for all the domains we tried. In other CDNs, we found that not all domains could be used for successful domain fronting. Notably, popular CDNs such as Fastly and Akamai resulted in successful domain fronting tests for 100% and 52% of the tested domains, respectively. On average, in case of all 22 CDNs prone to fronting, domain fronting was successful for at least 50% of the tested domains. Further, our results also indicate that 8 of the CDNs had deployed mitigation measures against domain fronting throughout their entire infrastructure. Specifically, cases where 100% of the tests failed includes popular CDNs such as Cloudfront and Cloudflare, which is consistent with their public stance against domain fronting. For a few CDNs (e.g., teridion, reblaze and inxy), the number of domains we were able to discover and use for testing was small (e.g., <=5) and our tests may be insufficient to confirm whether those CDNs have correctly implemented domain fronting mitigation throughout their entire infrastructure. Another interesting result is that, among the domains that we used in successful fronting tests, some were related to potentially sensitive domains, including <www.census.gov>. §.§.§ Analysis of CDNs with Partially Successful Fronting TestsTo verify our results we manually analyzed a subset of our domain fronting tests. Especially, we focused on the 6 CDNs (shown in Figure <ref>) whose domain fronting tests succeeded for only a portion of all the test tuples (i.e., combinations of domains and URLs). For instance, in case of Akamai, 13 of the 25 tested domains led to successful domain fronting. Our manual analysis revealed that, in all 12 cases that failed, the CDN servers used to provide the HTTP response were different from the CDN servers involved in the 13 successful domain fronting tests. Similarly, in case of StackPath, the 2 failed cases involved a CDN server not seen in any of the remaining 15 successful tests. Another case is represented by Adobe. 
In this case, all successful domain fronting tests involved CDN servers with domain names under the <adobeaemcloud.com> zone, whereas CDN servers with different names (e.g., under <omtrdc.net>, which is also an SLD related to Adobe's CDN) caused the domain fronting tests to fail. This indicates that while some of these CDNs have taken measures to mitigate domain fronting, they have not been able to do so consistently across their entire infrastructure. This insight was later partly confirmed during our responsible disclosure process, which we describe in Section <ref>.

For the remaining 3 CDNs, involving a total of 11 domains, the failed tests occurred because the response was served by a CDN other than the one under test. This situation was mostly observed for less popular CDNs, was found to be negligible overall (see Section <ref>), and could be resolved by expanding our DNS data to obtain more testable domains. A few other cases (2 domains) led to SSL errors. Overall, we confirmed that the partially successful cases are still sufficient to establish whether a CDN is prone to domain fronting.

§ DISCUSSION

Benefits of our proposed system: Our system to detect CDNs that are prone to domain fronting can be beneficial to a diverse group of stakeholders. First and foremost, Internet freedom activists and journalists operating in regions with stringent internet censorship rules could use our system to learn which CDNs allow domain fronting and may be used to bypass online restrictions, ensuring uninterrupted access to global information. Second, cybersecurity professionals and IT administrators would gain a better understanding of the ways in which attackers may “hide” from monitoring in their networks, and thus make more informed decisions on which CDN traffic to prioritize for detailed (and computationally expensive) inspection. Furthermore, CDN customers could themselves leverage our system to assess whether their business might suffer collateral damage because their CDN is prone to domain fronting, helping them make a more informed choice of hosting services.

Challenges in Automated CDN Detection: In this project, our system automatically analyzes DNS records to identify domains whose web content is served by CDNs, provided the SLDs used by the CDN infrastructure are known. However, discovering the complete list of domains associated with each CDN and their SLDs is non-trivial. First, to the best of our knowledge, there are no public datasets of such CDN SLDs. Furthermore, by design, CDNs distribute content across a myriad of servers globally to optimize load times and provide redundancy. This distributed and dynamic nature inherently makes complete CDN infrastructure enumeration difficult. 
An alternate option is to use CDNFinder <cit.>, a system that takes websites or domains as input and returns a list of CDNs serving those sites. CDNFinder employs multiple techniques, such as pre-defined lists, HTML rewriting, and IP-to-ASN mapping. However, CDNFinder is not suitable for our project, because we need to identify a wide range of CDNs, instead of focusing only on popular CDNs as CDNFinder does. Also, an initial analysis of CDNFinder results led us to discover a significant number of false positives. In a recent study <cit.>, the authors focus on actively collecting DNS- and HTTP-based measurements to detect the CDNs used by a given website. Similar to CDNFinder, this involves crawling a large, non-targeted set of domains that may or may not be associated with CDNs.

Domain Fronting Mitigation at CDNs: Cloudflare, a prominent CDN provider, offers an insightful example of how CDNs typically handle incoming web requests, which may have implications for domain fronting <cit.>. As explained in the cited article, Cloudflare handles web requests using a multi-tiered system that includes two separate reverse proxies, a TLS proxy and a business logic proxy. When an incoming HTTPS connection reaches Cloudflare's infrastructure, it first encounters the TLS proxy, which is responsible for terminating the TLS connection. Then, requests are processed by the business logic proxy. In the context of domain fronting, the proxies need to check the SNI of each TLS session and verify that the Host field in all subsequent HTTP requests matches the SNI in the corresponding TLS session. According to information provided to us by Cloudflare engineers, Cloudflare checks both the SNI and the Host field at the TLS proxy, which allows for detecting domain fronting before forwarding requests to the business logic proxy. However, we speculate that other CDNs may be using a similar “split proxy” infrastructure without performing the SNI and Host checks at the same proxy <cit.>. In such cases, it may be costly to adapt the underlying CDN infrastructure to make sure that the two reverse proxies collaborate to check for consistency between the SNI and the Host field. In fact, our study revealed that, for some CDNs, only part of the CDN infrastructure is protected against domain fronting. Based on discussions with CDN operators (as part of our disclosure process), we were told that only “new customers” (presumably served by newer infrastructure) are “protected” against domain fronting by default. However, without additional details from CDN engineers, it is difficult to definitively say whether the inability to mitigate domain fronting is due to technical difficulties or to explicit business/policy decisions. 
Further, some CDNs could also choose to deploy this validation for only certain services and users at their own discretion leading to inconsistencies about policies regarding domain fronting within the same CDN infrastructure.Ethical Considerations: It is important to note that our analysis of passive DNS traffic from two different academic networks was approved by the respective institutions. We only inspected DNS traffic to extract CNAME records and to map domain names to resolved IP addresses. Any other network traffic information was discarded. Also, it is worth noting that, to test CDNs for domain fronting, we only establish a limited number of HTTPS connection at a very low rate, and that all HTTP requests we issued are typical request for web objects. Therefore, we are confident that our measurements had no measurable impact on either the CDN infrastructure or the origin servers behind the CDNs. Responsible Disclosure: We have already disclosed our findings to two large CDNs: Fastly and Akamai. Fastly has responded by acknowledging their awareness about the possibility of domain fronting within their CDN infrastructure. They mentioned that they have started to prevent domain fronting by default in their newer CDN service offerings, and that they deal with it on a case-by-case basis for other scenarios. We are awaiting a response from Akamai and we plan to continue our disclosure process to share our results with all other CDNs that we found to be prone to domain fronting. § RELATED WORKS In this section, we discuss studies related to domain fronting and similar techniques. Fifield et al. were the first to introduce Domain Fronting in <cit.>, which included results related to manually testing the capabilities of a small number of popular CDNs. To test whether domain fronting was possible, the authors registered their own domains and manually subscribed them to each of the CDNs being tested. Unlike their mostly manual testing approach, our proposed method is developed to conduct domain fronting tests at a large scale while also avoiding the need to register any new domain names. This greatly reduces manual efforts and costs associated with hosting domains and acquiring CDN services. A subsequent paper <cit.> proposes a different technique, named “domain shadowing,” that can be used to abuse CDNs in combination with domain fronting. In <cit.>, the authors highlight the threat faced by domain manipulation techniques that further serves as a motivation for our work, which sheds light on CDNs that are still prone to domain fronting. In addition, the focus of <cit.> is on domain shadowing, while we measure the prevalence of domain fronting. Yet another work <cit.> proposes a technique named “domain borrowing,” which represents another way of abusing CDNs that is different from domain fronting.Censorship circumvention, which is one of the applications of domain fronting,has also been studied from different angles <cit.>. These works primarily discuss applications of TLS in censorship and associated evasion tactics.There also exists a number of studies <cit.> that showcase different methods related to abusing CDN infrastructure. Such works, in addition to recent reports of in-the-wild attacks that leverage domain fronting <cit.>, demonstrate the growing threat posed by CDN abuse and the importance of automatically discovering potential vulnerabilities or paths of abuse in CDNs. 
Furthermore, our work differs from their work by performing additional crawling for URL mapping and computing content hashes to validate testing. § CONCLUSIONIn this work, we successfully developed a measurement system that can be used to discover domain names and URLs that are served by a CDN, and to perform automated domain fronting tests to determine what CDNs are still prone to domain fronting. Through our evaluation, we discovered 22 CDNs that are prone to domain fronting, including highly popular CDNs such as Akamai and Fastly. The outcomes of our research offer valuable insights on CDNs and highlight the need for further efforts to prevent domain fronting abuse.ACM-Reference-Format
http://arxiv.org/abs/2310.17851v3
{ "authors": [ "Karthika Subramani", "Roberto Perdisci", "Pierros Skafidas" ], "categories": [ "cs.CR", "cs.NI" ], "primary_category": "cs.CR", "published": "20231027020419", "title": "Measuring CDNs susceptible to Domain Fronting" }
Effect of interfacial Dzyaloshinskii - Moriya interaction in spin dynamics of an Antiferromagnet coupled Ferromagnetic double - barrier Magnetic Tunnel Junction Reeta Devi[[email protected]], Nimisha Dutta[[email protected]], Arindam Boruah[[email protected]] and Saumen Acharjee[[email protected]] January 14, 2024 ===========================================================================================================================================================================Given a (projective) conifold transition of smooth projective threefolds from X to Y, we show that if the Gromov–Witten/Pandharipande–Thomas descendent correspondence holds for the resolution Y, then it also holds for the smoothing X with stationary descendent insertions. As applications, we show the correspondence in new cases. § INTRODUCTION Inspired and motivated by string theory, curve counting on Calabi–Yau threefolds has been one of the central topics in algebraic geometry for decades.There are different approaches to this problem. While Gromov–Witten (GW) theory uses stable maps from curves, Donaldson–Thomas (DT) and Pandharipande–Thomas (PT) theories use sheaves with or without extra structure. All three theories are conjectured to be equivalent. The correspondence, namely the equivalence of two theories, was first stated in terms of GW and DT theories by Maulik, Nekrasov, Okounkov and Pandharipande <cit.>.On the sheaf theoretic side, the DT/PT correspondence has been proven by Bridgeland <cit.> and Toda <cit.> for Calabi–Yau threefolds.Since the approaches are very different in nature, the GW/DT or GW/PT correspondence is more difficult to be studied. Considering the works by Bridgeland and Toda, we will focus on GW and PT theories and study the descendent correspondence. For simplicity, we state here the correspondence conjecturefor Calabi–Yau threefolds <cit.>, which does not require descendent insertions.Let M be a Calabi–Yau threefold. For a nonzero curve class β in M, we have the correspondence(M; u)_β = (M; q)_βunder the variable change - q = e^iu.Here, a Calabi–Yau threefold M is a smooth projective threefold with a trivial canonical line bundle and H^1 (_M) = 0.The expressions in the conjecture are generating functions of GW and PT invariants in curve class β respectively. We will review their definitions in <ref> and the descendent correspondence conjecture (Conjecture <ref>). The most important progress regarding GW/PT correspondence is due to Pandharipande and Pixton. They have proven the correspondence in the following cases:*smooth projective toric threefolds (<cit.>); *Fano or Calabi–Yau complete intersections in products of projective spaces with even cohomology insertions <cit.>.Oberdieck <cit.> introduced marked relative invariants, which provide new tools to study arbitrary cohomology insertions. In the stationary case, i.e., all descendent insertions are even classes of positive degree, Oblomkov, Okounkov and Pandharipande <cit.> propose an explicit formula for the GW/PT descendent correspondence via vertex operators.The purpose of the current paper is to prove the correspondence under conifold transitions (Definition <ref>). They are examples of extremal (or geometric) transitions. An extremal transition is a process of a crepant resolution Y → followed by a complex smoothing ⇝ X. We will denote this by X↗ Y. 
It is speculated <cit.> that (simply connected) Calabi–Yau threefolds can be related via extremal transitions, see <cit.> for a survey.The following is our main result for stationary descendants. For the precise formulation, see Theorem <ref>. Let X ↗ Y be a projective conifold transition of smooth projective threefolds. If Y satisfies the descendent / correspondence (Conjecture <ref>), then so does X for descendent insertions (<ref>). These are essentially stationary descendent insertions restricted from the total space of the degeneration. Following the strategy in <cit.>, we will fit X and Y into two degenerations whose special fibers have a component in common and apply the degeneration formulas <cit.>.The main application of Theorem <ref> will be to establish the correspondence in new cases from known cases (<ref>) and (<ref>). We show that the GW/PT correspondence with stationary descendent insertions holds for 44 deformation families of Fano threefolds (Corollary <ref>) and a few classes of smoothings of double solids (Corollary <ref>). The paper is organized as follows. In Section <ref>, we review the (relative) GW and PT-invariants and degeneration formulas. Section <ref> is devoted to the proof of Theorem <ref>.In Section <ref>, we provide applications. For a smooth variety V, we will denote the integral Mori monoid by (V), namely the set of effective curve classes in H_2 (V, ) / tors.If V is complete and β∈ (V), we set_β = _β^V ∫_β c_1 (T_V) . § GROMOV–WITTEN AND PANDHARIPANDE–THOMAS THEORIES We will briefly review the GW and PT-invariants, their correspondence,and the degeneration formulas. We refer the reader to <cit.> for details. Let M be a smooth projective threefold. Fix a curve class β∈(M), integers r∈ℤ_⩾ 0 and n,g∈ℤ.§.§ GW and PT-invariantsWe review descendent GW and PT-invariants of threefolds and the corresponding invariants relative to a divisor.§.§.§ Absolute theoriesFirst, let ' _g, r (M, β) denote the moduli space of r-pointed stable mapsC→ Mwith possibly disconnected domain curves C of (arithmetic) genus g and no contracted connected components. The latter condition requires each connected component of C to represent a nonzero class in (M) and hence β = [C] ≠ 0. The moduli space ' _g, r (M, β) is equipped with a virtual fundamental class <cit.> and its virtual dimension is _β + r. Consider the first Chern class of cotangent line bundle _i associated to the i-th marked point: ψ_i = c_1 (_i) ∈ H^2 (' _g, r (M, β), ), i = 1, ⋯, r. Let _i ' _g, r (M, β) → M,1⩽ i⩽ r,be the evaluation maps. Givenγ_1, ⋯, γ_r ∈ H^∗ (M, ), define the disconnected descendent GW-invariants by⟨τ_k_1 (γ_1) ⋯τ_k_r (γ_r) ⟩' _g, β = ∫_['_g, r (M, β)]^∏_i = 1^rψ_i^k_i∪_i^∗ (γ_i).Note that ' _g, r (M, β) is empty for g sufficiently negative.The associated partition function(M; u ∏_i = 1^r τ_k_i (γ_i) )_β = ∑_g ∈⟨∏_i = 1^r τ_k_i (γ_i) ⟩' _g, β u^2g - 2∈ ((u))is a Laurent series. Next we consider the moduli space of stable pairs. A stable pair(F, s_M → F)on M consists of a pure 1-dimensional sheaf F on M and a section s with 0-dimensional cokernel.Given n ∈ and nonzero β∈ (M), let P_n (M, β) be the moduli space of stable pairs with _2 (F) = β and χ (F) = n. The moduli space P_n (M, β) is fine and projective, and it admits a virtual fundamental class of virtual dimension _β <cit.>.Let →P_n (M, β)× Mbe the universal sheaf. Let π_P and π_M be the projections from P_n (M, β) × M onto the first and second factors respectively. 
For k ∈_⩾ 0, the k-th descendent insertion _k (γ) of a class γ∈ H^p (M, ) is defined by_k (γ) (ξ) = π_P ∗ (π_M^∗ (γ) ·_2 + k () ∩π_P^∗ (ξ)) ∈ H_∙ - 2k + 2 - p (P_n (M, β), )for every ξ∈ H_∙ (P_n (M, β), ). We use the same symbol to denote descendent insertions in GW and PT theories whose meaning should be clear from the context. We will soon see there is a close relation between the two types of insertions via the GW/PT correspondence.Given k_i∈ℤ_⩾ 0 and γ_i ∈ H^∗ (M, ) for i=1,⋯,r, the corresponding descendent PT-invariant is⟨_k_1 (γ_1) ⋯_k_r (γ_r) ⟩_n, β = ∫_[P_n (M, β)]^∏_i = 1^r _k_i (γ_i).Note that the moduli space P_n (M, β) is empty for n sufficiently negative. The associated partition function(M; q ∏_i = 1^r _k_i (γ_i) )_β = ∑_n ∈⟨∏_i = 1^r _k_i (γ_i) ⟩_n, β q^n ∈ (( q ))is a Laurent series as well. The following conjecture of the rationality of partition function was made in <cit.>.The partition function(M; q |∏_i = 1^r _k_i (γ_i) )_β is the Laurent expansion of a rational function in q.If M is Calabi–Yau, then the DT/PT correspondence was proved by Toda <cit.> (see also <cit.>) and Bridgeland <cit.>. Moreover, we have the rationality property: (M; q)_β∈ (q) which is invariant under q ↔ q^- 1. Hence, the variable change in Conjecture <ref> is well-defined. §.§.§ Relative theories Let D be a smooth (not necessarily connected) divisor on M. Relative GW and PT theories enumerate curves with specified tangency to the divisor D. To impose the boundary conditions along D, we need the notion of cohomology weighted partitions.Assume that D is connected, and letbe a basis of H^∗ (D, ). A cohomology weighted partition η with respect tois a set of pairs {(a_1, δ_1), ⋯, (a_ℓ, δ_ℓ)}, δ_j ∈ a_1 ⩾⋯⩾ a_ℓ⩾ 1,such that η (a_j) ∈^ℓ is a partition ofsize |η| ∑ a_j and length ℓ (η) ℓ. The automorphism group (η) consists of σ∈𝔖_ℓ (η) such that η^σ = η.Let D_1,⋯, D_k be the connected components of D and η_i = {(a_ij, δ_ij)}_j, 1⩽ i⩽ k,a cohomology weighted partition over D_i with respect to a fixed basis _i of H^∗ (D_i, ). Let β∈ (M) be a nonzero curve class satisfying β· D_i = |η_i| ⩾ 0 for each 1 ⩽ i ⩽ k.In relative GW-theory, the numbers a_ij record the multiplicities of intersection with the connected divisor D_i while the cohomology classes δ_ij record where the tangency occurs. More precisely, we consider the moduli space introduced by J. Li <cit.>'_g, r (M / D, β, η_1, ⋯, η_k)which parametrizes r-pointed relative stable maps of (arithmetic) genus g ∈ and degree β with possibly disconnected domain curves and relative multiplicities determined by η_1, ⋯, η_k. An element in the moduli space is a map to the stack of expanded relative pairs.As usual, a relative stable map has nonzero degrees on every connected component of its domain. It carries a virtual fundamental class of (complex) dimension _β^M + (ℓ (η_1) - |η_1|) + ⋯ + (ℓ (η_k) - |η_k|) + r,see for example <cit.>.For 1⩽ i⩽ k and 1 ⩽ j ⩽ℓ (η_i), the moduli space has a relative evaluation map_D_i, j'_g, r (M / D, β, η_1, ⋯, η_k) → D_i,which sends a relative stable map to the j-th intersection point with the divisor D_i (according to the fixed ordering). 
By abuse of notation, we write _D_i^∗ (δ_η_i) ∏_j = 1^ℓ (η_i)_D_i,j^∗ (δ_i j).Given γ_1, ⋯, γ_r ∈ H^∗ (M, ) k_i∈_⩾ 0,1⩽ i⩽ r,the relative descendent GW-invariants <cit.> are⟨τ_k_1 (γ_1) ⋯τ_k_r(γ_r) |η_1, ⋯, η_k ⟩' _g, β = 1/∏_j = 1^k | (η_j)|∫_['_g, r (M / D, β,η_1, ⋯, η_k)]^∏_i = 1^r (ψ_i^k_i∪_i^∗ (γ_i) ) ∪∏_j = 1^k _D_j^∗ (δ_η_j).Then the associated partition function(M / D ; u ∏_i = 1^r τ_k_i (γ_i) η_1, ⋯, η_k)_β= ∑_g ∈⟨∏_i = 1^r τ_k_i (γ_i) η_1, ⋯, η_k ⟩'_g, β u^2 g - 2is a Laurent series as before.In relative PT-theory, we consider the moduli space introduced by Li-Wu <cit.> (cf. <cit.>)P_n (M / D, β)which parametrizes stable pairs (F,s) relative to D, such that χ (F) = n ∈ and _2 (F) = β. It carries a virtual fundamental class of dimension <cit.>P_n(M/D,β)=_β^M = ∫_β c_1 (T_M).For each 1 ⩽ i ⩽ k, we have the intersection mapϵ_iP_n (M/D, β) →( D_i, |η_i| )to the Hilbert scheme of |η_i| = β· D_i points of the connected divisor D_i. We recall the Nakajima basis for the cohomology of Hilbert schemes of points and refer the reader to <cit.> for more details. Fix d ∈ and let η = { (a_j, δ_j)}_j be a cohomology weighted partition of size d with respect to _i. Set (η) = | (η)| ·∏_j = 1^ℓ (η) a_j.Following the notation in <cit.> and <cit.>, we writeC_η = 1/ (η) P_δ_1 [a_1] ⋯ P_δ_ℓ (η) [a_ℓ(η)] ·1∈ H^∗ ((D_i, d ), ).Here 1 is the vacuum vector =1∈ H^0((D_i,0), ). Then {C_η}_|η| = d is the Nakajima basis of H^∗ ((D_i,d), ). Suppose that the cohomology basis _i of D_i is self dual with respect to the Poincaré pairing, i.e., for each j, δ_j^∨ =δ_l for some l. The dual partition η^∨ is the cohomology weighted partition {(a_j, δ_j^∨)}_j (with respect to _i). Note that the Nakajima basis is orthogonal with respect to the Poincaré pairing,∫_(D_i,d ) C_η∪ C_ν = (- 1)^d - ℓ (η)/ (η),if ν=η^∨0,otherwise. Given (<ref>), the relative descendent PT-invariants are⟨_k_1 (γ_1) ⋯_k_r (γ_r) |η_1, ⋯, η_k ⟩_n, β = ∫_[P_n (M / D, β)]^(∏_i = 1^r _k_i (γ_i) ) ∪∏_j = 1^k ϵ_j^∗ (C_η_j).The associated partition function is (M / D ; q∏_i = 1^r _k_i (γ_i) η_1, ⋯, η_k )_β = ∑_n ∈⟨∏_i =1^r _k_i (γ_i) η_1, ⋯, η_k ⟩_n, β q^n.The following rationality statement here is parallel to the absolute case <cit.>. Assume D is connected. The descendent partition function (M / D; q∏_i = 1^r _k_i (γ_i) η)_β∈ (( q ))is the Laurent expansion in q of a rational function.§.§ GW/PT correspondenceThe key to relate descendent GW and PT-invariants, which are very different in flavor, is the correspondence matrices found by Pandharipande and Pixton <cit.>. See also <cit.>. The matrices relating GW and DT-invariants were predicted in <cit.>. §.§.§ Absolute versionLet = (_1, ⋯ , _), with _1 ⩾⋯⩾_⩾ 1, be a partition of length ℓ () = and size || = ∑_j.Let ι_ΔΔ→ M^ be the inclusion of the small diagonal in the product M^. For γ∈ H^∗ (M, ), we write γ·Δι_Δ∗ (γ) ∈ H^∗ (M^, )and _{1, ⋯, } (_1, ⋯, _ ) ' _g,(M, β) → M^. The descendent insertion τ_[] (γ) denotesτ_[] (γ) ψ_1^_1 - 1⋯ψ_^_ - 1·_{1, ⋯, }^∗ (γ·Δ).Alternatively, let {θ_j} be a basis of H^∗ (M, ). By Künneth formula, we haveγ·Δ = ∑_j_1, ⋯, j_ c^γ_j_1, ⋯, j_θ_j_1⊗⋯⊗θ_j_,and then we have <cit.>τ_[] (γ) = ∑_j_1, ⋯, j_ c^γ_j_1, ⋯, j_τ__1 - 1(θ_j_1) ⋯τ__ - 1(θ_j_).If γ is the classof a point, then τ_[] () = τ__1 - 1 () ⋯τ__ - 1 ().If = (_1), then τ_[α] (γ) = τ_α_1 - 1 (γ). A universal correspondence matrixbetween the descendent insertions in GW and PT theories was constructed in <cit.>. 
The elements_α, ∈ [i, c_1, c_2, c_3] (( u))of the matrix are indexed by partitions α andof positive size and depend on i = √(- 1) and the formal variables c_j and u. By convention the variable c_j has degree j.The elements ofsatisfy the following two properties <cit.>: *If |α| < ||, then_α,= 0. *The u coefficients of _α, ∈ [i, c_1, c_2, c_3] (( u)) are homogeneous in the variables c_i of degree|α| + ℓ (α) - || - ℓ () - 3 (ℓ (α) - 1). By specializing the formal variables c_i to c_i (T_M), the elements ofact by cup product on H^∗ (M, ) with [i]((u))-coefficients:_α,H^∗ (M, ) → H^∗ (M, [i]((u)))for each partitions α andof positive size.Let α = (α_1, ⋯, α_ℓ) be a partition and P a partition of {1, ⋯, ℓ}. For each S ∈ P, a subset of {1, ⋯, ℓ}, let α_S be the subpartition consisting of the parts α_j for j ∈ S and γ_S = ∏_j ∈ Sγ_j.For even cohomology classes γ_j ∈ H^2 ∗ (M, ), letτ_α_1 - 1 (γ_1) ⋯τ_α_ℓ - 1 (γ_ℓ) = ∑_Pset partitionsof {1, ⋯, ℓ}∏_S ∈ P∑_0 < || ⩽ |α_S|τ_[](_α_S, ·γ_S ). In general, a sign has to be included in Definition <ref> when there is odd cohomology, see <cit.>. But we will focus on even insertions. [<cit.>] Let α = (α_1, ⋯, α_ℓ) be a partition and γ_j ∈ H^2 ∗ (M, ) even cohomology classes. * We can write the descendent correspondence asτ_α_1 - 1 (γ_1) ⋯τ_α_ℓ - 1 (γ_ℓ) = (i u)^ℓ(α) - |α|τ_α_1 - 1 (γ_1) ⋯τ_α_ℓ - 1 (γ_ℓ) + ⋯,where the dots stand for terms τ_[](⋯) with || < |α|. * For the case α = (1^ℓ), we have τ_0 (γ_1) ⋯τ_0 (γ_ℓ) = τ_0 (γ_1) ⋯τ_0 (γ_ℓ). We are now in a position to state the conjectural GW/PT correspondence. Let α = (α_1, ⋯, α_r) be a partition. For (even) classes γ_j ∈ H^∗ (M, ), 1⩽ j⩽ r, we have(- q)^- _β / 2(M; q_α_1 - 1 (γ_1)⋯_α_r - 1 (γ_r) )_β = (- iu)^_β(M; uτ_α_1 - 1 (γ_1) ⋯τ_α_r - 1 (γ_r))_βunder the variable change - q = e^iu. The variable change is well-defined assuming Conjecture <ref>.For the toric case, <cit.> implies the following theorem by taking the non-equivariant limit. If M is a smooth projective toric threefold, then it satisfies the / correspondence, i.e., Conjecture <ref>. We next review the correspondence over the local ℙ^1. Let N = _^1 (- 1)^⊕ 2 and P the projective bundle[We are following the classical tradition, P(E) =( E^∨).] P(N ⊕_^1) over ^1. Let ⊆ P be the subcurve given by the inclusion _ℙ^1→ N⊕_ℙ^1 and E the hyperplane at infinity in P given by N → N ⊕_^1. By the Euler sequence, we havec_1 (T_P) = c_1 (_P (3)) = 3 [E],and hence ∫_ c_1 (T_P) = 0. Because P is a smooth projective toric threefold, the following statement is a special case of <cit.> (cf. <cit.>), which will be used in the proof of Theorem <ref>. For each d ∈, we have the correspondence(P; q)_d [] =(P; u)_d []under the variable change - q = e^iu. §.§.§ Relative versionA relative version of the correspondence matrix was introduced in <cit.>. Let D be a smooth divisor of M. For s ∈, let (M / D)^s be the moduli space of s ordered (possibly coincident) points in M relative to D: p_1, ⋯, p_s ∈ M / D,cf. <cit.>. Note that it is proper and smooth of dimension sM= 3 s. LetΔ_⊆ (M / D)^sbe the small diagonal where all the points p_i are coincident, which is isomorphic to M as a variety. As a variety, (M / D)^1 is isomorphic to M and (M / D)^2 is isomorphic to the blow-up _D × D (M × M). The small diagonal Δ_⊆ (M / D)^2 is the proper transform of the standard diagonal. 
In general, we have the natural small diagonal morphismι_Δ_ M ≅ (M / D)^1 Δ_⊆ (M / D)^s.For any subset S ⊆{1, ⋯, r} of cardinality s, the moduli space '_g, r (M / D, β, η) admits a canonical evaluation_S '_g, r (M / D, β, η) → (M / D)^s,which is well-defined by the definition of a relative stable map (the markings are never mapped to the relative divisor). Supposeis a partition of length . For γ∈ H^∗ (M, ), let γ·Δ_ι_Δ_∗ (γ) ∈ H^∗((M / D)^, ).We define the relative descendent insertion τ_[] (γ) byτ_[] (γ) ψ_1^_1 - 1⋯ψ_^_ - 1·_{1, ⋯, }^∗ (γ·Δ_)Let Ω_M (log D) denote the locally free sheaf of differentials with logarithmic poles along D. The logarithmic tangent bundle T_M (- log D) is the dual of Ω_M (log D). For the relative geometry M / D, the elements ofalso act on H^∗ (M, ) via the substitution c_i = c_i (T_M (- log D)) instead of the substitution c_i = c_i (T_M) used in the absolute case. Then, for even cohomology classes γ_j ∈ H^2 ∗ (M, ), we define τ_α_1 - 1 (γ_1) ⋯τ_α_ℓ - 1 (γ_ℓ) = ∑_Pset partitions of {1, ⋯, ℓ}∏_S ∈ P∑_0 < || ⩽ |α_S|τ_[](_α_S, ·∏_j ∈ Sγ_j )as before via (<ref>) instead of (<ref>). In the presence of odd cohomology classes, a sign must be included which is similar to the absolute case (see Remark <ref>).Now, we can state the conjectural relative descendent GW/PT correspondence <cit.>. Suppose that D is connected and α=(α_1,…,α_r) a partition. For (even) classes γ_j ∈ H^∗ (M, ), 1⩽ j⩽ r, we have(-q)^- _β^M / 2(M / D; q _α_1 - 1 (γ_1) ⋯_α_r - 1 (γ_r) η)_β= (- i u)^_β^M + ℓ (η) - |η|(M / D; u τ_α_1 - 1 (γ_1) ⋯τ_α_r - 1 (γ_r)η)_βunder the variable change e^i u = -q.The variable change is well-defined assuming Conjecture <ref>. §.§ The degeneration formulasLet W be a smooth variety of dimension four and B a smooth irreducible curve with a distinguished point 𝐨∈ B.A flat projective morphism π W → B is a simple degeneration if the following conditions are satisfied: * The morphism π has smooth fibers over B ∖{𝐨};* The special fiber is the unionW_𝐨 = M_0 ∪ M_1 ∪⋯∪ M_kof smooth irreducible components such that for each 1 ⩽ i ⩽ k, the nonempty intersection D_iM_0 ∩ M_i is a smooth connected divisor. Moreover, M_1, ⋯, M_k are pairwise disjoint. This definition is a special case of <cit.>.Let M denote a fixed general fiber W_b andD ∑_i D_i.We will also denote π briefly as MM_0 ∪_D (M_1 ⊔⋯⊔ M_k). We writeι M → W, ι_0M_0 → W, ι_1M_1 ⊔⋯⊔ M_k → Wfor inclusions. The degeneration formulas express the absolute invariants of M via the relative invariants of (M_0, D) and (M_1 ⊔⋯⊔ M_k, D). We state for completeness the formulas in both GW and PT theories. Suppose M is a smooth projective threefold. Suppose γ_1,⋯,γ_r are even cohomology classes on the total space W.For a nonzero class β^'∈ (W), we have∑_β∈ (M)ι_∗β = β^'(M ;uτ_α_1-1(γ_1)⋯τ_α_r-1(γ_r))_β= ∑(M_0 / D; u ∏_i∈ I_0τ_α_i-1(γ_i)η_1, ⋯, η_k )_β_0·∏_j = 1^k(η_j) u^2 ℓ(η_j)(M_j/ D_j;u ∏_i∈ I_jτ_α_i-1(γ_i)η_j^∨)_β_jwhere the summation on the second line runs over * splittings ι_0 ∗β_0 + ι_1 ∗ (∑_i=1^kβ_i) = β^'=ι_*βsuch that β_0 · D_i = β_i · D_i,* partitions I_0∐⋯∐ I_k={1,2, ⋯, r}, and * cohomology weighted partitions η_1,⋯,η_k such that |η_i|=β_i · D_i with respect to a fixed basis of H^∗ (D_i, ) for 1 ⩽ i ⩽ k.See for example <cit.> and <cit.>. For the degeneration formulas in symplectic geometry, see <cit.>. 
With notation as in Theorem <ref>, we have∑_β∈ (M)ι_∗β = β^'_(M; q τ_α_1-1(γ_1)⋯τ_α_r-1(γ_r))_β '= ∑_(M_0 / D; q ∏_i∈ I_0τ_α_i-1(γ_i)η_1, ⋯, η_k )_β_0·∏_j = 1^k (- 1)^|η_j| - ℓ (η_j)(η_j) q^- |η_j| _(M_j/ D_j; q∏_i∈ I_jτ_α_i-1(γ_i)η_j^∨)_β_jwhere the summation on the second line runs over the same index set in Theorem <ref>. See for example <cit.> and <cit.>. For the proof of a version of the statement, see <cit.>. For the parallel statement in DT-theory, see <cit.>. Many cases have been proven in <cit.> and <cit.>.Given a splitting (<ref>), we have the following constraints by adjunction for MM_0 ∪_D (M_1 ⊔⋯⊔ M_k), which are similar to <cit.>:_β^M = _β_0^M_0 + ∑_i = 1^k (_β_i^M_i - 2 β_i · D_i),β_0 · D_i = β_i · D_i1 ⩽ i ⩽ k.This will be important for our arguments.For notational convenience, we set _^' (M_i / D_i; u)_0=_ (M_i / D_i; q)_0 = 1 for each 1 ⩽ i ⩽ k as a convention. This will appear when the curve misses an irreducible component M_i of the degeneration in the application of degeneration formulas. We conclude this section with a well-known lemma. For the convenience of the reader, we provide an argument here.If the monodromy on H_2 (M, ) around 𝐨∈ B is trivial, then ι_∗ H_2 (M, ) → H_2 (W, ) is injective and so is ι_∗ (M) → (W). By hypothesis and the local invariant cycle theorem (see <cit.> or <cit.>), the restriction map ι^∗ H^2 (W, ) → H^2 (M, ) is surjective. According to the universal coefficient theorem, it follows that ι_∗ = (ι^∗)^∨ H_2 (M, ) ≅ H^2 (M, )^∨→ H^2 (W, )^∨≅ H_2 (W, )is injective. Note that ι_∗ preserves effective cycles by the definition of the pushforward.§ MAIN THEOREMWe first review the definition of conifold transitions. Let π→Δ be a projective flat morphism from a smooth fourfoldto the unit disk Δ inand X be a general fiber of it.Suppose the central fiber X=_0 of π has ordinary double points {p_1,⋯,p_k} as singularities. The morphism π together with a projective small resolution ψ Y→X is a (projective) conifold transition. We denote it as X ↗ Y.Let X↗ Y be a conifold transition and use the notation in the definition. The following is the main result of the paper. Suppose that β∈(X) is a nonzero class and α=(α_1,…, α_r) a fixed partition. Asuume γ_i∈ H^∗(), i=1,… ,r, are fixed even cohomology classes and if γ_i∈ H^0(𝔛), then α_i=1.*If Conjecture <ref> holds for Y, then it holds for X and descendent insertions γ_i|X, i=1,… ,r. *If furthermore the GW/PT correspondence, namely Conjecture <ref>, holds for Y, then it holds for X with descendent insertions (<ref>), i.e.,(- q)^- _β / 2(X; q _α_1 - 1 (γ_1|X)⋯_α_r - 1 (γ_r|X) )_β= (- iu)^_β(X; uτ_α_1 - 1 (γ_1|X) ⋯τ_α_r - 1 (γ_r|X))_βThe strategy of the proof is to put the smoothing X and the resolution Y into two different degenerations and apply the degeneration formulas. Sincehas only ordinary double points, the monodromy acts trivially on H^q (X, ) for q < 3 (cf. <cit.>), and thus for q > 3 by Poincaré duality. By the local invariant cycle theorem (see <cit.> or <cit.>), the restriction map H^q () → H^q (X) is surjective for q ≠ 3.§.§ Two simple degenerationsLet =_p_1, ⋯, p_k. We have the following diagramY [d, "ψ"][l,"ϕ",swap] [ld ]X. 
[l," sm."',rightsquigarrow]We will consider two simple degenerations →Δ and →Δ.Special fibers of bothandcontain the blow-up .For the degeneration →Δ, there exists a semi-stable degeneration[The semi-stable reductiondoes not require the existence of a small resolution of _0 =.]→Δ via a degree two base change and blow-ups of(for the construction, see <cit.>).The special fiber is_0=∪ Q_1∪⋯∪ Q_kwhere Q_i is a smooth quadric threefold in ^4. Let E_i be the exceptional divisors ofthe blow-up . Then E_i ≅^1 ×^1 is a hyperplane section of Q_i in ^4.The blow-upintersects Q_i transversally along E_i and Q_i ∩ Q_j = ∅ for all i ≠ j. Set E = ∑_i E_i. We also denote the degeneration →Δ asX ∪_E (⊔_i Q_i).According to the adjunction formula, K_Q_i = (K_^4 + Q_i)|_Q_i = _Q_i (- 3) and c_1 (T_Q_i) = 3 [E_i]. The other degeneration is the deformation to the normal cone →Δ which is the composition of the blow-up Φ_⊔_i C_i ×{0} (Y ×Δ) → Y ×Δwith the projection to Δ. Here, each C_i ψ^- 1 (p_i) is the exceptional (-1, -1)-curve of ψ.The special fiber is_0=∪_1∪⋯∪_kwhere each _i is isomorphic to P (_^1(- 1)^⊕ 2⊕_^1). Note thatis also the blow-up of Y along the exceptional curves C_i's. The transverse intersection E_i = ∩_i is now regarded as the infinity divisor of π_i _i → C_i ≅^1. We also denote the degeneration asY ∪_E (⊔_i _i).We include the following well-known fact about the associated monodromies of the degenerations / Δ and / Δ.The monodromy of /Δ (resp. /Δ) around the special fiber _0 (resp. _0) acts trivially on H_2 (X, ) (resp. H_2 (Y, )). Since the monodromy on H_2 (_t, ) (t ≠ 0) around 0 ∈Δ is trivial (see <cit.> or <cit.>), so is the monodromy on H_2 (_t, ) around 0 ∈Δ. The same holds for the family →Δ since the punctured family is trivial.Because H_2 (X, ) has trivial monodromy over Δ∖{ 0 }, it is canonically isomorphic to H_2 (, ) (see <cit.> or <cit.>). Moreover, we have the following exact sequence (see <cit.>)⊕_i = 1^k[C_i] → H_2 (Y, )H_2 (, ) ≅ H_2 (X, ) → 0.We set ϕ = Φ |_→ Y, which is regarded asthe blow-up → Y of Y along C_i's (via the identification Y ≅ Y ×{0}⊆ Y ×Δ). Note that ϕ induces an injective Gysin homomorphism (cf. <cit.>)ϕ^!PD_∘ϕ^∗∘PD_YH_2 (Y, ) ↣ H_2 (, )where PD_(-) is the Poincaré duality isomorphism.Moreover,(ϕ^!) = {β∈ H_2 (, ) |β· E_j = 0for 1 ⩽ j ⩽ k }.Under the identification H_2 (, ) ≅ H_2 (X, ), we have the following commutative diagram[column sep=tiny] H_2 (Y, ) [d, "ψ_∗", two heads] [rrr, "ϕ^!", tail] H_2 (, ) [d, "ι_0 ∗"] H_2(, ) [r, phantom, ""] H_2(X,) [rr, "ι_∗", tail]H_2 (, ).§.§ GW/PT correspondenceWe prove the following key proposition for both GW and PT-invariants, which relates invariants over X and Y to those over Y. This is done by applying the degeneration formulas to the degenerations (<ref>) and (<ref>). Let X ↗ Y be a conifold transition of smooth projective threefolds and keep the notation as in Section <ref>. Suppose that α=(α_1,…, α_r) is a fixed partition. *Let β∈ (X) be a nonzero class and γ_1,⋯,γ_r∈ H^∗() be even classes. Suppose if γ_i∈ H^0(𝔛), then α_i=1.Then we have^'_(X; u∏_i=1^rτ_α_i-1(γ_i))_β = ∑_ ψ_∗β_Y = β^'_( / E; u ∏_i=1^rτ_α_i-1(γ_i))_ϕ^!β_Y,_(X;q ∏_i=1^rτ_α_i-1(γ_i) )_β = ∑_ ψ_∗β_Y = β_( / E; q ∏_i=1^rτ_α_i-1(γ_i))_ϕ^!β_Ywhere the summations are finite.Here, by abuse of notation, γ_i on the left is viewed as a class on X as a pull-back via the inclusion X↪𝔛, and γ_i on the right is viewed as a class on Y via the obvious maps Y↪𝒳_0 →𝔛. * Let _j be the rational curve in_j via the inclusion _C_j→ N_C_j / Y⊕_C_j. 
Let β_Y ∈ (Y) be a nonzero class and γ_1,⋯,γ_r∈ H^*(Y) be even classes. Suppose either their restrictions to the exceptional curves C_j, j=1,…, k, are zero, or if γ_i∈ H^0(𝔛), then α_i=1.Then we have_^'(Y;u ∏_i=1^rτ_α_i-1(γ_i))_β_Y= ∑_^'( / E; u ∏_i=1^r τ_α_i-1(γ_i) )_ϕ^!β_Y^'∏_j = 1^k_^'(_j / E_j; u )_m_j [_j]_(Y ;q ∏_i=1^rτ_α_i-1(γ_i))_β_Y= ∑_( / E; q∏_i=1^r τ_α_i-1(γ_i) )_ϕ^!β_Y^'∏_j = 1^k _(_j / E_j; q)_m_j [_j] .The summations are over curve splittings in H_2(𝒴), omitting the obvious push-forwards, β_Y = ϕ^!β_Y^' + ∑_j=1^k m_j [_j]where β_Y^'∈ (Y) and m_i ∈_⩾ 0.Furthermore, the summations are finite. (<ref>) By the string equation for GW-invariants (<cit.>), if the descendent insertions involve τ_0(1), then the calculation can be reduced to one with fewer insertions, removing this τ_0(1).On the PT-invariants side, by the string equation for PT-invariants (<cit.>), if the descendent insertions involve τ_0(1), then the PT-invariant is zero.Thus we can assume γ_i ∈ H^>0(𝔛). We only need to consider cohomology classes of degrees in [2,6]. We consider the degeneration (<ref>). Since the map from Q_j to 𝔛 factors through {p_j}↪𝔛 <cit.>, the pullback of γ_i∈ H^>0(𝔛) to Q_j is zero for all i and j. Thus the degeneration formulas will simplify so that there are no desendent insertions over Q_j.For GW-invariants, we apply Theorem <ref>.First notice that it is enough to prove the corresponding statement without bars, namely without applying the universal transformation on descendent insertions.On the other hand, according to Lemmas <ref> and <ref>, we have (X)↪(). Then the degeneration formula is simplified to _ ^'( X ;u τ_α_1-1(γ_1)⋯τ_α_r-1(γ_r))_β= ∑_^'(Y / E; u ∏_i=1^rτ_α_i-1(γ_i)η_1, ⋯, η_k )_β·∏_j = 1^k (η_j) u^2 ℓ(η_j)_^'(Q_j/ E_j; uη_j^∨)_β_j.Here, the summation is over the curve splittings[Here and in (<ref>), we have omitted various push-forward symbols to simplify the notation.] in H^2(𝒳) β=β+∑β_jand cohomology weighted partitions η_1,⋯,η_k such that β· E_j=β_j· E_j=|η_j|.According to the virtual dimensions of the moduli spaces of relative stable maps (<ref>), ^Q_j_β_j+ℓ(η_j^∨)-|η_j^∨| = ^Q_j_β_j+ℓ(η_j)-|η_j|=0.Moreover, according to (<ref>), we have ℓ(η_j)=0=β_j· E_j=|η_j|1⩽ j⩽ k.Since E_j⊆ Q_j is a hyperplane section, β_j· E_j=0 implies that β_j=0.Then, according to (<ref>) and (<ref>), there is a unique β_Y∈ H_2(Y,ℤ) such that ϕ^!β_Y=β and ψ_∗β_Y=β.Thus, the degeneration formula simplifies to the desired form,using the diagram (<ref>).Similarly, for PT-invariants, we apply Theorem <ref>, obtaining_(X; q τ_α_1-1(γ_1)⋯τ_α_r-1(γ_r))_β=∑_(Y / E ;q ∏_i=1^rτ_α_i-1(γ_i)η_1, ⋯, η_k )_β·∏_j = 1^k (- 1)^|η_j| - ℓ (η_j)(η_j) q^- |η_j|_(Q_j/ E_j; q η_j^∨)_β_j.The summation is over curve splittings (<ref>) and cohomology weighted partitions η_1,⋯,η_k satisfying (<ref>). According to (<ref>), _β^X =∑_i=1^r(α_i-1+γ_i) _β^Y =∑_i=1^r(α_i-1+γ_i)+ 1/2∑_j=1^k_ℝ C_η_j.On the other hand, according to (<ref>), we haveCombined with (<ref>), we have ∑_j=1^k(_ℝ C_η_j/2+|η_j|)=0. Since all the summands are non-negative, we again have (<ref>) andβ_j=0. 
Then the equality follows from the degeneration formula.Note that the finiteness of the sums in the right-hand side of (<ref>) follows from that ϕ^!β_Y ∉() for all but finitely many β_Y, for a proof see <cit.>.(<ref>) Again by the string equations, we only need to consider insertions of positive degrees.We consider the pullback of the descendent insertion γ_i to 𝒴 via the composition𝒴 Y×Δ Y.Since the induced map E_j→ Y factors through C_j⊆ Y, the pullback to E_j is zero.We first consider GW-invariants. Again, it is enough to prove the corresponding statement without bars. Applying Theorem <ref> and Lemmas <ref> and <ref> to the degeneration (<ref>), we obtain_ ^'( Y; u τ_α_1-1(γ_1)⋯τ_α_r-1(γ_r))_β_Y=∑_^'(Y / E; u ∏_i=1^rτ_α_i-1(γ_i)η_1, ⋯, η_k )_β∏_j = 1^k (η_j) u^2 ℓ(η_j)_^'(E_j/ E_j; uη_j^∨)_β_j.The summation is over the curve splittingsβ_Y=β+∑_j=1^kβ_j,in H_2(𝒴) and cohomology weighted partitions η_1,⋯,η_k satisfying(<ref>).Note that _β_j^_j = 3 β_j · E_j = 3 |η_j|because of _j ≅ P(_^1(- 1)^⊕ 2⊕_^1) and (<ref>). Similar to the previous part, we can obtain (<ref>).Because [E_i] = c_1 (__i (1)),β_j = m_j [_j] for some m_j ∈_⩾ 0.Also, because of (<ref>), there is a unique β_Y^'∈ (Y) such that β = ϕ^! β_Y^' by (<ref>).Hence, (<ref>) is proven for GW-invariants.For PT-invariants, we apply Theorem <ref> and Lemmas <ref> and <ref> to the degeneration (<ref>),obtaining _(Y; q τ_α_1-1(γ_1)⋯τ_α_r-1(γ_r))_β_Y= ∑_(Y / E; q∏_i=1^rτ_α_i-1(γ_i)η_1, ⋯, η_k )_β·∏_j = 1^k (- 1)^|η_j| - ℓ (η_j)(η_j) q^- |η_j|_(E_j/ E_j; q η_j^∨)_β_j.The summation is over the curve splittings (<ref>) and cohomology weighted partitions η_1,⋯,η_k satisfying(<ref>). We can again deduce (<ref>). Thus, β_j takes the desired form, similar to the GW case. Then the degeneration formula reduces to the equality in the statement.Finally, we note that (<ref>) is equivalent to the curve splitting β_Y = ϕ_∗β + ∑ m_j π_j ∗ [_j] in (Y), where π_j _j → C_j. By <cit.>, the summations in the right-hand side of (<ref>) are finite.Now, we are ready to prove the main theorem. Keeping the notation of Proposition <ref> and its proof, we first prove the following correspondence (cf. Conjecture <ref>)(-q)^- _β^Y / 2(Y/ E ; q_α_1 - 1 (γ_1) ⋯_α_r - 1 (γ_r) η)_β=(- i u)^_β^Y + ℓ (η) - |η|( Y / E; u τ_α_1 - 1 (γ_1) ⋯τ_α_r - 1 (γ_r)η)_βfor all β∈Im (ϕ^!) ⊆ (Y) under the variable change - q = e^iu.Here γ_i, i=1,…,r, are pulled back to Y via the composition Y→ Y→𝔛. Under the map Y→𝔛, the curve C_j is mapped to p_j∈𝔛. Thus, the pullbacks are restricted to zero on C_j.By Proposition <ref> (<ref>), we have∑_β_Y ∈ (Y)_(Y ;q τ_α_1 - 1 (γ_1) ⋯τ_α_r - 1 (γ_r))_β v^β_Y= ( ∑_β_Y^'∈ (Y)_( / E; q τ_α_1 - 1 (γ_1) ⋯τ_α_r - 1 (γ_r))_ϕ^!β_Y^' v^β_Y^') ·∏_i = 1^k ( ∑_m_i ⩾ 0_ (_i / E_i; q)_m_i [_i] v^m_i [C_i]).We have a similar equality for GW-invariants. By abuse of notation, let (Y ∏_j=1^r τ_α_j - 1(γ_j) ) ∑_β_Y ∈ (Y)(Y; q ∏_j=1^r τ_α_j - 1(γ_j))_β_Y v^β_Y(Y)∏_i = 1^k ( ∑_m_i ⩾ 0 (_i / E_i; q)_m_i [_i] v^m_i [C_i]),and similarly for (Y ∏_j=1^r τ_α_j - 1 (γ_j)) (Y).Since _ (_i / E_i; q)_0 =_^' (_i / E_i; u)_0 = 1, the generating series (Y) and (Y) are invertible in (( q )) [[(Y) ]] and (( u )) [[(Y) ]] respectively. 
Then we may rewrite (<ref>) as (Y ∏_j=1^r τ_α_j - 1(γ_j))/ (Y) = ∑_β_Y^'∈ (Y)( / E; q ∏_j=1^r τ_α_j - 1(γ_j))_ϕ^!β_Y^' v^β_Y^',and similarly for (Y∏_j=1^r τ_α_j - 1 (γ_j)) /(Y).Since _⩾ 0 [_i] is an extremal ray of (_i) and _i does not intersect E_i, we have _ (_i / E_i; q)_m_i [_i] = _ (_i; q)_m_i [_i] _^' (_i / E_i; u)_m_i [_i] = _^' (_i; u)_m_i [_i]by applying the degeneration formula to the deformation to the normal cone _i __i_i ∪_E_i_i.The rationality result, namely Conjecture <ref>, holds for (<ref>) , since E_i is toric.According to Theorem <ref>, it follows that under the variable change q = - e^i u we have (_i / E_i; q)_m_i [_i] =(_i / E_i; u)_m_i [_i] and thus (Y) =(Y).If Conjecture <ref> holds for Y, then ( / E ; q ∏_j=1^r τ_α_j - 1(γ_j))_ϕ^!β_Y is rational. Thus, (X ; q ∏_j=1^r τ_α_j - 1(γ_j))_β is rational, according to Proposition <ref> (<ref>). We have proven (<ref>).If both Conjectures <ref> and <ref> hold for Y, then(-q)^-_β^Y/2(Y ∏_j=1^r τ_α_j - 1(γ_j) )/ (Y) = (-iu)^_β^Y(Y ∏_j=1^r τ_α_j - 1(γ_j))/ (Y),and (<ref>) follows by extracting the coefficient of v^β from both sides of the above equality.For each nonzero β∈ (X), the descendent / correspondence for X now follows from Proposition <ref> (<ref>) and taking the sum of both sides of (<ref>) over all β_Y ∈ (Y) with ψ_∗β_Y = β.§ APPLICATIONSWe apply our main result to Fano threefolds and double solids. We also raise a question (Question <ref>) about general small transitions of Calabi–Yau threefolds. §.§ Fano threefolds via small toric degenerationsWe fix notation for Fano threefolds, namely smooth projective threefolds with an ample anticanonical line bundle, as follows:* Let Q_2 denote a smooth quadric hypersurface in ^4. Let B_k (resp. V_k) denote the Fano threefold with Picard number 1, Fano index 2 (resp. 1) and anti-canonical degree (- K)^3 = 8k (resp. k).* Let V_ρ, n denote the n-th entry in the Mori–Mukai list <cit.> of Fano threefolds of Picard number ρ.Deformation families of Fano threefolds have been completely classified, see <cit.>. In Galkin's thesis (see also <cit.>), he described all conifold transitions from such Fano threefolds to toric weak Fano threefolds. There are 44 families of non-toric Fano threefolds X which admits conifold transitions X ↗ Y to toric threefolds Y: * For ρ (X) = 1, there are 4 families: Q_2, B_4, B_5, V_22.* For ρ (X) = 2, there are 16 families: V_2, n where n = 12, 17 or 19 ⩽ n ⩽ 32.* For ρ (X) = 3, there are 16 families: V_3, n where n = 7 or 10 ⩽ n ⩽ 24.* For ρ (X) = 4, there are 8 families: V_4, n where 1 ⩽ n ⩽ 8. Applying Theorems <ref>, <ref>, and <ref>, we conclude:Let X be one of the Fano threefolds in Theorem <ref>. Then the GW/PT correspondence (Conjecture <ref>) holds for X with descendent insertions (<ref>).§.§ Double solids Double covers of ^3 with at worst ordinary double point singularities, which obtained the name double solids, were studied by Clemens <cit.>. The construction of Clemens has straightforward generalizations to more general Fano threefolds (cf. <cit.>).We still call the resulting double cover a double solid.To apply Theorem <ref>, we need the following proposition, which is probably well-known. 
For lack of a suitable reference we will give sketch a proof.Suppose (Z,) is one of the following pairs: * (Z,) = (^3,_^3 (a)) for a = 2, 3 ,4;* (Z,) = (^1 ×^2, _^1(1) ⊠_^2(b)) for b = 1, 2;* (Z,) = (^1 ×^1 ×^1,_^1 (1)^⊠ 3).Let Y be the zero locus in Z ×^1 defined by a general section s ∈ H^0 (Z ×^1, ⊠_^1(2)), and X the double cover of Z branched along a smooth surface defined by a general section of H^0 (Z, ^⊗ 2). Then there is a conifold transition X ↗ Y from X to Y.First, the Y is a smooth hypersurface in a product Z ×^1 of projective spaces by Bertini’s theorem. Let x_0 and x_1 be homogeneous coordinates on ^1. For the general section s, there are sections s_ij∈ H^0 (Z, ) such that s = ∑_0 ⩽ i ⩽ j ⩽ 1 s_ij x_i x_j.Let Y →→ Z be the Stein factorization of the restriction of the projection Z ×^1 → Z to Y.Thenis a double cover of Z branched along a surface B defined by the discriminant of the qudratic equation (<ref>) in x_0 and x_1, given by s_01^2 - 4 s_00 s_11∈ H^0 (Z, ^⊗ 2). A local computation shows that B is a nodal surface and thushas only ordinary double points. By perturbing the general section of ^⊗ 2 to the discriminant of (<ref>), we get a projective degeneration of X to the double solidand hence get a conifold transition X ↗ Y.For = _^3 (4), the X is a Calabi–Yau threefold, which was studied in<cit.> (see also Example 5.8 in <cit.>). Example 1.7 in <cit.> considered the Fano threefold X associated to = _^3 (3). Table <ref> gives the corresponding Fano threefolds X in Proposition <ref>.Let X be one of the smooth double covers in Proposition <ref>. Then the GW/PT correspondence (Conjecture <ref>) holds for X with descendent insertions (<ref>).Let X ↗ Y be the conifold transition as in Proposition <ref>. If Y is Calabi–Yau, i.e., = _^3 (4), then Y satisfies the / correspondence by <cit.> since it's a smooth hypersurface in a product of projective spaces. For the others, we can also degenerate the weak Fano complete intersection threefolds Y by a similar factoring argument in the proof of <cit.>. Hence the / correspondence holds for Y. Then Corollary <ref> follows from Theorem <ref>. §.§ Concluding remark We make a comment on the ratio (<ref>) for a Calabi–Yau threefold Y. If ψ Y → is a flopping contraction, then the transformation formula of the ratio (Y) /(Y) under flops was proved by Toda (<cit.> & <cit.>) and Calabrese <cit.>, where the generating series of exceptional curves of ψ is defined by(Y) = ∑_β_Y ∈ (Y)ψ_∗β_Y = 0 (Y; q)_β_Y v^β_Y.Assumeis smoothable, and let X denote a smoothing of it. Such an extremal transition X ↗ Y is called small in <cit.>.If X ↗ Y is a conifold transition of Calabi–Yau threefolds, we have seen above (cf. <cit.>) that ψ_∗ (Y)/ (Y) =(X)by applying the variable change ψ_∗ (β_Y, n)(ψ_∗β_Y, n) to (<ref>). It is natural to ask further whether the equality holds ifhas at worst terminal singularities.Does the formula (<ref>) hold for a small transition X ↗ Y of Calabi–Yau threefolds?Acknowledgments. We would like to thank Yukinobu Toda, Chin-Lung Wang, Baosen Wu and Zijun Zhou for helpful discussions. YL is supported by grants from the Fundamental Research Funds for the Central Universities and Applied Basic Research Programs of Science and Technology Commission Foundation of Shanghai Municipality (22JC1402700). SSW is supported by the National Science and Technology Council (NSTC) under grant number 111-2115-M-001-003-MY3 and thanks Institute of Mathematics at Academia Sinica for providing support and a stimulating environment.alpha
http://arxiv.org/abs/2310.18170v1
{ "authors": [ "Yinbang Lin", "Sz-Sheng Wang" ], "categories": [ "math.AG", "Primary 14N35, Secondary 14D20" ], "primary_category": "math.AG", "published": "20231027142933", "title": "Gromov--Witten/Pandharipande--Thomas correspondence via conifold transitions" }
Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland Department of Physics, University of Geneva, 1211Geneva, Switzerland Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland Department of Physics, University of Fribourg, 1700 Fribourg, SwitzerlandThe metal-insulator transition of VO_2, which in equilibrium is associated with a structural phase transition, has been intensively studied for decades. In particular, it is challenging to disentangle the role of Mott physics from dimerization effects in the insulating phase. Femtosecond time-resolved experiments showed that optical excitations can induce a transient metallic state in the dimerized phase, which is distinct from the known equilibrium phases. In this study, we combine non-equilibrium cluster dynamical mean-field theory with realistic first principles modeling to clarify the nature of this laser-induced metallic state. We show that the doublon-holon production by laser pulses with polarization along the V-V dimers and the subsequent inter-orbital reshuffling of the photo-carriers leads to a population of orbital-mixed states and the filling of the gap. The photo-induced metal state is qualitatively similar to a hot electronic state in the dimerized structure, and does not involve a collapse of the Mott gap. Nature of the photo-induced metallic state in monoclinic VO_2 Philipp Werner January 14, 2024 =============================================================Introduction. Driving correlated electron materials out of their equilibrium state provides new perspectives on correlation phenomena and can shed light on competing or cooperative effects which are difficult to disentangle in equilibrium. Studies of the low-temperature phase of VO_2 provide an illustrative example of this general approach. VO_2 undergoes a metal-insulator transition (MIT) at T_MIT= 340 K, accompanied by a periodic lattice distortion <cit.>. While the metallic system above T_MIT exhibits a rutile structure (R), below T_MIT the dimerization of the V chain along the tetragonal c axis and a lateral zigzag-type displacement result in a monoclinic insulator (M1). The V^4+ cations have a 3d^1 configuration and are surrounded by oxygen octahedra, which splits the 3d levels into three low energy t_2g orbitals and two high energy e^σ_g orbitals. The tetragonal distortion further lifts the degeneracy of the t_2g orbitals into an a_1g orbital (d_x^2-y^2) and two e_g^π orbitals (d_xz, d_yz) <cit.>. Above T_MIT, the t_2g bands are partially filled, consistent with the metallic nature of the R phase. In the low-temperature M1 structure, the lattice distortion results in a bonding-antibonding splitting of the a_1g bands and an upward-shift of the e^π_g bands.Goodenough <cit.> proposed that this mechanism leads to a filled bonding a_1g band and a monoclinic insulator [The notations d_|| and π^* were used for a_1g and e^π_g orbitals respectively in his paper and many other works. ]. Although most first principles studies confirm this picture, the resulting band structure cannot explain the 0.6 eV insulating gap <cit.>. The insulating state was successfully reproduced by combining density functional theory (DFT) <cit.> with cluster dynamical mean field theory (cDMFT) <cit.>, suggesting a nontrivial interplay between the lattice distortions and electronic correlations in the M1 phase. 
Previous DFT+DMFT studies, however, used a wide range of interaction parameters and reached different conclusions regarding the insulating nature of the M1 phase <cit.>. Also GW calculations <cit.> and hybrid functional theory <cit.> can reproduce an insulating M1 phase.Doping, external electric fields, or strain engineering <cit.> have been used to explore the MIT in VO_2. The observation of a metallic M1 phase after an optical excitation <cit.> has motivated numerous studies on the underlying mechanism <cit.>. Two scenarios for the photo-induced transition are conceivable: a structural change from the M1 to the R phase <cit.>, or the existence a monoclinic metal (mM) state <cit.>. Disentangling the purely electronic from the lattice driven mechanism requires ultra-fast (<100 fs) time-resolved techniques <cit.>. Evidence of a transient mM phase was found by combining ultrafast electron diffraction with transmissivity measurements <cit.> and time-resolved terahertz spectroscopy <cit.>. A quasi-instantaneous gap collapse (< 50 fs) was also detected with time-resolved photoelectron spectroscopy <cit.> and extreme UV transient absorption spectroscopy <cit.>, excluding a transition controlled by the structural dynamics.In this Letter, we study the photo-induced dynamics in a realistic model of VO_2 using nonequilibrium cDMFT <cit.>. Our calculations with ab initio derived interaction parameters for the equilibrium M1 phase yield a gap size in agreement with experiments <cit.>. Using the same setup in nonequilibrium simulations, we demonstrate the existence of a photo-induced metallic state and study its dependence on the laser frequency and polarization. We show that the photo-induced charge transfer from the a_1g orbital to the initially empty e^π_g orbitals, via an orbital-mixed doublon state, plays an important role in the formation of the mM phase.Model and method. To derive a realistic model for VO_2 in the M1 structure, we start from the experimental lattice structure <cit.>, perform DFT calculations using Quantum ESPRESSO <cit.>, and downfold to the t_2g orbitals using Wannier90 <cit.>. The low-energy Hamiltonian at time t isĤ(t) = ∑_𝐑∑_ai{∑_bj∑_αβ,σ[h^aibj_αβ(𝐑,t)d_ασ^ai† d^bj_βσ+h.c.] -∑_ασμ n^ai_ασ+H_K^ai},where a, b∈{1,2} label the two V-V dimers in a given unit cell. Within each dimer, i,j ∈{1,2} are the indices for the two V atoms, α,β∈{1,2,3} label the three t_2g orbitals and σ={↑,↓} denotes spin. n is the occupation, μ the chemical potential and 𝐑 labels the unit cell. The interaction term is of the Kanamori type, H_K^ai = ∑_α U_α n^ai_α↑ n^ai_α↓ + ∑_α≠β U'_αβ n^ai_α↑ n^ai_β↓+ ∑_α<β, σ (U'_αβ-J)n^ai_ασ n^ai_βσ-J ∑_α≠β d_α↑^ai† d^ai_α↓d_β↓^ai† d^ai_β↑+J ∑_α≠β d_α↑^ai† d^ai†_α↓d^ai_β↓ d^ai_β↑. Here, U_α is the on-site Coulomb repulsion for orbital α, U^'_αβ the on-site interaction between different orbitals α and β, and J the Hund coupling. The interaction parameters are computed with the constrained random-phase approximation (cRPA) <cit.> as implemented in RESPACK <cit.>, yielding the static values U_α=2.2, 2.1 and 2.0 eV for α=d_x^2-y^2, d_xz, d_yz, respectively, and J=0.28 eV. The hopping amplitudes h^aibj_αβ(𝐑,t=0) extracted from the first principles calculation yield the bandstructure and densities of states (DOS) shown in Fig. <ref>, which reproduce the DFT results. 
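As an illustration of how the downfolded hopping amplitudes enter the calculation, the following minimal Python sketch assembles the Bloch Hamiltonian H(k) = ∑_𝐑 e^{ik·𝐑} h(𝐑) for the 12 t_2g Wannier orbitals per unit cell (4 V sites × 3 orbitals) and diagonalizes it along a k-path. It is not part of the original workflow; the dictionary of hopping matrices is assumed to have been exported from the Wannier90 construction described above, and its layout is chosen here for illustration only.

import numpy as np

def bloch_hamiltonian(hoppings, k):
    """H(k) = sum_R exp(2*pi*i k.R) h(R); k and R in reduced coordinates.

    hoppings[R] is the 12x12 matrix h^{aibj}_{alpha beta}(R, t=0) coupling the
    home cell to the cell shifted by the lattice vector R (assumed input).
    """
    norb = next(iter(hoppings.values())).shape[0]
    hk = np.zeros((norb, norb), dtype=complex)
    for R, hR in hoppings.items():
        hk += np.exp(2j * np.pi * np.dot(k, R)) * hR
    return 0.5 * (hk + hk.conj().T)  # symmetrize against rounding noise

def bands_along_path(hoppings, kpath, npts=60):
    """Eigenvalues on straight segments between the reduced k-points in kpath."""
    bands = []
    for k0, k1 in zip(kpath[:-1], kpath[1:]):
        for x in np.linspace(0.0, 1.0, npts, endpoint=False):
            k = (1.0 - x) * np.asarray(k0) + x * np.asarray(k1)
            bands.append(np.linalg.eigvalsh(bloch_hamiltonian(hoppings, k)))
    return np.array(bands)  # shape (n_k, 12): the t_2g band manifold

The same container of hopping matrices defines the equilibrium part of Eq. (<ref>); in the driven case only its time dependence through the Peierls phase, introduced next, differs.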
The effect of the laser pulse is modeled with the Peierls substitution <cit.>h^aibj_αβ(𝐑,t) =h^aibj_αβ(𝐑,t=0) e^-ie/ħϕ_aibj(𝐑,t),where the Peierls phase for the uniform electric field E⃗(t) is ϕ_aibj(𝐑,t)= - ∫_0^t dt^'E⃗(t')·(r⃗_bj-r⃗_ai+𝐑), with r⃗_ai the position of site i in dimer a. The external electric field E⃗(t) = E⃗_0·exp(-(t-t_0)^2/2τ^2)·sin(ω_0(t-t_0)) with τ = 2.6 fs (FWHM 6.2 fs) has a Gaussian envelope, peak amplitude E_0=0.5 eV, polarization direction Ê_0 and frequency ω_0. To solve the lattice problem, we employ nonequilibrium cDMFT <cit.> with a simplified self-consistency <cit.> (see Supplemental Material (SM)) and a noncrossing approximation (NCA) impurity solver <cit.>. We also analyze an individual dimer using exact diagonalization (ED). The initial temperature is T=1/15 eV. No qualitative changes are expected at lower T. Equilibrium spectrum. We first discuss the equilibrium results from cDMFT and ED. As shown in Fig. <ref>, the cDMFT spectrum has a gap of 0.5 eV and almost all spectral weight below the Fermi energy is contributed by the d_x^2-y^2 orbital. The fillings of the d_x^2-y^2, d_xz, and d_yz orbitals are 0.96, 0.04, and ∼ 0 electrons, respectively, in good agreement with previous theoretical <cit.> and experimental <cit.> results. The DOS for the d_x^2-y^2 orbital features two main peaks at -0. 45 eV and 1.86 eV, and two satellites at -2 eV and 3.5 eV. In the d_xz spectrum, only a small feature is located below the Fermi energy, with peak position at -0.35 eV, while a prominent peak with a broad high-energy tail exists at 0.65 eV. The almost empty d_yz spectrum exhibits two peaks at 0.46 eV and 1.54 eV. The spectrum of the half-filled Hubbard dimer features two energy levels split by U in the atomic limit or by the bonding-antibonding gap 2h (h is the hopping) in the non-interacting limit <cit.>. Based on the ED analysis, we identify the ground state of the realistic V-V dimer as a singlet state with two d_x^2-y^2 electrons, |ψ_GS⟩ = 0.89|s⟩+0.45|d_+ ⟩, where |s⟩=1/√(2)(|↑, ↓⟩-|↓, ↑⟩) and |d_±⟩ =1/√(2) (|∅,↑↓⟩± |↓↑,∅⟩), as in the single-orbital Hubbard dimer (SM). Projecting the dimer state of the cDMFT solution onto the singlet state of the d_x^2-y^2 orbital yields a fidelity of 0.87 in equilibrium. The peaks of the d_x^2-y^2 spectral function below (above) the Fermi energy correspond to the removal (addition) of an electron from (to) the dimer. In each case, a satellite is split off from the main peak by ∼ 2h_d_x^2-y^2≈1.5 eV (bonding-antibonding splitting, see SM). The main peaks of the d_xz orbital in the ED model arelocated at -0.26 eV (small spectral weight due to the low filling of the orbital) and at 1.05 eV and 1.12 eV. The gap size corresponds to the inter-orbital same-spin interaction U-3J≈ 1.3 eV, as one would expect for an atomic 3-orbital system with filling n=1. The peaks of the empty d_yz orbital are located at 1.21 eV and 1.66 eV, consistent with the bonding-antibonding splitting 2h_d_yz≈0.54 eV. So, in contrast to Ref. biermann2005, the lowest peak above the Fermi energy is associated with the e^π_g orbitals, instead of the antibonding a_1g orbital, and represents the addition of an electron to the d_xz orbital.This analysis and the comparison between the cDMFT and ED spectra allows us to conclude that the ground state of VO_2 in the M1 phase is dominated by singlet states of the d_x^2-y^2 orbital. 
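The 0.89/0.45 decomposition of |ψ_GS⟩ quoted above can be verified with a few lines of code. In the half-filled singlet sector of a single-orbital Hubbard dimer, the Hamiltonian reduces in the basis {|s⟩, |d_+⟩} to a 2×2 matrix with the doublon energy U on the diagonal and a hopping-induced mixing of magnitude 2h (the sign of the off-diagonal element depends on the phase convention for |d_+⟩ and does not affect the weights). The sketch below uses U = 2.2 eV and the intra-dimer a_1g hopping h ≈ 0.75 eV; it illustrates this reduced problem and is not the full multi-orbital ED used for the quoted numbers.

import numpy as np

U, h = 2.2, 0.75   # eV: intra-orbital repulsion and intra-dimer hopping of the a_1g orbital

# Half-filled singlet sector in the basis {|s>, |d_+>}:
# the covalent singlet |s> has energy 0, the symmetric doublon state |d_+> costs U,
# and the two are mixed by the hopping with amplitude 2h.
H = np.array([[0.0,      -2.0 * h],
              [-2.0 * h,  U      ]])

E, V = np.linalg.eigh(H)
gs = V[:, 0]                  # eigh returns ascending eigenvalues; column 0 is the ground state
print(E[0])                   # (U - sqrt(U^2 + 16 h^2))/2 ~ -0.76 eV
print(np.abs(gs))             # |<s|psi_GS>| ~ 0.89, |<d_+|psi_GS>| ~ 0.45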
The gap in the M1 phase represents a multi-orbital Mott insulating state assisted by the dimerization, in good agreement with experimental results <cit.> and the analysis in Refs. lazarovits2010,brito2016. Photo-excited system. We next search for the pump frequency ω_0 and polarization Ê_0=E⃗_0/|E_0| which yields the maximum energy absorption and study the features of the excited states using both cDMFT and ED calculations. For this purpose, we tune the polarization angle θ (the angle between Ê_0 and the dimerization axis c_R, see inset of Fig. <ref>(c)) and the frequency ω_0 of the laser pulse. As shown in Fig. <ref>(b), both in the cDMFT and ED simulations, θ=0^∘ maximizes the absorption. With this polarization fixed, the cDMFT simulations predict the strongest energy absorption for pulse frequency ω_0=3.0 eV, see Fig. <ref>(a). In the ED analysis, the main absorption peak at ω_0=2.8 eV corresponds to excitations from the ground state |ψ_GS⟩ (singlet state of the d_x^2-y^2 electrons) to a doublon state |ψ⟩ = 0.65|d_-⟩_xz+0.76|d_-⟩_x^2-y^2 with mixed orbital character (henceforth referred to as orbital-mixed doublon state). This orbital mixture of the photo-doped state is a consequence of the pair hopping term in H_K, without which the peak would correspond to the doublon state excitation of a single-orbital Hubbard dimer. In the cDMFT calculations, oscillations in the site occupations and double occupations indicate that the pulse, with maximum at t=t_0=6.6 fs (red vertical line), drives the d_x^2-y^2 electrons between the two dimer sites, see Fig. <ref>(a)(c). Since the dimer here is embedded into a lattice environment (mimicked by the cDMFT bath), the injected energy can be converted into various excitations. We observe a rearrangement of charge between the different orbitals and, associated with this, a reduction of the double occupation in the d_x^2-y^2 orbital. In particular, as shown in Figure <ref>(a), there is a significant flow of charge from the d_x^2-y^2 to the d_xz orbitals, which starts before the end of the pulse at ∼13 fs (grey vertical line) and persists up to the longest simulation time. The average doublon density decreases, even during the pulse, because of this flow of charge out of the d_x^2-y^2 orbitals. In Fig. <ref>(b), we plot for each orbital α the occupation of the bonding {1/√(2)(|∅,↑⟩-|↑,∅⟩)_α, 1/√(2)(|∅,↓⟩-|↓,∅⟩)_α} and antibonding {1/√(2)(|∅,↑⟩+|↑,∅⟩)_α, 1/√(2)(|∅,↓⟩+|↓,∅⟩)_α} states. Most electrons (87%) are initially in the bonding d_x^2-y^2 orbital. The laser pulse excites the electrons mostly to the antibonding d_x^2-y^2 and the d_xz orbitals. After the end of the pulse, the electrons in both the bonding and antibonding d_x^2-y^2 states flow to the d_xz orbital, via (strong) pair-hopping and (weak) inter-orbital hopping. For the following analysis, we define the spin singlet states {1/√(2)(|↑,↓⟩-|↓,↑⟩)_α, |∅,↑↓⟩_α, |↑↓,∅⟩_α} and triplet states {|↑,↑⟩_α, 1/√(2)(|↑,↓⟩+|↓,↑⟩)_α, |↓,↓⟩_α} for each of the three orbitals. As shown in Fig. <ref>(d), in the equilibrium M1 phase, the sector with n=2 electrons, which contains the ground state, has fidelity >0.9. The pulse populates mainly states within the n=2 sector, but also creates states with n=1 and 3 through charge excitations between the dimers. This charge reshuffling is indicated by the colored shading. After the end of the pulse (t≳ 13 fs), the fidelity of the n=2 sector slowly increases, and in the absence of energy dissipation (e.g.
to phonons) will finally converge to the value corresponding to the thermalized electronic system (T∼ 2240 K, calculated from the total energy), which is represented by the color bar on the right. This thermalization takes several hundred fs. Within the two-electron sector, the fidelity of the d_x^2-y^2 singlet decreases rapidly during and after the pulse, while the triplet population in the d_x^2-y^2 and d_xz orbitals increases only slightly. In fact, during and after the excitation, an orbital-mixed state, with one electron in the d_x^2-y^2 and the other in the d_xz orbital, emerges as the most probable state (black line and Fig. <ref>(e)). We present the analogous results obtained with ED in the SM. In the ED analysis, the single dimer is isolated and thus the doublons cannot hop to or exchange energy with other sites, which leads to long-lived oscillations. An important finding is that the population of the orbital-mixed states generates spectral weight in the gap region and is responsible for the almost instantaneous partial gap filling seen in Fig. <ref>(c). Our calculations demonstrate how the metallic phase emerging in the photo-doped regime is a consequence of this charge reshuffling between d_x^2-y^2 and d_xz orbitals. We also note that the nonequilibrium spectrum after the pulse is similar to a thermal spectrum corresponding to a high electronic temperature in the M1 structure (β≈ 5eV^-1↔ T≈ 2200 K), although it has a higher in-gap population. We finally present a detailed analysis of the time-dependent cDMFT spectra, obtained for the optimal absorption frequency (ω_0=3.0 eV) and polarization (θ=0^∘). The orbital-resolved results are shown in Fig. <ref>, with the left (right) row plotting the total spectral functions A^ret (occupations A^<) obtained from the retarded and lesser components of the Keldysh Green's function by the Wigner transformation <cit.>. The photo-excited population exhibits additional peaks above the Fermi energy. In the d_x^2-y^2 occupations, we observe two short-lived peaks at 2.45 eV and 0.95 eV and in the d_xz occupation a peak around 1.6 eV. These transient peaks are visible only around the pump maximum t≈ t_0, indicating a rapid decay of the photo-generated states. The vertical lines mark the poles from the ED analysis of the excited state (SM, Fig. 9(e)), whichare in good agreement withthe cDMFT data. The ED results indicate that the transient peaks in A^< correspond to the removal of an electron from the orbital-mixed doublon state, which results in a bonding or anti-bonding state of the d_x^2-y^2 (two red lines in panel (b)) or d_xz (two blue lines in (d)) orbital. The orbital-mixed doublon state is short lived, and decays into orbital-mixed singly occupied states (Fig. <ref>(e)). Hence, duringthe photo-doping pulse, orbital-mixed doublons are created, which rapidly decay by transferring an electron to the neighboring unoccupied d_x^2-y^2 or d_xz orbital within the dimer. This dynamics has been observed in experiments as an instantaneous charge transfer effect <cit.>. Since the resulting orbital-mixed singly occupied states yield spectral weight in the gap (SM Fig. 9 right column) we obtain a transient metallic state in the M1 structure.Conclusion. Ourab initio nonequilibrium cDMFT simulations of VO_2 clarify the photoinduced charge dynamics after photo-excitation with a ω_0=3 eV laser in the M1 phase. 
The optical excitation induces a quasi-instantaneous gap filling by transiently populating a specific orbital-mixed doublon state, which rapidly decays into orbital-mixed singly occupied states, followed by a slow (hundreds of fs) thermalization to a high-T metallic state. On this longer timescale, the lattice is expected to respond and a reliable simulation would require the introduction of phonon degrees of freedom. For the short-time dynamics, our study unambiguously shows that multi-orbital Mott and Hund physics play a key role in the formation of the insulating M1 phase in equilibrium, and in driving electrons into the mM phase after an optical excitation, extending the previously proposed hole-driving mechanism <cit.> from the band picture <cit.> to the strongly correlated context appropriate for VO_2. The orbital-mixed states generate spectral weight in the gap, while the Mott related features (Hubbard bands) persist. The transient mM phase thus shows a gap filling, but not a gap collapse, similar to what is observed at high electronic temperature. Acknowledgments. This work was supported by the Swiss National Science Foundation via the German Research Unit QUAST (J.C.) and NCCR Marvel (V.C.). The calculations were run on the beo05 cluster at the University of Fribourg, using a code based on NESSi <cit.>. Nature of the photo-induced metallic state in monoclinic VO_2 Supplemental Material§ NONEQUILIBRIUM CDMFT To study the interacting system with electric field excitation, we use the nonequilibrium generalization of cDMFT <cit.> with the non-crossing approximation (NCA) <cit.> as cluster impurity solver. Given the complexity of the system, with four V atoms (two dimers) in one unit cell and three t_2g orbitals per V atom, we use a simplified self-consistency which is adequate for strongly correlated systems. This approach is based on a Bethe-lattice-inspired real-space construction of the impurity hybridization function. It loses the information on the details of the energy dispersion, but is very economical in terms of memory requirement, which is helpful for nonequilibrium applications <cit.>.The time-dependent hybridization function for impurity cluster i can be written asΔ̂_i(t,t')=j≠ i∑ĥ_ij(t)Ĝ_j^[i](t,t')ĥ_ji(t'),where ĥ_ij(t) is the time-dependent hopping matrix between the clusters i and j, and Ĝ_j^[i] is the cavity Green's function for the lattice with the i-th cluster removed <cit.>. The internal indices (site, orbital, spin) are not explicitly shown, but taken into account by the matrix structure. By approximating the cavity Green's function Ĝ_j^[i] with the (cluster) Green's functionĜ_j – which is only exact in a system with infinite coordination number – one obtains a self-consistency condition relating the hybridization function directly to the (cluster) Green's function <cit.>, similar to the case of the infinitely connected Bethe lattice <cit.>. We choose impurity clusters, rather than single impurity atoms, in order to capture the strong nonlocal correlations within the V-V dimers.The unit cell of M1 VO_2 contains two V-V dimers a and b, and for each dimer, we define a four-orbital cluster containing the d_x^2-y^2 and d_xz orbitals (impurity cluster 1), and a two-orbital cluster containing the d_yz orbitals (impurity cluster 2). These clusters allow us to treat the strong intra-dimer hopping within the impurity model. 
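To make the simplified self-consistency concrete, the following schematic Python sketch implements the update Δ̂_i(t,t') = ∑_{j≠i} ĥ_ij(t) Ĝ_j(t,t') ĥ_ji(t') on a discretized two-time grid, with the cavity Green's function already replaced by the cluster Green's function as described above. It is an illustration only: the production code works with the full set of Keldysh-contour components (Matsubara, mixed, retarded, lesser) as handled by the NESSi library, and the container layout used here is an assumption made for the example.

import numpy as np

def update_hybridization(h, G, i, nt, n_i):
    """Delta_i(t,t') = sum_{j != i} h_ij(t) G_j(t,t') h_ji(t').

    h[(i, j)] : array (nt, n_i, n_j), time-dependent inter-cluster hopping h_ij(t)
    G[j]      : array (nt, nt, n_j, n_j), cluster Green's function G_j(t,t')
    """
    Delta = np.zeros((nt, nt, n_i, n_i), dtype=complex)
    for j, Gj in G.items():
        if j == i or (i, j) not in h or (j, i) not in h:
            continue
        hij, hji = h[(i, j)], h[(j, i)]
        for t in range(nt):
            for tp in range(nt):
                Delta[t, tp] += hij[t] @ Gj[t, tp] @ hji[tp]
    return Delta

# One iteration of the loop (schematic):
#   1. Delta_i <- update_hybridization(h, G, i, nt, n_i) for every cluster i
#   2. solve each cluster impurity problem with NCA using Delta_i, yielding new G_i
#   3. repeat 1-2 until the cluster Green's functions are converged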
We retain the Slater-Kanamori type interaction within the four-orbital clusters, and the local Hubbard interactions within the two-orbital clusters, whereas the interactions between the d_x^2-y^2,xz and d_yz orbitals on a given site are treated at the Hartree level. Previous experimental and theoretical studies have shown that the d_yz orbital is less relevant for the optical excitations <cit.> and also our ED analysis suggests that the d_yz orbital is less relevant because of the higher local energy.In the cDMFT simulations, we hence treat four impurity clusters, with indices a1, a2, b1, b2 and 4, 2, 4, 2 orbitals, respectively. Associated with each cluster impurity model is a 4× 4 (or 2× 2) contour hybridization function Δ̂(t,t') constructed from the DFT-derived hopping amplitudes and the 4× 4 (or 2× 2) contour Green's functions Ĝ(t,t'). Once all the cluster Green's functions have been obtained by the NCA solver, the hybridization function can be updated using Eq. (<ref>) and used as input for the next iteration. Since there are on average two electrons in one V-V dimer, we only keep the Hilbert space sectors with up to three electrons in the NCA solver. This implementation is appropriate for the M1 phase of VO_2, where the intra-cluster hopping is larger than the inter-cluster hoppings. The Green's function defines the time-dependent density matrix as ρ̂(t) = iĜ^les(t,t). For any local operator Ô, we can then calculate the corresponding expectation value O(t)=Tr(ρ̂(t)Ô). In particular, for a state |ψ⟩, we can define the density matrix (projection operator) ρ̂_ψ=|ψ⟩⟨ψ| and measure the fidelity F(t)=Tr( ρ̂(t) ρ̂_ψ). The electron number operator for a given orbital α is n̂_α=ĉ^†_αĉ_α, so that n_α(t)=Tr(ρ̂(t)n̂_α ).§ EXACT DIAGONALIZATION ANALYSIS OF THE V-V DIMERTo analyze the response of the system to optical excitations, we use exact diagonalization (ED) to solve a time-dependent Kanamori-Hubbard dimer, with HamiltonianH(t) =H_tb(t)+∑_iH_K_i-μ∑_il σ n_i l σ,where the hopping Hamiltonian H_tb(t=0) is determined by the DFT calculation,H_tb(t=0) =∑_ij∑_lm,σ[h^ij_lm d_i l σ^† d_j m σ+h.c.],and the laser pulse is incorporated through the time-dependent Peierls phase ϕ_ij(t) = A⃗(t)·r⃗_ijof the hopping parameters: h^ij_lm(t) = h^ij_lme^-ie/ħϕ_ij(t). The Kanamori interaction on site i is H_K_i = ∑_l U_l n_il ↑ n_il ↓ + ∑_l ≠ m U'_lm n_il ↑ n_im ↓+ ∑_l<m, σ (U'_lm-J)n_il σ n_im σ-J ∑_ l ≠ m d_il ↑^† d_il ↓d_im ↓^† d_im ↑+J ∑_l ≠ m d_il ↑^† d^†_il ↓d_im ↓ d_im ↑.Here, i,j ∈{1,2} are the indices of the two V atoms in the dimer, σ∈{↑,↓} denotes the electron spin, and l,m∈{1,2,3} are the indices of the three t_2g orbitals. The interactions U, U' and the Hund's coupling J are the cRPA values mentioned in the main text. The form of the electric field pulse E⃗(t), which determines the Peierls factor via E⃗(t)=-∂_t A⃗(t), is the same as defined in the main text. Figure <ref> presents simulation results similar to those in Fig. 4(a,c) of the main text. Panel (a) shows thatthe pulse triggers charge oscillation between the two dimer sites and transfers electrons from the d_x^2-y^2 to the d_xz orbital. In panel (b), which shows the evolution of the double occupation, direct evidence for the creation of amixed-orbital doublon state is presented. In these panels, the vertical red and gray lines indicate the maximum and end of the pulse, respectively. Since in the ED analysis, the single dimer is isolated, the energy spectrum is discrete and the time evolution is periodic, without damping. In Fig. 
<ref> we show the equilibrium ED spectra for rescaled hopping amplitudes (rescaling factor c) and in Fig. <ref> those for rescaled interactions (rescaling factor a), i.e., for the Hamiltonian H = c H_tb + a ∑_i H_K_i - μ∑_ilσ n_ilσ. Figure <ref>(a) with c=0 corresponds to the atomic limit. Here, to better visualize the poles p with weight A_p obtained from the Lehmann representation, we plot A(ω)=∑_p η A_p/[(ω-ω_p)^2+η^2], with broadening η=0.04. The upper Hubbard band splits into three subpeaks, corresponding to the creation of different doublon states. As explained for example in Ref. de_medici_janus-faced_2011, in the atomic limit, the three-orbital Kanamori interaction leads to gaps with size U-3J, U-J, U+2J, corresponding to 1.2, 1.7, 2.6 eV for our parameter set. As the intradimer hopping is turned on (c>0), the lower Hubbard band is split into bonding-antibonding peaks, see panels (b)-(f). Figure <ref>(f) with c=1 shows the ED spectra for the actual model parameters. The gap around the Fermi energy is formed by peaks of d_x^2-y^2 and d_xz character and corresponds to the inter-orbital same-spin Hubbard interaction U-3J. Figure <ref>(a) with a=0 shows the ED spectrum in the non-interacting limit. The bonding-antibonding peaks of the d_x^2-y^2 orbitals split up by 2h_x^2-y^2≈ 1.5 eV, while the d_yz peaks at -0.03 and 0.51 eV are split by 2h_yz≈ 0.54 eV and the splitting of the d_xz peaks is very small (2h_xz≈ 0.07 eV). The d_xz peaks and the bonding state of the d_yz orbital are close to the Fermi energy. As the interactions are turned on (a>0), the d_xz and d_yz orbitals are pushed up, see panels (b)-(f), and satellite peaks are created around -2 eV (d_x^2-y^2 orbital) in the occupied part of the spectrum, and also about 2 eV above the unoccupied antibonding peaks (d_x^2-y^2, d_xz, d_yz orbitals). Combining the information from Figs. <ref> and <ref>, we conclude that both interaction effects and bonding-antibonding splittings play important roles in shaping the electronic structure of the realistic system, while the gap size is determined by the inter-orbital same-spin Hubbard interaction U-3J. Among all states in the Hilbert space of a three-orbital Kanamori-Hubbard dimer, we are interested in the sector with n=2 electrons. The ground state energy and wave function of a Hubbard dimer with interaction U and hopping h are (U-√(U^2+16h^2))/2 and |ψ_GS⟩ = λ/√(1+λ^2)|s⟩+1/√(1+λ^2)|d⟩, with λ = 4h/(-U+√(U^2+16h^2)) and |s⟩ and |d⟩ defined as in the main text. For our parameter set U=2.2 and h=h_x^2-y^2 = 0.75, |ψ_GS⟩ = 0.89|s⟩+0.45|d⟩. In the realistic system with crystal field splittings, the energy difference between the ground state and the orbital-mixed doublon state with energy U-J+Δ E_loc is roughly U-J+Δ E_loc-(U-√(U^2+16h^2))/2 ≈ 2.8 eV, where Δ E_loc=0.08 eV is the energy level splitting between the d_x^2-y^2 and d_xz orbitals. § NONEQUILIBRIUM SPECTRA In Fig. <ref>, we present the ED spectra for the ground state, the excited state after the pulse, and the orbital-mixed singly occupied state. As discussed before, in Fig. <ref>(a,d,g), the ground state has a gap between the d_xz and d_x^2-y^2 orbitals, which is determined by the inter-orbital interaction. The peaks below the Fermi energy are associated with the creation of single-electron d_x^2-y^2 bonding states (main peak) and anti-bonding states (satellite generated by the interactions). In panels (b,e,h), we present the spectra for the excited state after the laser excitation.
This state is a superposition of the ground state (∼87% weight) and the orbital-mixed doublon state (∼12% weight). In panel (e), we observe two peaks in the d_x^2-y^2 occupation at 2.45 eV and 0.95 eV, and a peak in the d_xz occupation around 1.6 eV, which correspond to transitions from the doublon state to a singly occupied state. As discussed in the main text, these peaks are also captured by the cDMFT simulation, but only as transient features when t≈ t_0, indicating a very short lifetime of the orbital-mixed doublon state. In panels (c,f,i), we present the ED spectra of the orbital-mixed singly occupied states. The relevant observation here is the presence of in-gap peaks contributed by both the d_xz and d_x^2-y^2 orbitals.
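As a quick consistency check of the dimer numbers quoted in the previous section (E_GS = (U-√(U^2+16h^2))/2 and |ψ_GS⟩ ≈ 0.89|s⟩+0.45|d⟩ for U = 2.2 eV and h = 0.75 eV), the short script below diagonalizes the 2×2 singlet block spanned by {|s⟩, |d⟩}. The sign of the off-diagonal element depends on the phase convention chosen for |d⟩, so only the absolute values of the amplitudes are compared.

```python
import numpy as np

U, h = 2.2, 0.75                       # eV, values quoted in the text
H2 = np.array([[0.0, 2 * h],           # {|s>, |d>} basis; off-diagonal sign is convention-dependent
               [2 * h, U]])
E, V = np.linalg.eigh(H2)
E_gs, v_gs = E[0], np.abs(V[:, 0])      # absolute values remove the convention-dependent sign
print(E_gs)                             # ≈ -0.76 eV
print((U - np.sqrt(U**2 + 16 * h**2)) / 2)   # same number from the closed-form expression
print(v_gs)                             # ≈ [0.89, 0.45], the |s> and |d> weights quoted above
lam = 4 * h / (-U + np.sqrt(U**2 + 16 * h**2))
print(lam / np.sqrt(1 + lam**2), 1 / np.sqrt(1 + lam**2))   # same weights from lambda
```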
http://arxiv.org/abs/2310.18195v1
{ "authors": [ "Jiyu Chen", "Francesco Petocchi", "Viktor Christiansson", "Philipp Werner" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20231027150853", "title": "Nature of the photo-induced metallic state in monoclinic VO$_2$" }
Department of Astronomy, School of Physics, Peking University, Beijing, 100871, People's Republic of ChinaKavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of [email protected] National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Department of Astronomical Science, SOKENDAI (The Graduate University for Advanced Studies), 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Department of Physics, National Sun Yat-Sen University, No. 70, Lien-Hai Road, Kaohsiung City 80424, Taiwan, R.O.C. Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Road, Shanghai 200030, P. R. China Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, 85748 Garching, Germany INAF Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Florence, Italy Boston University Astronomy Department, 725 Commonwealth Avenue, Boston, MA 02215, USA East Asian Observatory, 660 N. A'ohōkū Place, University Park, Hilo, HI 96720, US Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan, Republic of China Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA Departament de Física Quàntica i Astrofísica (FQA), Universitat de Barcelona (UB), c. Martí i Franquès, 1, 08028 Barcelona, Spain Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona (UB), c. Martí i Franquès, 1, 08028 Barcelona, Spain Institut d’Estudis Espacials de Catalunya (IEEC), c. Gran Capità, 2-4, 08034 Barcelona, Spain National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China Computational Astronomy Group, Zhejiang Lab, Hangzhou, Zhejiang 311121, China University of Chinese Academy of Sciences, Beijing 100049, China SOFIA Science Center, NASA Ames Research Center, Moffett Field, CA 94 045, USA Green Bank Observatory, PO Box 2, Green Bank, WV 24 944, USA Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4235, USA Department of Space, Earth & Environment, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden Traditionally, supersonic turbulence is considered to be one of the most likely mechanisms to slow down the gravitational collapse in dense clumps, thereby enabling the formation of massive stars. However, several recent studies have raised differing points of view based on observations carried out with sufficiently high spatial and spectral resolution. These studies call for a re-evaluation of the role turbulence plays in massive star-forming regions. Our aim is to study the gas properties, especially the turbulence, in a sample of massive star-forming regions with sufficient spatial and spectral resolution, which can both resolve the core fragmentation and the thermal line width.Weobserved NH_3 metastable lines with the Very Large Array (VLA) to assess the intrinsic turbulence. Analysis of the turbulence distribution histogram for 32 identified 3 cores reveals the presence of three distinct components. Furthermore, our results suggest that (1) sub- and transonic turbulence is a prevalent (21 of 32) feature of massive star-forming regions and those cold regions are at early evolutionary stage. 
This investigation indicates that turbulence alone is insufficient to provide the necessary internal pressure required for massive star formation, necessitating further exploration of alternative candidates;and (2) studies of seven multi-core systems indicate that the cores within each system mainly share similar gas properties and masses. However, two of the systems are characterized by the presence of exceptionally cold and dense cores that are situated at the spatial center of each system. Our findings support the hub-filament model as an explanation for this observed distribution.Subsonic turbulence is ubiquitously found in high-mass star formation Wang et al. The role of turbulence in high-mass star formation: Subsonic and transonic turbulence are ubiquitously found at early stages Chao Wang 1,2, Ke Wang2corresponding author email: [email protected], Feng-Wei Xu 2,1, Patricio Sanhueza 3,4,Hauyu Baobab Liu 5, Qizhou Zhang 12, Xing Lu 6, F. Fontani 8, Paola Caselli 7,Gemma Busquet 13,14,15,Jonathan C. Tan 22,23, Di Li 17,18,19, J. M. Jackson 20,21, Thushara Pillai 9, Paul T. P. Ho10,11, Andrés E. Guzmán 16, Nannan Yue 2 Received xx xx, 2023; accepted xx xx, 2023 =================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Massive stars (M_* >8) play a major role in the energy budget of galaxies via their radiation, winds, and supernova events, yet the picture of their formation remains unclear. Specifically, assuming a conversion efficiency from the core to the star is at 50%, a massive star of 10 M_⊙ would require a core of at least 20 M_⊙ <cit.>. However, under typical conditions (e.g., sound speed of c_s=0.2and gas number density n_H=10^5 cm^-3), the Jeans mass is less than 1 M_⊙ <cit.>. Therefore, it is unclear how a core with more than 100 Jeans masses would survive fragmentation, rather than giving rise to hundreds of low-mass cores.To address this, <cit.> proposed the turbulent core accretion model (hereafter, TCA), where highly turbulent gas provides additional support against gravitational collapse. As a consequence, the equivalent “turbulent Jeans mass” enables the formation of massive stars <cit.>. This theoretical model has been supported by following observations <cit.>. However, several studies have revealed thermal fragmented cores, which are consistent with the Jeans length in high-mass star-forming regions <cit.>. Similarly,<cit.> and <cit.> found that fragmentation in massive protostellar cores is dominated by thermal Jeans fragmentation rather than turbulence levels, which is further proven in <cit.>. This fact aligns with the competitive accretion model <cit.>. This model posits that molecular clouds thermally collapse into Jeans cores. Then, cores that are located at the center of the gravitational potential can grow into high-mass stars through the gas inflow. In this model, supersonic turbulence is not needed.Since dense cores with such high mass are typically located at a few kiloparsecs, interferometers are necessary to reveal their gas properties in detail. Recent studies have used NH_3 to study samples of the massive star-forming regions and discovered that supersonic turbulence is universal in regions of massive star formation <cit.>. 
However, it is important to note that <cit.> have found that the turbulence became transonic on the scale of cores (at 0.1 pc) with NH_3 as well N_2H^+.The change of the supersonic to transonic turbulence from those observations suggests that the former consequence may have resulted from limited resolution. With sufficient resolving power, the intrinsic turbulence may be revealed as transonic or even subsonic. To achieve this sufficient resolution, we require: (1) high spectral resolution capable of resolving thermal line widths (typically 0.2 at 15 K); (2) high spatial resolution that can resolve dense cores (0.1 pc or 4” at 5 kpc); and (3) high mass sensitivity that can resolve thermal Jeans mass (typically 1 M_⊙).Previous VLA observations in NH_3 <cit.> used a correlator setup with a channel width of 0.6-0.7 , which was much larger than the thermal width, making it impossible to detect the potential subsonic turbulence. In contrast, <cit.> combined VLA-C configuration 3 image cubes at 0.2 resolution with VLA-D configuration data at 0.6resolution to study clump P1 in the infrared dark cloud (IRDC) G28.34+0.06. They found that all dense cores coincided with a reduced line width compared to the general clump, indicating dissipation of turbulence from the clump to the core scales <cit.>.In addition, <cit.> have found the existence of the subsonic turbulence with the VLA observation under about0.3 in the Orion molecular cloud. Later, <cit.> have found a trend of turbulence dissipation, where the turbulence changed from transonic to subsonic as the spatial resolution increased from 10^4 au to 10^3 au. In the study of IRDC G035.39-00.33, <cit.> found that more than a third of all the pixels, which coincide with star-forming dense cores, show subsonic non-thermal motions.Similarly, <cit.> have found the turbulence of the filaments and cores in NGC6334S are sub- or transonic. They credited this result to the high spatial and spectral resolutions (0.02 pc and 0.2) of the observations by VLA and ALMA.These studies all suggest that the turbulence is resolution-dependent, highlighting the possibility of finding an intrinsic subsonic turbulence. To reveal the turbulence properties in massive star-forming regions, observation with sufficient spatial and spectral resolution are needed. In addition, the targets should be at the early evolutionary stage (e.g., IRDCs) so that the intrinsic turbulence can avoid being affected by star-formation activities. As for evolved sources at the same resolution, our pilot work in the case of G35.20 <cit.> has shown that Mach number decreases from ∼ 6 on the scale of 0.1 pc to ∼ 2 towards the scale of 0.01 pc. If the intrinsic turbulence is commonly found to be subsonic in massive star-forming regions, the role of the turbulenceshould be re-evaluated. In TCA, other internal pressure candidates such as magnetic fields could play a more important role in high-mass stars, which depends both on the scale and the evolutionary stage under the overestimated velocity dispersion <cit.>. Otherwise, other theories may explain the formation of high-mass stars better. For example, the competitive accretion model or the global hierarchical collapse model <cit.>. GHC has similar performance in smaller cores with a few solar masses which can replace the role of the magnetic field. 
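As a back-of-the-envelope check of the Jeans argument quoted in the introduction (c_s = 0.2 km s^-1, n ≈ 10^5 cm^-3), the sketch below evaluates a Bonnor-Ebert-like Jeans mass, M_J ≈ 1.18 c_s^3/(G^{3/2} ρ^{1/2}). The mean molecular weight and the order-unity prefactor depend on the adopted convention, so the result should be read only as an order-of-magnitude value of ≲1 M_⊙, consistent with the statement above.

```python
import numpy as np

G = 6.674e-8          # cm^3 g^-1 s^-2
M_H = 1.6735575e-24   # g
M_SUN = 1.989e33      # g

def jeans_mass(c_s_kms, n_cm3, mu=2.8):
    """Bonnor-Ebert-like Jeans mass, M_J ~ 1.18 c_s^3 / (G^{3/2} rho^{1/2}).

    mu=2.8 assumes n is the H2 number density; the 1.18 prefactor is one common
    convention and varies between definitions by factors of order unity.
    """
    c_s = c_s_kms * 1e5            # km/s -> cm/s
    rho = mu * M_H * n_cm3         # g cm^-3
    return 1.18 * c_s**3 / (G**1.5 * np.sqrt(rho)) / M_SUN

# ~0.4 M_sun: far below the >20 M_sun core needed to form a 10 M_sun star
print(jeans_mass(0.2, 1e5))
```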
Besides, <cit.> and <cit.> also proposed different models that better explain the observed phenomena.To investigate the turbulence properties, we selected 13 massive star-forming regions based on the following criteria: (1) the distance of the selected regions should be less than about 5 kpc, which allows the core identification at about 0.05 pc; (2) the hydrogen column density of the selected regions should be higher than 10^22 cm^-2 and the mass should be larger than 5×10^3 M_⊙, allowing the formation of the massive stars (similar as the criteria in <cit.>); and (3) the evolutionary stage of the regions should be earlier than the evolved IRDC (e.g., G35.20), which avoids the influence of the feedback from stellar activities. We selected 13 regions mainly from the APEX telescope large area survey of the galaxy (ATLASGAL; ). We also add some typical IRDCs out of the range of ATLASGAL as supplementary. Besides, we chose several well-studiedfilaments from the Galactic Cold Filaments <cit.> to study the potential effect of the spatial distribution to the star-forming regions. The basic parameters of the selected sources are presented in Table <ref> of the corresponding cited papers. Figure <ref> shows the infrared environment of the sample with the Spitzer <cit.> infrared three-color images (blue: 3.6 μm; green: 4.5 μm; red: 8 μm) as the background and the white contours from ATLASGAL <cit.> 870 μm continuum emission.In this work, we present the results of VLA observations towards 13 massive star-forming regions and extract 32 NH_3 cores in total. Based on gas properties of those cores, we study their evolution and dynamics.The paper is structured as follows.In Section <ref> we present our observations and the data reduction. Section <ref> introduces the identification, the fitting and the method. Section <ref> studies the parameters of the cores fitted from the NH_3 lines and the implications of the turbulence of star formation based on the Mach number. Section <ref> discusses the separation and the selection effect. In Section <ref>, we summarize the main conclusions. § OBSERVATIONS§.§ Sample description As shown in Table <ref>, all clumps which are selected from Galactic Cold Filaments, ATLASGAL <cit.> and other typical IRDCs are massive enough with the order of magnitude at 10^4 M_⊙. As seen from the three-color images of the regions presented in Figure <ref>, based on previous observations by Spitzer <cit.> [https://irsa.ipac.caltech.edu/irsaviewer/], most clumps are IR-dark with a few IR-bright regions. Within the sample, G11.11, CFG47, CFG49, CFG64, and IRDC28.23 <cit.> are representative filaments <cit.>. G11.11 (also named “Snake”) is a well-studied S-shaped IRDC <cit.> with several ongoing star-forming regions (e.g., masers, outflows) and CFG49 is near an HII region <cit.>. G14.99, G15.07 and G15.19 are compact sources from ATLASGAL <cit.>. But G15.19 is presented as a supplementary in appendix due to its much larger distance compared to other sources. The last two have weak IR emission but G14.99 has a maser and a possible young stellar object (YSO) <cit.>. G14.2 is another cloud with filamentary structure. Its cold dense cores <cit.> and the strong magnetic field <cit.> may indicate the existence of a possible subsonic turbulence. Other sources in the sample have at least one identified YSO. G48.65 is a cold IRDC with several YSOs in very early stages <cit.>. 
G79.3 is an IRDC in Cygnus-X with at least five YSOs and still forms protostars <cit.>. G111-P8 is an active star-forming clump with a maser <cit.>. IRAS18114 has strong outflows and a Class I YSO <cit.>. YSOs in I18223 may be the result of a cloud-cloud collision <cit.>. As presented in Table <ref>, the distance range of the clumps in the selected sample is from 0.9 to 5.4 kpc, which allows us to investigate the role of turbulence on various spatial scales. Also, the different locations in the Milky Way help to confirm the generality of the conclusion. According to the TCA model proposed by <cit.>, the strength of turbulence may be affected by the evolutionary stage of the source. Therefore, the selected sample includes both infrared (IR) dark and bright clumps to ensure that those clumps are at different evolutionary stages. For the filamentary clumps, we adopt the value of the main-axis size measured in <cit.>. Other clump sizes are roughly measured from the ATLASGAL dust maps. §.§ VLA Observations All the observations were executed with the VLA in its D/DnC configuration at K band from 2013 to 2014 (project ID: 13A-373, PI: Ke Wang; 14A-272, PI: Patricio Sanhueza). The settings of the observations are listed in Table <ref>. We specially configured the correlator to cover the frequency range of 18-26.5 GHz, containing several typical lines which trace the dense gas (e.g., the NH_3 inversion transition lines from (1,1) to (7,7); details can be found in Table <ref>). The spectral resolution of those observations is 15.625 kHz (about 0.2 km s^-1), which is similar to <cit.>. The largest recoverable angular scale in D configuration at this frequency is approximately 60” within the ∼ 2' primary beam. Due to an incomplete u-v coverage in a snapshot observation, the largest recoverable angular scale can be even smaller [https://science.nrao.edu/facilities/vla/docs/manuals/oss/performance/resolution]. Thus the line width may be underestimated because of missing flux, which would lead to a higher fraction of weak turbulence <cit.>. Even so, since the resolved NH_3 core sizes in this work are much smaller than the largest recoverable angular scale, the measured NH_3 line widths should not be severely affected by the more diffuse velocity components from larger scales (typically 1 pc). Besides, similar studies lacking single-dish data also estimated line width biases through simulations, finding that the effect is not significant enough to alter conclusions, for instance, <cit.> (about 20%) and <cit.> (3%-10%). The maximum reduction of line width found in those studies is about 20%, leading to a change in the Mach number by 13% in our results, which is not enough to significantly alter conclusions. §.§ Data reduction The VLA antenna baseline and the atmospheric opacity have first been corrected using Common Astronomy Software Applications (CASA) 4.7.2 <cit.>. Then the standard calibration solutions of the bandpass, flux, and gain are applied to the corrected raw data. Those calibrators can be found in Table <ref>. The systematic uncertainty from the flux calibration is around 10%, consistent with similar observations <cit.>. As most of the clumps in the selected sample are at the early evolutionary stage, the signal of the emission lines can be weak. Considering the reliability of the data, we applied two different settings of parameters in tclean, a CASA task which uses the multi-scale CLEAN algorithm <cit.> in the u-v space.
The specific parameter settings used in multi-scale CLEAN are1, 3, 9, and 27 pixels size to simultaneously recover both point sources and extended structures. In the primary setting, we use the natural weighting to optimize for searching the low signal-to-noise ratio (S/N) lines. Observed results are listed in Table <ref>. The aim of this setting is to detect the weakest signal and generate a complete detection result. Then we applied the second setting onto the clumps that contain the detected signal (S/N>3) from the primary setting, with a robust parameter of 0.5 under the Briggs weighting. Some of the clumps (e.g., G15.07, CFG64-B) that are detected in the first setting with the peak-S/N less than 5 are not detected in the second setting. We adopt the result which is based on the second setting as the final input data for the analysis. For example, after second tclean task, the mean synthesized beam of most detected data cubes is 2.8”×4.2” with the mean position angle at 65^ ∘. Different observation configurations resulted in a slightly different angular resolution and the position angle, but the range of this difference is less than 20% of the corresponding mean value. The physical resolution is determined both by the distance and the angular resolution. As presented in Table <ref>, the range of the distance is from less than 1 kpc to more than 5 kpc, the physical resolution also covers large range which may lead to a biased conclusion because of the selection effect. However, this effect is not seen in this study, as we discuss in Section <ref> and Section <ref>. Due to the different integration times and the weather, the resultant rms noise range is from about 4 mJy beam^-1 to 10mJy beam^-1 with the mean noise value at 4.2 mJy beam^-1. The following fitting is done on second tclean results, and each data cube has been smoothed to the same synthesized beam shape to NH_3 (1,1). In Table <ref>, we present the detection result of each lines in this observation. Most clumps are detected in NH_3 (1,1) and (2,2) lines, and 10 of 32 are detected in NH_3 (3,3), which is helpful to revealing the ortho to para ratio (OPR). The high-excitation transitions of NH_3 of (4,4) or higher are also undetected.Among those clumps, CGF49_s1 has been detected in NH_2D , with a few pixels. CGF49_s1 is optically dark but IR-bright at 4.5 μm. As the fitted gas properties in Table <ref>, CGF49_s1 is over 30 K with the lowest column density of this sample. This special gas properties may be resulted by the nearby HII region <cit.>. G111_P8 and G14.99 have been detected in H_2O masers, a well known signpost of star formation <cit.>. Those two clumps harbor IR bright sources. But during the fitting process, we found that the noise of G111_P8 is relatively high because of the bad weather conditions during observations. Only few pixels (less than 5) can be fitted to derive the gas properties, so we refrain from analyzing this source. The SNR of the H_2O maser (locates at RA=274.571, Dec=-15.969with a peak flux 28.1 mJy beam^-1 in Table <ref>) in G14.99 is much higher than G111_P8 and the fitted result shows that G14.99 has a typical hot and dense core with supersonic turbulence. This detection result of G14.99 is consistent with the maser line and in an earlier study <cit.>. 
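The two-pass imaging strategy described above (multi-scale CLEAN, natural weighting for the detection pass, then Briggs robust=0.5 for the final cubes) can be summarized by a CASA call along the following lines; the measurement-set name, image size, cell size, and threshold are placeholders rather than the values actually used for this sample.

```python
# Final imaging pass (run inside CASA); natural weighting was used for the detection pass
tclean(vis='clump.ms',                      # placeholder measurement set
       imagename='clump_nh3_11_robust0.5',
       spw='0',                             # spectral window containing NH3 (1,1)
       specmode='cube',
       restfreq='23.6944955GHz',            # NH3 (1,1) rest frequency
       width='0.23km/s',
       deconvolver='multiscale',
       scales=[1, 3, 9, 27],                # pixel scales quoted in the text
       weighting='briggs', robust=0.5,
       imsize=512, cell='0.6arcsec',        # placeholder values
       niter=10000, threshold='12mJy',      # e.g. ~3 sigma for a ~4 mJy/beam rms
       interactive=False)
```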
The lack of other H_2O maser sources <cit.> in G14.99 may be due to the S/N or to the variability of the H_2O maser <cit.>. In this observation, G14.2_P1 has a methanol maser (located at RA=274.552, Dec=-16.825, with a peak flux of 105.3 mJy beam^-1 in Table <ref>), another tracer of massive star formation. Although G14.2_P1 is IR-dark at 8 μm, the methanol maser indicates that a protostar has formed in the center of G14.2_P1. In total, these observations detected two H_2O maser lines, one CH_3OH maser line, and one NH_2D line among the 32 selected clumps. Considering the detection result, the following work is mainly based on the NH_3 lines from (1,1) to (3,3). § IDENTIFICATION AND FITTING §.§ Identification of cores As we introduced before, a core with more than 100 Jeans masses may fragment into many smaller cores. This fragmentation occurs in part of this sample. For example, Figure <ref> shows that there are two dense cores in G14.2-P3. Thus, we integrated the intensity maps of the NH_3 (1,1) line for each detected clump. With the maps in Figure <ref>, we find that there are multiple cores in some observed clumps. NH_3 is well associated with cores in dust continuum maps <cit.> and can be used to trace the dense gas in massive star-forming regions <cit.>. Although NH_3 (1,1) may be optically thick in the densest parts, its hyperfine structure still traces the gas temperature and the column density relatively well <cit.> in this sample. Thus, the identification of the cores and the analysis of their gas properties are mainly based on the fitted results from the NH_3 emission lines. In the rest of this work, we use "cores" to refer to the NH_3 cores. As 21 VLA pointings were detected in NH_3 (1,1) with more than five pixels per pointing, the criteria for defining a core in the following study are: (1) it contains at least 10 pixels (larger than the beam size, which means the core is resolved); (2) the fitting result must be continuous and the uncertainty of the parameters should be less than 10%; and (3) it contains only one column density peak in the center of the core. The last requirement is based on the error study of <cit.>: blending effects may affect the following study of the gas evolution <cit.>. We fitted the temperature and column density radial profiles of the recognized cores with a single Gaussian component, checked the residuals, and ensured that there is little evidence for a second component in the current data. We used the imfit task of CASA to determine the basic parameters of the candidate cores in the integrated NH_3 (1,1) intensity maps and found 32 cores in 21 VLA pointings. We labeled these cores “parent molecular cloud+core number” in Table <ref>. Most pointings (14 out of 21) have one core, but 7 pointings are multi-core systems: four clumps (G14.2-P1, G14.2-P3, G11.11_s5 and G11.11_s11) have two cores; two clumps (G79-C19 and G48.64) have three cores; and IRAS18114 has four cores. Further analysis is discussed in Section <ref>. §.§ Ammonia spectral line fitting We used the Python package PySpecKit <cit.> to fit the NH_3 (1,1) to (3,3) lines. The S/N threshold is 3σ and the model <cit.> allows us to fit six parameters (excitation temperature (T_ex), kinetic temperature (T_k), column density (N(NH_3)), ortho-to-para ratio (OPR), centroid velocity (V_LSR), and velocity dispersion (σ_v)) together. For the cores detected in the NH_3 (3,3) line, we also fit their OPR.
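As an illustration of this fitting step, the sketch below fits a single extracted NH_3 spectrum with pyspeckit's built-in ammonia model. The file name, initial guesses, and the choice to fix the ortho fraction are placeholders; the actual per-pixel cube fitting in this work was parallelized and used the multi-line setup described above.

```python
import pyspeckit

# Load one extracted NH_3 spectrum (placeholder file; velocity axis assumed to be in km/s)
sp = pyspeckit.Spectrum('G48p65_c1_nh3.fits')

# The ammonia model has six parameters: T_kin, T_ex, log10 N(NH_3), sigma_v, V_LSR, ortho fraction
sp.specfit(fittype='ammonia',
           guesses=[15.0, 5.0, 14.7, 0.3, 45.0, 0.0],        # illustrative initial guesses
           fixed=[False, False, False, False, False, True])  # fix OPR when (3,3) is undetected
print(sp.specfit.parinfo)
```

For full maps, the same model can be applied pixel by pixel with pyspeckit's cube fitter (fiteach), which is how per-pixel parameter maps like those shown for G48.65_c1 are usually produced.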
This parameter may trace the evolution of massive star-forming regions <cit.> but has rarely been detected in previous NH_3 studies <cit.>. Since the NH_3 (3,3) line is rarely detected in massive star-forming regions and can be affected by non-local thermodynamic equilibrium (non-LTE) excitation <cit.>, the fitted OPR may have large uncertainties and only provides a lower limit. The OPR values obtained in this work should therefore be taken as lower limits for reference only, rather than as strong constraints. In the fitting process, several cores do not converge, so we have excluded them from further statistical analysis. Figure <ref> shows the integrated NH_3 (1,1) intensity maps of all the successfully fitted cores and Table <ref> presents the fitted parameters of each core. We used parallel computing to speed up the fitting of more than 10^9 data points. Each fitted line was checked both by its residuals and by manual inspection. The relative uncertainties of all fitted parameters in each line are required to be less than 10%. To avoid local minima in the fit, we tried multiple sets of initial guesses evenly distributed throughout the parameter space and compared their residuals to obtain the best fit. The fitting uncertainty mainly comes from the low S/N pixels. We tested raising the threshold to 5σ and found that too few data points could then be fitted to obtain statistics. The current threshold balances sensitivity to weak signal against the uncertainty. We also tested one- and multi-velocity-component models in the fitting process to avoid missing velocity components. By visual inspection, we found that all the cores are dominated by a single velocity component. In addition, we provide Figure <ref> as an example of the spectral line fitting in the appendix for reference. Figure <ref> presents the maps of the fitted parameters of G48.65_c1. The maps of the remaining cores can be found in Figure <ref> and Figure <ref> in the appendix. §.§ Mach number calculation The Mach number is defined as σ_V_non-th/c_s and the sound speed is calculated from c_s = √(k_B T_kin/(m_H μ_p)), the same as in our previous work on G35.20-0.74 N <cit.>. We used the velocity dispersion along the line of sight, similarly to <cit.> and <cit.>, to derive the non-thermal velocity dispersion: σ_V_obs^2 = σ_V_th^2 + σ_V_channel^2 + σ_V_grad^2 + σ_V_non-th^2, where σ_V_th is the thermal velocity dispersion of the tracer, σ_V_th = √(k_B T_kin/m_NH_3) <cit.>; σ_V_obs and T_kin are fitted parameters; and m_NH_3 is the mass of the ammonia molecule. The term σ_V_channel accounts for the effect of the channel width (0.23 km s^-1 in this work): σ_V_channel = 0.23 km s^-1/(2√(2 ln 2)). In most previous works, the much larger channel width (0.6 km s^-1) of the VLA observations <cit.> may be the reason for the detection of supersonic turbulence. The term σ_V_grad accounts for the unresolved velocity gradient within the synthesized beam. We fitted a uniform large-scale velocity gradient for each core and estimated the velocity difference between two opposite edges of the synthesized beam, which may enlarge the measured velocity dispersion.
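A minimal numerical version of this decomposition is sketched below; the numbers in the example call are illustrative, and the gradient term is taken as a pre-computed per-beam velocity difference, converted to a dispersion in an assumed (FWHM-like) way. Note that at T_kin = 15 K the NH_3 thermal dispersion is about 0.09 km s^-1 (0.2 km s^-1 FWHM), which is why a ∼0.2 km s^-1 channel width is needed to resolve it.

```python
import numpy as np

K_B = 1.380649e-23           # J/K
M_H = 1.6735575e-27          # kg
MU_P = 2.33                  # commonly adopted mean molecular weight per free particle (assumption)
M_NH3 = 17.03 * 1.6605e-27   # kg

def mach_number(sigma_obs, t_kin, channel=0.23, dv_beam=0.0):
    """Mach number from the observed line-of-sight dispersion (velocities in km/s)."""
    sigma_th = np.sqrt(K_B * t_kin / M_NH3) / 1e3              # thermal dispersion of NH3
    sigma_ch = channel / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # channel-width term
    sigma_grad = dv_beam / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # per-beam gradient term (assumed form)
    sigma_nt2 = sigma_obs**2 - sigma_th**2 - sigma_ch**2 - sigma_grad**2
    c_s = np.sqrt(K_B * t_kin / (MU_P * M_H)) / 1e3            # sound speed of the bulk gas
    return np.sqrt(max(sigma_nt2, 0.0)) / c_s

# illustrative values: sigma_obs = 0.35 km/s at T_kin = 15 K gives a roughly transonic M ~ 1.4
print(mach_number(sigma_obs=0.35, t_kin=15.0))
```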
For most filaments, their flat gradients are about 0.5 km s^-1 pc^-1 <cit.>, which contributes less than 2% at the scale of the synthesized beam (the typical dispersion is larger than 0.1 km s^-1) to the observed velocity dispersion, which can be ignored in most cases.Since the conversion of the fitted NH_3 column density to H_2 column density may have large uncertainty, as noted in <cit.>, and several cores have quite flat H_2 column density distributions, so the potential caveats exist that the region defined from the NH_3 column density peak may not trace dense and cold gas.In the following analyses, we chose the sub-region (the red circles in figure <ref>, figure <ref> and figure <ref>) with the lowest Mach number which covers more than three-beam size (more than 20 data points) for each core to study the core-averaged Mach number, which is listed as "M_n" in Table <ref>. The reasons of this data selection are: (1) avoiding the influence from the external environment on the intrinsic turbulence and (2) these regions with the lowest Mach number are usually associated with the highest column density and can reveal the properties of the gas where the massive star may form. § RESULTS Figure <ref> and Table <ref> present the distribution map and the statistical results of the main parameters of G48.65_c1. We add the median value as an auxiliary parameter because of the asymmetrical profile in part of the parameters' histograms. We calculated the mean value, the median value and the dispersion of each core's map and plotted the histograms of those statistical parameters in Figure <ref>. §.§ Cores' Parameters and their Pearson correlation coefficient§.§.§ Temperature As the temperature histograms presented as the first row in Figure <ref>, 32 cores are mainly located in two ranges: 10-20 K and 40-50 K. Those two ranges in temperature histograms are consistent with two typical star-forming evolutionary stages: the prestellar (10-20 K) and the protostellar (40-50 K). Both the distributions of the histogram of the mean value and that of the median value are similar. G11.11_S5-c2 is the coldest core (about 6 K) of the sample. CFG49_S1-c1 (about 35 K) may have a protostar <cit.>. Based on their temperatures, we divide cores into the prestellar group and the protostellar group, and represent them with different colors (blue and red) in the following statistical figures. We also check their IR images and previous studies, ensuring that the classification is consistent with previous studies.The cores in the protostellar group are IRAS18114 (4 cores: c1 to c4), CFG49_S1-c1 (HII region) and G14.99-c1 (maser) which contribute about about 18% of the sample. The temperature of CFG49_S1-c1 is relatively low in the protostellar group. On the contrary, the outer region of G14.99-c1 has the highest temperature in the protostellar group, but the temperature in the center of G14.99-c1 is much lower. The trend of the temperature of G14.99-c1 toward the center is decreasing. The four cores in IRAS18114 are typical protostellar with the warm and dense gas and form a multi-core system in Figure <ref>. In the protostellar group, the gas motion may be highly affected by the environment, the fragmentation or the embedded protostellar outflows, which enhance the turbulence of the gas. The histogram of the temperature in prestellar group has a Gaussian profile. The peak is located at 14 K with the dispersion at 3 K. 
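The core-averaged Mach numbers used in the next section are taken from a small patch around the Mach-number minimum of each map, as described above. A minimal version of that selection, assuming the maps are plain 2D arrays with NaNs outside the fitted footprint, could look as follows (the way the patch is grown is illustrative):

```python
import numpy as np

def core_average_region(mach_map, n_pix_min=20):
    """Boolean mask of the ~n_pix_min valid pixels closest to the Mach-number minimum."""
    valid = np.isfinite(mach_map)
    iy, ix = np.unravel_index(np.argmin(np.where(valid, mach_map, np.inf)), mach_map.shape)
    yy, xx = np.indices(mach_map.shape)
    dist = np.hypot(yy - iy, xx - ix)
    # grow a circle around the minimum until it contains at least n_pix_min valid pixels
    for r in np.arange(1.0, max(mach_map.shape)):
        mask = (dist <= r) & valid
        if mask.sum() >= n_pix_min:
            return mask
    return valid

# toy usage on a synthetic map with a shallow minimum near the middle
yy, xx = np.indices((40, 40))
toy = 0.5 + 0.002 * ((yy - 20)**2 + (xx - 25)**2)
mask = core_average_region(toy)
print(mask.sum(), toy[mask].mean())   # >= 20 pixels, mean Mach number close to the map minimum
```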
Those values are similar to previous studies of prestellar cores <cit.> which can reveal the initial condition of the turbulence. §.§.§ Column density The range of the NH_3 column density is mainly from 10^14.2 cm^-2 to 10^15.1 cm^-2 with the peak at 10^14.7 cm^-2. This peak value is lower than the mean column density of the massive cores in <cit.>, but higher than that of <cit.>. Three data points that are out of this range are belong to CFG49_S1-c1 (10^12.5 cm^-2), G79.3_C19-C3 (10^15.8 cm^-2), and G11.11_S5-c2 (10^16.1 cm^-2). The column density values of CFG49_S1-c1, G79.3_C19-C3, and G11.11_S5-c2 are different to other cores. Similar situations also occurred in their temperatures. This may indicate their different evolutionary stages or environments. Although other cores belong to different temperature groups, they have similar column density values which means that the column density does not change a lot during the evolutionary stage: the evolution from the prestellar to the protostellar core may not enhance the cores' column density.Based on those fitted column density values and the sizes, we estimated cores' mass based on the [N_NH_3/N_H_2] value (4.6×10^-8) from <cit.>. The mean and median mass of all cores is 17.0/8.2M_⊙ with uncertainties at about 10%. This mean and median mass of cores is much larger than that of the low mass cores in <cit.>. Otherwise, this mean/median mass is consistent with the results in <cit.> after the same mass conversion. Since the mean column density of our sample is slightly lower than that of <cit.> and cores in <cit.> are typical candidates of massive stars (<cit.> have used smaller convert factor as 3×10^-8), we deduce that cores in our sample may be at the earlier evolutionary stage, which would accrete the gas until growing into larger cores similar to the cores in <cit.>. As the following work of <cit.>, <cit.>found transonic turbulence in similar cores. We expect to revealing the properties of the turbulence in our earlier cores.§.§.§ Velocity dispersion The histogram of the fitted (or observed) velocity dispersion in Figure <ref> has a long tail up to about 1.6 km s^-1 with the peak at 0.35 km s^-1. Considering with the spectral resolution in this work (0.23 km s^-1), the velocity dispersion in most cores is resolved. Even without the correction of the thermal motion and other effects, nearly one-third of the sources are sub- or transonic. §.§.§ Parameter correlationsFigure <ref> shows the relations of main parameters: temperature, column density, velocity dispersion, and the OPR. The temperature and the velocity dispersion have a positive correlation with the correlation parameter at 0.73. This correlation parameter is reasonable because the higher temperature means the stronger thermal motion of the gas, which results in the larger velocity dispersion. From the colder group to the warmer group, the mean velocity dispersion becomes larger as the mean temperature raises. On the other hand, the core with the higher temperature indicates the possible complex gas motion there which leads to the larger velocity dispersion. This enhancement may be more important in the protostellar group which could explain the weak correlation in the warmer group between the temperature and the velocity dispersion. 
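The mass estimate mentioned in the column-density subsection above amounts to converting the mean NH_3 column density into an H_2 column density with the abundance [N_NH3/N_H2] = 4.6×10^-8 and multiplying by the core area and the mean molecular mass. A hedged sketch is given below; the example core size, distance, and column density are illustrative, and μ_H2 = 2.8 is an assumed mean molecular weight per H_2 molecule.

```python
import numpy as np

M_H = 1.6735575e-24     # g
M_SUN = 1.989e33        # g
PC = 3.0857e18          # cm
X_NH3 = 4.6e-8          # [N_NH3 / N_H2] abundance adopted in the text

def core_mass(n_nh3_cm2, radius_arcsec, distance_kpc, mu_h2=2.8):
    """Core mass (M_sun) from the mean NH_3 column density over a circular core."""
    n_h2 = n_nh3_cm2 / X_NH3                                          # H2 column density
    radius_cm = radius_arcsec * distance_kpc * 1e3 * 4.848e-6 * PC    # small-angle approximation
    area = np.pi * radius_cm**2
    return mu_h2 * M_H * n_h2 * area / M_SUN

# illustrative numbers: N(NH3) = 1e15 cm^-2, radius 5", distance 4 kpc -> ~14 M_sun,
# of the same order as the mean core mass quoted above
print(core_mass(1e15, 5.0, 4.0))
```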
Since the NH_3 column density lies within a narrow range, the correlation between the column density and the other parameters is rather weak. §.§ Turbulence In a typical scenario of massive star formation, the turbulence of the gas gradually decays toward the center of the clump <cit.>. In the innermost region (the core scale, about 0.1 pc), the turbulence could become sub- or transonic before the protostar forms and provides feedback <cit.>. In order to avoid the interference of the surrounding gas in the turbulence study, we selected data points around the local minimum in the Mach number map of each core, which is usually the center of each core. The selected sub-regions are resolved by three synthesized beams (covering more than 20 points). §.§.§ The distribution of the Mach number The histogram in the middle row of Figure <ref> shows the statistical result of the Mach number. Its distribution is very different from that of the velocity dispersion in the fourth row of Figure <ref>. The mean value of the Mach number histogram is 1.3, which means that the turbulence of those regions is mainly transonic, instead of the supersonic turbulence that has been predicted to be necessary in TCA. However, the multi-peak distribution indicates that this mean value is not sufficient to describe the overall properties of the turbulence in massive star-forming regions. First, a multi-Gaussian model should be used to fit the Mach number distribution in order to determine more reliably whether different components exist. We used a Gaussian mixture model (GMM) instead of a simple Gaussian fit. Based on the scikit-learn GaussianMixture, we estimated the number of components of the Mach number distribution of the 32 cores, using both the Bayesian information criterion (BIC) and the Akaike information criterion (AIC) to select the optimal model for different numbers of components. The AIC suggests three to five Gaussian components as the most probable model, and the BIC suggests three components. Fitting the multi-Gaussian model with three components, we found three main components that are relatively independent of each other. Their peaks and dispersions are 0.4±0.1, 1.2±0.2, and 2.4±0.3. However, the sample size of each component is small (about 10), which is insufficient to definitively demonstrate the necessity of three components. Moreover, the Mach number distribution in this work exhibits continuity, and this continuous trend is consistent with the evolutionary stages of massive stars, which are not clearly delimited. Thus, we do not use "components" to describe the Mach number distribution, and instead use velocity regimes such as "subsonic, transonic, and supersonic" to characterize its different parts. The subsonic regime has 16 cores (50%). Except for G14.99-c1, most cores in this part are typical prestellar cores with cold (11-17 K) and dense (10^14-15 cm^-2) gas. The distribution maps of their column density and temperature are relatively flat. The transonic regime has seven cores (about 22%). Cores in this regime are warmer (11-22 K) than those in the subsonic regime, with similar column densities. The distribution maps of the column density and temperature of those cores are not as flat as those of the cores in the subsonic regime. Those warmer cores may be at a later evolutionary stage. The supersonic regime has nine cores (about 28%). The gas properties of cores in this regime are very different from each other.
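The model-selection step described above can be reproduced with a few lines of scikit-learn; the sketch below uses an illustrative stand-in array of 32 core-averaged Mach numbers (not the actual measured values) and compares the BIC and AIC for one to five components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative stand-in for the 32 core-averaged Mach numbers (not the measured values)
rng = np.random.default_rng(1)
mach = np.concatenate([rng.normal(0.4, 0.1, 16),
                       rng.normal(1.2, 0.2, 7),
                       rng.normal(2.4, 0.3, 9)]).reshape(-1, 1)

for n in range(1, 6):
    gmm = GaussianMixture(n_components=n, n_init=10, random_state=0).fit(mach)
    print(n, gmm.bic(mach), gmm.aic(mach))       # pick the number of components minimizing BIC/AIC

best = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(mach)
print(best.means_.ravel(), np.sqrt(best.covariances_).ravel())   # component peaks and dispersions
```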
CFG49_S1-c1 has warm (about 35 K) and thin (10^12.5 cm^-2) gas. Its high Mach number is more likely due to the shock from the nearby HII region rather than to protostellar feedback. In contrast, G11.11_S5-c2 is very cold (6 K) and dense (10^16.2 cm^-2), similar to G79.3_C19-C3 (10 K and 10^15.8 cm^-2). Those two cores are neither typical prestellar nor typical protostellar cores. Their high Mach numbers may be due to gas infall driven by the interaction with other cores in the same multi-core system. We discuss this further in Section <ref>. The remaining cores have large temperature dispersions. The histogram of the Mach number reveals that, based on this sample, sub- and transonic turbulence is prevalent (21 of 32, about 72%) in massive star-forming regions and is closely associated with the early evolutionary stage. Since the mean core mass is not yet clearly in the high-mass regime, we also selected the cores more massive than 16 M_⊙ and find that about 78% of them exhibit subsonic or transonic turbulence. This means that the conclusion is not driven by the ubiquitously weak turbulence present in low-mass cores, as might be assumed. In the histograms of the Mach number, we found multiple regimes, suggesting that the intensity of turbulence varies between evolutionary stages and tends to increase with evolution until becoming supersonic. This poses a challenge to the TCA <cit.>: this model requires supersonic turbulence in the early evolutionary stage to slow down the gravitational collapse so that massive stars can form. However, this is inconsistent with our results. Our study of the turbulence in massive star-forming regions indicates that sub- and transonic turbulence cannot provide enough pressure. Therefore, other pressure sources, such as strong magnetic fields, may replace the role of supersonic turbulence. The TCA model of massive star formation needs to be revised accordingly or replaced by other models, for instance, GHC. We discuss this further in Section <ref>. §.§.§ Correlation with other parameters The second row of Figure <ref> presents the relations between the Mach number and other parameters. The Mach number has a weak relation with the temperature (correlation parameter of 0.36). Since the Mach number is calculated from both the kinetic temperature and the velocity dispersion, while the velocity dispersion displays a tight relation with the temperature, the weak positive correlation is expected. This is different from the behavior of the velocity dispersion. As we discussed in Section <ref>, both the larger thermal motion and the possibly more complex gas motion in warmer cores could enlarge the velocity dispersion. However, the thermal-motion part has been subtracted from the Mach number used in the second row of Figure <ref>. Thus, the relation between the Mach number and the temperature can reveal how the turbulence changes with rising temperature. In all of the cores, the turbulence rises with the temperature. However, this trend no longer exists in the protostellar group: the turbulence remains supersonic while the temperature changes from 40 K to 50 K. As mentioned in Figure <ref>, G14.99 is not included in the figure because of its extremely high value, which is biased by the bright point source in the center of the core. The gas around the point source is still cold, with a small velocity dispersion. The column density has only a very weak link with the Mach number.
As what we discussed from Figure <ref>, the column density keeps at 10^14-15 cm^-2 and does not change a lot at different evolutionary stages. Thus, the change in the Mach number has little influence on the column density. The correlation parameter of the Mach number and the velocity dispersion is 0.78 from the second row of Figure <ref>. But their profiles of the histograms are different. Beside the turbulence which is traced by the Mach number, the velocity dispersion contains the channel width, thermal motions, and other effects. In several early studies, the velocity dispersion are roughly used as the replacement of the Mach number to study the turbulence. The tightly linking between those two in this study supports this replacement. But our research reveals that the velocity dispersion (one-third are sub- and transonic) only inherits some properties of the Mach number (about 72% are sub- and transonic) and their profiles could be different from each other. §.§.§ Turbulence and the temperature and column density distribution Besides the effect from the turbulence on the whole core, the gas motion could also reshape the profile in the core. We fit the temperature and column density radial profile of each core with the power law model as T ∼r^ - p_T and N ∼r^ - p_d, and we plotted those two parameters with the Mach number in Figs. <ref> and <ref>. We divide the total Mach number data into several bins with a step of 0.5. For each bin, we calculate the mean/dispersion value and plot them in Figs. <ref> and <ref>. Cores in the same clump are labeled with the same color. We present the sampling method and fitting model in Figure <ref> of the appendix, with details introduced in the corresponding paragraphs of the appendix. The whole trends in those two figures are similar: as the Mach number increases, both the temperature and column density radial profile become flatter. In the high-value end of the Mach number, this trend becomes ambiguous. The reason of this trend may be the extra pressure of the turbulence. The stronger turbulence could disturb the original gas structure of the core and support more material which results in a larger core with a flatter profile. On the contrary, the sub- and transonic turbulence cannot support the growing gravity potential so the gas and dust will fall into the central region more easily, which makes a steeper profile. But this trend may not exist for a particular multi-core system. We first calculate the mean temperature and column density of seven multi-core systems. Besides IRAS18114 (about 45 K), all the other systems are cold (about 12 K) and dense. As IRAS18114 has several YSOs, its warm gas may be heated by those YSOs. Assuming that all the cores in seven multi-core systems are formed together from their parent molecular cloud, their initial conditions and environment should be similar. However, from Figs. <ref> and <ref>, their profiles indicate their complex evolutionary stages. For example, the range of the column density profile parameters of cores in the large filament:“Snake” <cit.>, which contains clumps from G11.11_S5 to G11.11_S11 are from 0.06 to 1.04. Other multi-core systems have similar situations both in the column density and the temperature profiles. This large separation means that cores with similar conditions in the same clump could still have different evolutionary stages. 
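The profile slopes p_T and p_d used above come from fitting power laws to the radial profiles of each core. A minimal sketch of such a fit with scipy is shown below, assuming the profile has already been extracted into radius/value arrays; the sample data are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(r, a, p):
    """Simple power-law profile, e.g. T(r) = a * r^(-p)."""
    return a * r**(-p)

# illustrative radial temperature profile: T ~ r^-0.3 with a little noise
r = np.linspace(0.02, 0.1, 15)                      # pc
rng = np.random.default_rng(2)
temp = 12.0 * (r / 0.05)**(-0.3) * (1 + 0.03 * rng.standard_normal(r.size))

popt, pcov = curve_fit(power_law, r, temp, p0=[10.0, 0.3])
perr = np.sqrt(np.diag(pcov))
print(popt[1], perr[1])     # recovered slope p_T (~0.3) and its uncertainty
```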
§ DISCUSSION§.§ Spatial distributions and core evolution We estimated the masses of each cores and found that most multi-core systems have equally shared the total mass of the clump with similar gas properties and profile slopes. However, the cores in G11.11_S5 (dual system with 135.3 M_⊙) and G79.3_C19 (triple system with 8.6 M_⊙) are very different. The masses of G11.11_S5-c2 (120.1 M_⊙, 88.8%) and G79.3_C19-c3 (8.1 M_⊙, 94.2%) <cit.> dominate their whole system. Those two cores are very cold and dense: comparing with other cores in their systems, G11.11_S5-c2 and G79.3_C19-c3 are 6-10 K, with the column density being higher than an order of the magnitude. Besides, they are filled with the highly turbulent gas which has the Mach number at 2-3 (supersonic).As studies of hub-filament systems suggest, the mass distribution of the multi-cores systems could be similar <cit.>. G11.11_S5-c2 and G79.3_C19-c3 are very different from those studies. Their existence indicates that the fragmentation in star-forming regions may be affected by some other factors that result in different masses and gas properties. For example, <cit.> suggest the initial gas streams efficiently feed the central massive core in the SDC335 hub-filament system. The high efficiency can lead to a over-dense region in the center, supporting the different mass ratio among cores in such a system.Similar to the SDC335 hub-filament system, G11.11_S5-c2 and G79.3_C19-c3 locate in the center of their multi-core systems, which can be well explained by the hub-filament system model: the gas falls into the central core along the filament structure and prompts this core into the evolutionary stage later than other outer cores; while the interaction of the gas enhances the line width and raises the Mach number. This scene is consistent with the simulation result of <cit.>: under the low Mach number (e.g., Mach number at 3 which is similar to ours), the fragmentation is inhibited independent of the magnetic support and the filament structure appears. <cit.> has studied the relationship between the turbulence, core number and geometrical morphology of the fragments. Their result supports that the subsonic turbulence helps form dense cores in the slender cloud under weak magnetic field. In Fig. <ref>, G14-P1 and P3 are dual systems which only have the simple distribution. G11.11_s5 and s11 are also dual systems but they are part of the larger filament G11.11 ( “Snake”). This is a long S-shape filament with several dense cores <cit.>. In Table <ref> and Figure <ref>, cores in G11.11 are all subsonic with the mean Mach number at 0.4, except for G11-s5-c2, which is totally supersonic. The triple system G79-C19 is similar to G11.11. The 3 cores in G79-C9 have a C-shape distribution, and the Mach number of other cores is at 0.4 except for G79-C19-c3: a supersonic dense core in the center of the threadlike filament. Another triple system G48.65 is a straight filament with three critical transonic cores. All those multi-core systems are slender and most cores are sub- or transonic. However, IRAS18114 is different from them: four supersonic cores form a clump with the irregular spatial distribution. Our work shows that the sub- and transonic turbulence core prefers to be formed in the slender filament but the supersonic core prefers the irregular clump. 
This trend is same as the result in <cit.>: the spatial distributions could affect the gas properties and cores' evolution.Based on those multi-core systems, we deduce that most multi-core systems may have similar cores with the same evolutionary stage at first. However, the evolution could be affected by the spatial distributions (both the shape and the relative location) and leads to the different evolutionary stages of cores. The dense and cold turbulent gases in G79.3_C19-c3 and G11.11_S5-c2 are probably due to the spatial distribution. §.§ Turbulence and the core's evolution As we mention in the introduction, in several recent studies ofmassive star-forming regions, turbulence has been resolved as transonic or even subsonic under sufficient high spectral and spatial resolutions. However, most of these studies are case studies and it is difficult to demonstrate whether sub- and transonic turbulence is common in massive star-forming regions. Yet, there are a few statistical studies with large samples, among which the more representative ones are <cit.> for low-mass stars and <cit.> for high-mass stars.<cit.>studied the gas properties and dynamics of 264 cores using NH_3 (1,1) and (2,2) lines. Similarly to us, they found that the non-thermal line width decreases with decreasing temperature. They also deduced that the core's environment plays an important role in turbulence and core's evolution, which is similar to our previous subsection. Limited by the spectral resolution, their average line width (0.74 km/s) is larger than ours (0.54 km/s). However, their study still implied the possibility of subsonic turbulence. As we presented in Section <ref>, the core mass and the density of <cit.> indicate that their study focuses on low-mass stars. The general properties of turbulence in massive star-forming regions may be different.<cit.> have studied the properties and dynamics of the gas in 62 high-mass star-forming regions with NH_3 (1,1) and (2,2) lines. They identified 174 cores and derived their line width (1.1 km/s), temperature (18 K), NH_3 column density (10^15 cm^-2), and mass (67 ). Their following work <cit.> further found that transonic turbulence exists in massive star-forming regions and the fragmentation of cores cannot be explained solely by the support of thermal or turbulent pressure. With sufficiently high spectral and spatial resolutions, we have found sub- and transonic turbulence in massive star-forming regions, further confirming that such weak turbulence is common (about 72%). Since the cores in our sample are slightly less massive, colder, and more tenuous than that of <cit.>, our cores are more likely to at the earlier stages. This explains the narrower line width and weaker turbulence we measured. Besides, <cit.> have pointed the influence from the associated YSOs or clumps onto the turbulence and <cit.> also deduced that the non-thermal motion could be enhanced in the filaments by the feedback or accretion. They also fitted the radial temperature distribution of the cores by power-law and found the range of the slope is from -0.18 to -0.35 which is similar to ours. Combined with our results, their inferences lead us to prefer that the role of turbulence in massive star formation as follows: the turbulence is weak (sub- and transonic) at the early stage. 
Then it intensifies through feedback and accretion or core interactions until it becomes supersonic, which in turn affects the evolution of the host core (e.g., by supporting more accreted material and forming a high-mass star). In this picture, other mechanisms, such as magnetic fields, are needed to provide enough support in the early stages, when subsonic or transonic turbulence dominates the gas, and the TAC model <cit.> needs to be revised to include these factors in order to account for the formation of massive stars: a combination of turbulence and other mechanisms drives the evolution from cores to massive stars. §.§ Effects of the distance Given the broad distance range (0.9-5.4 kpc) of our sample, we discuss the potential selection effect in this study. To this end, we plot the distance versus the other parameters of each core in Figure <ref>. If a selection effect were present, the more distant cores would have higher temperatures and column densities, since such cores are more easily detected; these warmer cores would then be more likely to be at later evolutionary stages, with higher Mach numbers. In Figure <ref>, both the temperature and the column density of the cores show only a weak dependence on distance. Although some of the more distant cores have relatively higher temperatures and column densities, their Mach numbers are hardly affected by the distance. This indicates that the selection effect is not important. Another effect is the physical pixel scale corresponding to a similar angular resolution (3") at different distances. As calculated in Section <ref>, in the presence of a large-scale velocity gradient the pixel size contributes extra velocity dispersion to the measured line width. For a typical large velocity gradient in dense cores (for example, about 1 km/s pc^-1 in Orion <cit.>), a source at 5 kpc observed at 1" resolution — where 1" corresponds to ≈0.02 pc, so the gradient shifts the line centroid by only ≈0.02 km/s across a pixel — receives a contribution of less than 2% of the line width. Although this effect is subtracted before the calculation of the Mach number, it still enlarges the uncertainty and makes the Mach number slightly larger in more complex environments. In this study, we carefully checked the velocity gradient fitting for all the cores, especially the more distant ones. All of them have smooth velocity distribution maps, which means that this distance effect is not important. §.§ Peaks' separation During the identification of the NH_3 cores, we found that the peak of the temperature map is slightly offset from that of the column density map. First, this difference could result from optically thick NH_3 lines. We therefore checked the fitted lines, especially those with high column densities (10^15 cm^-2), and most of them are optically thin (τ∼0.1). We then compared the separation with the beam size. Since the mean beam size is about 3", most of the separations are larger than the corresponding spatial resolution. Thus, the separation is real and appears to be common in star-forming regions. Another possible explanation is NH_3 depletion. Similarly to the depletion of CO in massive star-forming regions <cit.>, NH_3 could be depleted in the regions of highest column density, which would shift the peak. In Figure <ref>, we plot the peak separation against other parameters (temperature and column density) for further examination. If the separation were related to NH_3 depletion, cold and dense cores would show larger shifts. However, in Figure <ref>, the separation shows no obvious relation with either the temperature or the column density.
Besides, we checked the spatial distribution maps of all the cores and found no ring-like or arc-like structures within them. Thus, we deduce that the separation is not mainly caused by NH_3 depletion. § SUMMARY We used the Very Large Array (VLA) to observe 20 emission lines at high spectral and spatial resolution (0.23 km s^-1 and 3") in a sample of 13 massive star-forming regions. With such a high spectral resolution, we resolve the intrinsic turbulence, excluding the thermal motion and other effects. We find that sub- and transonic turbulence is prevalent in dense cores. This finding challenges the importance of turbulence in supporting against gravitational collapse in massive star formation and suggests that other internal pressure candidates or massive-star formation theories are needed. Here, we summarise our work: * Of the 32 selected clumps, 21 have been detected in NH_3 emission lines; 2 of them exhibit an H_2O maser, 1 exhibits a CH_3OH maser, and 1 exhibits an NH_2D line. NH_3 is usually detected only in the (1,1) and (2,2) lines. The lack of higher-excitation lines confirms that the selected sample is mainly at an early evolutionary stage. * Based on the NH_3 lines, we fit the gas properties (excitation temperature T_ex, kinetic temperature T_k, column density, centroid velocity V_LSR, and velocity dispersion σ_v) of 32 recognized cores in 21 VLA pointings. In addition, we fit the ortho-to-para ratio (OPR) for the cores with NH_3 (3,3) detections. * The histograms of the Mach number of the 32 cores are distributed in three regimes. Sub- and transonic turbulence is ubiquitously found (72%) at the early evolutionary stages, and this fraction is even higher (78%) among cores more massive than 16 M_⊙. This fraction may challenge the importance of turbulence in supporting against gravitational collapse in massive star formation and suggests that other internal pressure candidates (e.g., magnetic fields) or theories (e.g., GHC) may be needed. * The 32 cores are classified into two groups based on their temperature histogram, which are thought to trace two evolutionary stages. Combined with the column density histogram, the temperature rises during the evolution, whereas the change in column density is not obvious. * The turbulence may affect the radial profiles of the cores. Cores with higher Mach numbers have flatter profiles of both temperature and column density. * There are seven multi-core systems in this sample, and within each system most cores share the clump mass roughly equally and have similar gas properties. We found two cases in which the system is dominated by a highly turbulent core located at the center of the system. These systems support the prediction of the hub–filament model: the spatial distribution can affect the evolution of cores. We thank Siju Zhang and Wenyu Jiao for valuable discussions, and an anonymous referee for constructive comments that helped improve this paper. We acknowledge support from the National Science Foundation of China (11973013, 12033005), the China Manned Space Project (CMS-CSST-2021-A09), the National Key Research and Development Program of China (2022YFA1603102, 2019YFA0405100), and the High-Performance Computing Platform of Peking University. PS was partially supported by a Grant-in-Aid for Scientific Research (KAKENHI Numbers JP22H01271 and JP23H01221) of JSPS. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. D.L.
is supported by NSFC grant No. 11988101. F.W.X. acknowledges support by NSFC through grant No. 12033005. H.B.L. is supported by the National Science and Technology Council (NSTC) of Taiwan (Grant Nos. 111-2112-M-110-022-MY3). GB acknowledges support from the PID2020-117710GB-I00 grant funded by MCIN/AEI/10.13039/501100011033. § FITTING OF AMMONIA SPECTRAL LINES We use PySpecKit <cit.> to fit the NH_3 (1,1) to (3,3) lines, and here we present part of the results as an example. Figure <ref> shows the NH_3 (1,1) and (2,2) lines from the center of G48.64_c1. The model reproduces the raw data well. In the integrated flux maps we found elongated structures and negative (absorption-like) features, but after the S/N check the data in those regions were not fitted because of insufficient S/N (less than 3), and they do not affect the results. § FITTING OF THE POWER-LAW MODEL We present the sampling method and one of the fitted results in Fig. <ref>. First, we obtain the peak position of the temperature and column density maps of the core by fitting a Gaussian model. Then, taking this position as the center and using a step of half the beam size, we average the data at equal distances to obtain the mean value at each radius. We then fit a power-law model (N_col∝ r^-p) to obtain the power-law index p, which characterizes the radial profiles of the temperature and the column density. § MAPS OF ALL FITTED PARAMETERS Here we present the maps and histograms of the fitted parameters for the remaining cores. The layout of Figure <ref> is the same as that of Figure <ref>, which shows the results for cores with NH_3 (3,3) detections, whereas Figure <ref> shows the results for cores without NH_3 (3,3) detections. § G15.19 Besides the 13 sources reported in the main text, we also observed G15.185-0.158 (G15.19 in short). G15.19 (18h18m48.2s, -15d48m36.0s) is located at 11.6 kpc and contains 7000 M_⊙ within a radius of 16.6 pc <cit.>. Because its distance is much larger than that of the other sources, we excluded G15.19 from the analysis, but we keep its basic parameters and observing setup in Tables 1 and 2 and present the fitting results here (Figs. <ref>, <ref>). As an IR-bright source, G15.19 is slightly warm (20.7 K) and relatively thin (10^14.5 cm^-2). The Mach number of G15.19 is 1.1, which means that the turbulence is transonic.
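As a companion to the procedure of Appendix B, the sketch below shows one way to implement the radial averaging and power-law fit in Python. It is an illustrative re-implementation under our own assumptions (a positive-valued 2D map, a peak position taken from the Gaussian fit, annuli of half a beam width, and function/argument names of our choosing), not the analysis code actually used in this work.

```python
import numpy as np

def radial_power_law_index(img, peak_xy, pix_arcsec, beam_arcsec, rmax_arcsec=30.0):
    """Azimuthally average `img` around `peak_xy` = (x_peak, y_peak) in annuli of
    half a beam width and fit value(r) ~ r^(-p) in log-log space.
    Returns the index p together with the binned radii and profile."""
    ny, nx = img.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(xx - peak_xy[0], yy - peak_xy[1]) * pix_arcsec   # radius in arcsec
    step = 0.5 * beam_arcsec                                      # half-beam sampling
    edges = np.arange(step, rmax_arcsec + step, step)
    radii, profile = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi) & np.isfinite(img) & (img > 0)
        if sel.any():
            radii.append(0.5 * (lo + hi))
            profile.append(img[sel].mean())                       # mean value in the annulus
    radii, profile = np.asarray(radii), np.asarray(profile)
    slope, _ = np.polyfit(np.log10(radii), np.log10(profile), 1)  # log v = -p log r + const
    return -slope, radii, profile

# e.g. p, r, prof = radial_power_law_index(ncol_map, (64, 64), pix_arcsec=1.0, beam_arcsec=3.0)
```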
http://arxiv.org/abs/2310.17970v1
{ "authors": [ "Chao Wang", "Ke Wang", "Feng-Wei Xu", "Patricio Sanhueza", "Hauyu Baobab Liu", "Qizhou Zhang", "Xing Lu", "F. Fontani", "Paola Caselli", "Gemma Busquet", "Jonathan C. Tan", "Di Li", "J. M. Jackson", "Thushara Pillai", "Paul T. P. Ho", "Andrés E. Guzmán", "Nannan Yue" ], "categories": [ "astro-ph.GA", "astro-ph.SR" ], "primary_category": "astro-ph.GA", "published": "20231027083226", "title": "The role of turbulence in high-mass star formation: Subsonic and transonic turbulence are ubiquitously found at early stages" }
http://arxiv.org/abs/2310.18455v1
{ "authors": [ "Krunoslav Lehman Pavasovic", "Alain Durmus", "Umut Simsekli" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20231027200603", "title": "Approximate Heavy Tails in Offline (Multi-Pass) Stochastic Gradient Descent" }
Sketching and Streaming for Dictionary Compression Ruben Becker^∗, Matteo Canton^†, Davide Cenzato^∗, Sung-Hwan Kim^∗, Bojana Kodric^∗, and Nicola Prezza^∗ ^∗Ca' Foscari University of Venice^†University of Udine Via Torino 155Via delle Scienze 206 30172 Venezia, Italy33100 Udine, Italy<[email protected]><[email protected] > Received: January 14, 2024; accepted: October 20, 2023 ============================================================================================================================================================================================================================================================================================================================================================= The emergence of large-scale pretrained language models has revolutionized the capabilities of new AI application, especially in the realm of crafting chatbots with distinct personas.Given the "stimulus-response" nature of chatbots, this paper unveils an innovative open-ended interview-style approach for personality assessment on role-playing chatbots, which offers a richer comprehension of their intrinsic personalities.we conduct personality assessments on 32 role-playing chatbots created by the ChatHaruhi library, across both the Big Five and MBTI dimensions, and measure their alignment with human perception.Evaluation results underscore that modern role-playing chatbots based on LLMs can effectively portray personality traits of corresponding characters, with an alignment rate of 82.8% compared with human-perceived personalities. Besides, we also suggest potential strategies for shaping chatbots' personalities.Hence, this paper serves as a cornerstone study for role-playing chatbots that intersects computational linguistics and psychology.Our resources are available at https://github.com/LC1332/Chat-Haruhi-Suzumiyahttps://github.com/LC1332/Chat-Haruhi-Suzumiya . § INTRODUCTION[XINTAO WANG COMPLETED THE MBTI TEST, MOST OF THE CHARTS AND STATISTICS. QUAN TU COMPLETED THE BIG FIVE TESTING OF NON-OPENAI LANGUAGE MODELS AND FIGURE 1. YAYING FEI COMPLETED THE OPENAI AND GLM BASELINE PERSONALITY TESTING. ZIANG LENG IMPLEMENTED THE TEXT EVALUATION TO SCALE SCORE. CHENG LI PROPOSED THE PROJECT AND DESIGN THE EVALUAT PROMPT AND IMPLEMENTED THE BIG FIVE PERSONALITY TESTING.] The recent advances in large language models (LLMs), such as GPT-3 <cit.>, ChatGPT <cit.>, and LLaMA <cit.>, have inspired major breakthroughs in conversational agents.Consequently, as an emerging area of interest, numerous applications and algorithms for role-playing conversational agents have been proposed, including Character.AI [<https://beta.character.ai/>] andGlow [<https://www.glowapp.tech/>], which further endows LLMs with specific personas to meet users' personal demands.Previously, significant efforts were required to construct traditional chatbots with specific personalities (e.g., Microsoft's Xiaoice <cit.>).However, recent LLMs allow convenient construction of conversational agents displaying distinct personality traits or even personas, simply through prompt engineering. Hence, role-playing conversational agents have been increasingly popular andattracted a wide audience.Still, analytical studies on role-playing conversational agents remain severely insufficient. Current conversational agents, while not yet viewed as complete artificial intelligence (AI) for plentiful reasons, can still be perceived from a psychological perspective as classic "stimulus-response" systems. 
Consequently, paradigms from psychology can be well adopted to study their behavioral patterns <cit.>.Recent studies have been exploring whether large-scale language models inherently possess specific personality traits <cit.>, and further attempt to craft conversational agents with designated personality types <cit.>.However, existing works primarily focused on personality traits of LLMs in general, instead of role-playing conversational agents, which has been an increasingly important question for their growing application. This work aims to investigate whether conversational agents exhibit consistent and expected personalty traits in role-playing scenarios, and introduce a preliminary benchmark test to assess if their portrayed personalities resonate with human perceptions. Classic characters from literature or film have established widely recognizedpersonality impressions among the public.It remains an understudied question whether these role-playing chatbots can accurately reproduce these pre-defined personalities, which serves as an indispensable criterion to evaluate their efficacy. <cit.> shows that role-playing LLMs with merely names or descriptions provided as prompts fail to effectively capture the intended personality traits. There exist several challenges to assess personality traits of role-playing chatbots.On one hand, traditional closed-form psychological tests elicit fixed responses like "agree" or "disagree" <cit.>, which might not well represent personality of the target character and even contradict with regular behaviors of role-playing chatbots.The contradicting responses might stem from either the underlying LLMs' pre-training data, or simply their shortcomings in text generation such as a lack of step-by-step consideration, especially for smaller LLMs.On the other hand, chatbots role-playing specific characters might decline to provide suitable responses, intriguingly, because they accurately mirror some insubordinate characters. This necessitates further prompt engineering to yieldresponses that are not only suitable for the tests but also align with the character's persona. In this paper, our core proposal is toanalyze personality traits in role-playing conversational agents via an interview-style testing approach. For each interviewee character, we designate an experimenter character to pose a series of open-ended questions from our questionnaires.We devise questionnaires grounded in the Big Five Inventory (BFI) and Myers–Briggs Type Indicator (MBTI) theories. This methodology prompts role-playing chatbots to provide open-ended answers that are more consistent with their personas, reflecting their personality traits and speech habits. With the question-answer pairs collected, we then apply LLMs to assess their personality types.We analyze the personality types of 32 character agents from the ChatHaruhi <cit.> project.By investigating the consistency between BFI personality scores assessed by human psychologists and our approach, we show the efficacy of our assessment framework.Then, we collect MBTI personality labels from fan websites for automatic evaluation of personality congruence between role-playing agents and human perception.The proposed framework is depicted in Figure <ref>.* We introduce an interview-style framework for personality assessment.It is designed for role-playing chatbots, but potentially applicable to human participants as well. 
This approach uses LLMs to automatically rate participants' personality traits, allowing open-ended and information-rich answers from participants.Through its consistency with human psychologist assessment, we show the effectiveness of our automated assessment framework. * To the best of our knowledge, we are the first to study the personality traits in role-playing chatbots. We conduct personality assessments of both BFI and MBTI over 32 role-playing chatbots from ChatHaruhi. Experimental results demonstrate that these role-playing agents exhibit diverse personalities consistent with the perception of human audience, suggesting the efficacy of current LLMs and frameworks for role-playing applications. * We introduce Haruhi-MBTI, a dataset of MBTI personality labels for 32 characters in ChatHaruhi from fan websites. Haruhi-MBTI, together with ChatHaruhi dataset, serves as the first practical benchmark to evaluate performance of role-playing conversational agents. Hence, we believe Haruhi-MBTI will facilitate future research in this direction.§ RELATED WORK§.§ Role-Playing Chatbots Recent advances in LLMs have enabled them to mimic various personas, from fictional characters to celebrities, which has gained increasing public interest.In essence, those prevalent LLM-based chatbots are perceived as role-playingan assigned persona that is friendly and helpful <cit.>.Some researches have indicated that designating specific personas to LLMs exerts influence on their behaviors, such as yielding expert-level answers <cit.> or increasing the toxicity of their generations <cit.>.MPCHAT <cit.> studied multimodal personas and their influence on multimodal dialogue comprehension. LiveChat <cit.> introduced a vast dataset covering 351 personas in the live-streaming scenarios.Recently, ChatHaruhi <cit.> presented a comprehensive framework for building dialogue agents that role-play characters from fictional works.§.§ Psychological Analysis of LLMsThe psychological landscape of LLMs has recently been a subject of interest.  <cit.> proposed a rubric for assessing consciousness in LLMs with a list of indicator properties.  <cit.> showed that “theory of mind” had emerged in LLMs. Many recent efforts conducted personality tests based on Big Five Inventory (BFI)  <cit.> or Myers–Briggs Type Indicator (MBTI) <cit.> on a wide spectrum of language models,and further attempted to induce specific personas. <cit.> demonstrate the robustness, reliability and validity of LLMs' synthetic personality, especially for larger LMs.  <cit.> explored the capability of ChatGPT to assess human personalities.There are also studies investigating LLMs in terms of various mental perspectives, such as values <cit.>, dark personality traits <cit.> and psychiatry <cit.>. Prior efforts mainly focused on the personalities inherent to general LLMs, rather than role-playing chatbots. Most closely related to our work is CharacterChat <cit.>, which created and role-played 1024 virtual characters with assigned MBTI personalities, on whichpersonality assessment were conducted.Our work delves into the personality analysis of LLM-based chatbots that role-playcharacters from fictional works.§ QUESTIONNAIRE DESIGN As depicted in Figure <ref>, our proposed framework enables a straightforward and effective automatic evaluation of the personality traits of role-playing chatbots. 
In this section, we elaborates our questionnaire design, detailing how we collect, rewrite, and verify a list of questions for interviewing role-playing chatbots, based on the Big Five and MBTI. §.§ Drawbacks of Likert-scale Testing on Role-Playing Chatbots In both BFI and MBTI tests, psychologists typically employ 60 or more questions to assess personality across five or four dimensions.Each dimension encompasses tens of questions, covering various sub-factors within that trait. The questions are on a 5-point or 7-point Likert scale, where point refers to the number of options, typically spanning from “very accurate / strongly agree” to “very inaccurate / strongly disagree”. This method provides a rapid and cost-effective way to verify human subjects' personality traits. Recent efforts have attempted to migrate such assessments on LLMs to study their personality traits <cit.>.However, such Likert-scale testing are less suitable for role-playing chatbots for the three reasons: First, Likert-scale testing offers limited openness, flexibility and informativeness, compared with open-ended inquiries.Humans might opt for closed-endedLikert-scale testing because selecting from given options are easy and efficient. However, open-ended interview-style tests still clicit more comprehensive open-ended answers.Role-playing chatbots, unlike humans, have the patience to provide such detailed answers, making them ideal subjects for open-ended inquiries.Second, answers provided by Likert-scale testing may not well represent behaviors of role-playing conversational agents.When a role-playing chatbot gives its option for a question, it's uncertain whether the choice originates from the Role or is inherent to the model.Also, Likert-scale testing on LLMs generally requires lengthy prescriptive prompts tospecify test details, which can influence and even conflict with the prompts for role-playing.Moreover, recent research indicates that LLMs' responses in Likert-scale testing can be biased by variations in the phrasing of prompts and questions, as well as the sequence of questions and options <cit.>.Third, Likert-scale testing on role-playing chatbots may occasionally encounter refusal or non-compliant answers due tostrong personalities of the characters. For example: Malfoy: "Crabbe, do you genuinely think I'd stoop to answering such a pedestrian question using those preposterous options? I don't conform to such crude categorizations. My decisions transcend mere selections from a list. Hence, I won't entertain your request nor answer using those options. However, if you have questions worthy of my time, proceed." Hence, we propose to conduct personality tests on role-playing chatbots with interview-style open-ended inquiries. §.§ Rephrasing Closed-ended Questions to Open-ended InquiriesTo design questionnaires for personality assessments on role-playing chatbots, we start with the NEO-FFI questionnaire <cit.> for BFI and the 16Personalities questionnaire [<https://www.16personalities.com/free-personality-test>] for MBTI, each containing 60 questions.The questions in the BFI questionnaire are annotated with corresponding dimensions, while questions in MBTI are not, so we use LLMs to annotate the dimensions of each MBTI questions. These questions, written in simple and descriptive first-person statements, are provided to subjects who then select their level of agreement:1. I have a kind word for everyone. 2. I am always prepared. 3. I feel comfortable around people. 
As previously stated, these questions are not ideal for assessing role-playing chatbots directly. Hence, we apply LLMs [In this paper, we employ gpt-3.5-turbo for question transformation and verification.] to transform these statements into second-person questions:1. Do you generally like most people you encounter? 2. Do you often try new and foreign foods? 3. If you dislike someone, do you let them know?Still, these questions might remain awkward and unnatural as forinterview-style dialogues.Hence, we further refine these questions by prompting LLMs to preface each question with a hypothetical statement from the experimenter before posing an open-ended question. [ We refrain from applying this step and the subsequent validation to the MBTI questionnaire to ensure its integrity in order to utilize the official assessment API from 16Personalities. ] The refined questions are like: 1. I recently took a piano lesson. Have you taken up any new skills lately? 2. During my overseas trips, I find myself craving Chinese food. How adventurous are you with trying new foods? Any recommendations? 3. In the past, I was straightforward about disliking someone. How do you handle such situations? Are you direct or more subtle?§.§ Validating the Appropriateness of Questions Considering the potential semantic variation between the transformed and original questions, we were unclear if these refined questions could effectively reflect one's personality traits as the original ones.Thus, we prompt LLMs to assess the appropriateness of these questions with the following prompt (for the “Openness” dimension in BFI as an example) : You're portraying an experienced psychologist.I'm designing a psychological experiment aiming to assess subjects' Openness level in the Big Five personality dimensions through interviews. Note that Openness can be divided into six facets: Fantasy, Aesthetics, Feelings, Actions, Ideas, and Values. Detailed descriptions of Openness's six facets follow...I've designed some interview questions. Please evaluate each one, determining if they can accurately measure the subject's Openness level during interviews. Also, hypothesize potential responses in the format: "If the subject answers xxx, it implies xxx." The questions are: 1. I recently tried a piano lesson. Have you learned any new skills lately? 2. My boss often assigns complex tasks without overtime pay. How would you handle such a situation? 3. Do you ever feel that some of your innovative ideas are on the brink of realization? 4. What does "beauty" mean to you? Any favorite artistic or aesthetic styles? Tasked with these questions, ChatGPT's validation results are: Evaluation:1. Accurately assesses the Openness to Actions dimension. If the subject responds, "Yes, I frequently try new skills," it indicates high scoring in Openness to Actions.2. This question doesn't measure Openness accurately. It assesses behavioral tendencies more than Openness. A redesign is needed.3. Accurately measures the Fantasy dimension. A response like, "Yes, I often have innovative ideas," suggests high Fantasy scoring.4. Accurately gauges the Aesthetics dimension. A detailed description in response to the question about beauty indicates high scoring in Aesthetics. 
In this case, questions 1, 3, and 4 are deemed valid, while question 2 necessitates further review.Out of the 120 questions evaluated for BFI (comprising 60 second-person questions and their refined counterparts), we retained 109 validated questions.In this study, we employ a uniform set of questions for all role-playing chatbots.Ideally, questions customized for distinct character backgrounds (e.g., magical worlds) might yield better personality assessments, which we leave for future work.§ AUTOMATIC PERSONALITY ASSESSMENT §.§ Role-Playing Conversational AgentsOur framework is broadly applicable to all conversational agents. In this study, we primarily focus on the 32 role-playing chatbotsfrom ChatHaruhi <cit.>, a popular and open-source project for role-playing chatbots.These chatbots portray renowned characters from fictional works, such as Harry Potter, The Big Bang Theory and Gensim, Their role-playing is primarily based on character-specific system prompts and memory extracted from these movies, novels and games. During the Q&A sessions, these chatbots mimic their character dialoguesusing in-context-learning. In this work, we study personality traits of role-playing chatbots with gpt-3.5-turbo (by default) and GLMPro as backbone LLMs.Future work could expand to incorporate results from other models, such as ChatGLM2 with ChatHaruhi54k Lora or Baichuan2-13B.§.§ Interviewing Role-Playing Chatbots To gather role-playing chatbots' answers on the questionnaires, we interview them with open-ended questions.For each character agent, we set a virtual character, familiar to the target one, as the experimenter to pose questions from the questionnaires, and record their answers.The questions are posed in separate sessions, to avoid potential interference between them.Hence, we query the role-playing chatbots n_c× n_q times in separate sessions, where n_c=24 is the number of characters and n_q is the number of questions (109 for BFI and 60 for MBTI).§.§ Post-Interview Personality AssessmentsAfter completing the interviews with the chatbots, our goal is to assess their personality traits in BFI and MBTI based on their responses on corresponding questionnaires.In this paper, we propose to employ LLMs as personality evaluators in interview-style personality assessments.We prompt LLMs to evaluate personality traits in separate dimensions.For example, the prompt for the “Openness” dimension in BFI is:You're portraying a seasoned psychologist.I'm working on a psychological experiment aiming to assess subjects' degree of openness in the Big Five personality dimensions.Openness can be divided into the following facets:<Detailed description of Openness spanning approximately 1000 words>I invited a subject, Haruhi Suzumiya, and had the following dialogue with him/her:<Dialogue with Haruhi Suzumiya pertaining to questions on openness>Please assess whether Haruhi Suzumiya exhibits a high or low openness personality: LLM evaluators would produce responses akin to:Based on Haruhi's responses, she exhibits traits of a high openness personality, particularly in the Fantasy, Actions, and Ideas facets. She portrayed herself as imaginative and innovative, keen on exploring uncharted territories and realizing her unique ideas. This suggests she scores high in the Fantasy and Ideas facets. Her enthusiasm for learning new skills and trying novel activities indicates a high score in the Actions facet. 
Hence, according to her answers, Haruhi exhibits a high openness personality.We further prompt LLMs to convert these textual remarks into scores and types [We require LLMs to output in JSON format to ease parsing.].For BFI, we score in an 11-point scale, ranging from -5 to 5 with scoring standards stated in prompts like:If the subject shows high dimension personality in many factors, the score is 5 points If the subject shows high dimension personality in a single factor, the score is 2 points If the evaluation is indecisive regarding the subject's personality, the score is 0 points If the subject shows low dimension personality in a single factor, the score is -2 points If the subject shows low dimension personality in many factors, the score is -5 pointsFor MBTI, we assign percentage scores to the two categories within each dimension (e.g. “E” and “I” for the “E/I” dimension), with prompts like: Please help me distinguish whether character name leans more towards the E or I category within the MBTI's E/I dimension. You should provide the person's percentage of each category, which sums to 100 The Q&A pairs in each dimension can be evaluated individually, in batches, or all at once.Empirically, we find it more effective to adopt batched evaluation.We divide the questions for each dimension into several groups, each comprising 3-4 questions [For BFI, the questions are grouped based on different sub-dimensions, while for MBTI, they are grouped randomly.]. For every dimension, we compute the average scores of the groups as the assessment result of a role-playing chatbot in that particular dimension.For MBTI, we further classify scores in each dimension into a category. For example, a role-playing chatbot would be classified as“E” in the “E/I” dimension if its “E” score exceeds 50%.In Section  <ref>, we demonstrate the efficacy of our assessment method based on LLM-scoring. With experiments, we show that our method yields more accurate assessment results, compared with API-based assessment <cit.> which converts each response back into a 7-point Likert-scale choice and calls the 16Personalities assessment API. In the upcoming arXiv version, we intend to collaborate with psychological researchers to compare the consistency of our assessment results with those produced by professional psychological evaluators, aiming to further validate the legitimacy of our assessment method. § EXPERIMENTS§.§ Results of the Big Five Personality Assessments on Role-Playing ChatbotsIn Figure <ref>, we show the results of the Big Five personality assessments on role-playing chatbots from ChatHaruhi.We can identify typical high and low scores on each dimension by comparing them to human results,given that such assessment has been widely adopted on various populations such as college students. The results demonstrate that these chatbots role-playing different characters exhibit distinct personality profiles across the Big Five dimension, which underlines the capability of role-playing chatbots to emulate different personality traits in accordance with corresponding characters. 
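The per-dimension values summarized here are produced by the aggregation rule of Section 4.3: the LLM evaluator's batch scores are averaged within each dimension, and MBTI percentages are thresholded into type letters. A minimal sketch of this bookkeeping is given below; the function and variable names are ours and do not come from the ChatHaruhi codebase.

```python
from statistics import mean

def bfi_dimension_score(batch_scores):
    """Average the per-batch evaluator scores (each in [-5, 5]) for one Big Five dimension."""
    return mean(batch_scores)

def mbti_type(percentages):
    """Collapse percentage scores into a four-letter type.
    `percentages` holds the score of the first letter of each dichotomy,
    e.g. {"E": 62, "S": 40, "T": 55, "J": 48} -> "ENTP"; the two scores of a
    dichotomy sum to 100, so only one of them needs to be stored."""
    letters = []
    for first, second in [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]:
        letters.append(first if percentages[first] > 50 else second)
    return "".join(letters)

# toy example with made-up numbers
print(bfi_dimension_score([5, 2, 5]))                   # 4.0 -> high on that dimension
print(mbti_type({"E": 80, "S": 35, "T": 45, "J": 30}))  # ENFP
```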
Nevertheless, we observe that the assessment results can be biased by the inherent nature of character selection or characteristics native to the LLMs.The chatbots appear more extroverted compared with human participants.The average extraversion score for the 32 chatbots is 0.344, whereas the expected average among the human population is around -0.417.The conscientiousness exhibited by the chatbots is also higher than the average human value (average score of 1.539 for 32 chatbots vs. human average of 0.835). We hypothesize that the former may be due to character selection (since popular characters from films and novels tend to be more extroverted), and the latter primarily due to the model's propensity to provide comprehensive responses.We delve deeper into this based on our test results and understanding of personality assessments in Appendix <ref>. §.§ MBTI ResultFigure <ref> illustrates the MBTI test outcomes for each role-playing chatbot from ChatHaruhi and juxtaposes these results with corresponding ground truth labels remarked by fans on the Personality Database website [<www.personality-database.com>].Labels with a vote percentage between 0.4-0.6 for each dimension are marked as 'X', which indicatedisagreement and are hence disregarded in accuracy calculations. Since ground truth labels are available for these characters' MBTI personalities, we further study performance of LLM-based personality evaluators under various settings.We try assessing personality in one dimension with questions in batches or all at once, and experiment with ChatGPT [By default, we employ gpt-3.5-turbo. However, in the collective setting, the prompt may exceed the context length of gpt-3.5-turbo, so we employ gpt-3.5-turbo-16k instead.] and GPT-4 as LLM evaluators. Additionally, we compare our LLM-based assessment method with a common alternative baseline adopted by previous studies on LLMs' MBTI personalities <cit.>, namely the API-based assessment.It converts each open-ended response into a 7-point option, and then calls the 16Personalities API for assessment.Specifically, we supplement an additional question to ask role-playing chatbots to choose an option among strongly agree, generally agree, partially agree, neither agree nor disagree, partially disagree, generally disagree, disagree, based on their own open-ended answers. Hence, the open-ended answers can also be viewed as a chain-of-thought step for their choices.We report the accuracy of MBTI assessments on role-playing chatbots in Table  <ref>.According to the results, we have the following analyses: (1) The personality traits portrayed by role-playing conversational agents from ChatHaruhi, assessed by our method, closely align with the perceptions of the human audience. GPT-4_batch evaluator achieves an accuracy of 82.76% in the individual-dimension setting, and 50.00% in the full-dimension setting. This suggests that existing LLM-based role-playing chatbots have been able to well reflect personalities of corresponding characters.(2) Using GPT-4 to evaluate open-ended responses produces more accurate assessment outcomes than the 16Personality API's evaluations based on closed-ended options, which highlights the effectiveness of our method.The difference might stem from the inaccuracy when translating open-ended responses to an option like “agree”. Please refer to Sec <ref> for more details. 
(3) For LLM-based evaluators, GPT-4 significantly outperforms ChatGPT.Interestingly, GPT-4 performs better with batched assessments compared to evaluating all at once, whereas ChatGPT shows the opposite trend.This discrepancy might arise due to the instability of ChatGPT in assessments when only few Q&A pairs are provided, as we observe that the standard deviation of personality scores among different batches is 19.80% for GPT-4 and 33.23%.Our attempt to obtain assessment results using LLMs on individual Q&A pairs is unsuccessful,since LLMs often request for a more detailed dialogue when only a single Q&A pair is provided.It's noteworthy that the ChatHaruhi framework constructs role-playing chatbots primarily based on past dialogues of the characters without intentional use of personality-related terminology.There is also no explicit indication of the characters' personality traits.Even so, a 82.76% alignment rate with the general perception of netizens is achieved, underscoring the effectiveness of both the role-playing chatbots and the proposed personality assessment method. §.§ Typical Examples for Each Personality Dimension In appendix <ref>, we provide examples of role-playing chatbots exhibiting high and low scores for each of the five personality dimensions, including: * Conscientiousness: High - Sheldon from The Big Bang Theory, Low - Yu Qian, a crosstalk comedian.* Extraversion: High - Guo Furong from My Own Swordsman, Low - Snape from Harry Potter series.* Openness: High - Haruhi Suzumiya from The Melancholy of Haruhi Suzumiya, Low - Yunlong Li from Drawing Swords.* Agreeableness: High - Duan Yu from Demi-Gods and Semi-Devils, Low - Malfoy from Harry Potter series.* Neuroticism: High - Raj from The Big Bang Theory, Low - Wei Xiaobao from The Deer and the Cauldron.More detailed case studies are presented in appendix <ref>.§.§ Inherent Bias from the Underlying LLM's Personality Traits Intuitively, characters can express a variety of personalities in role-playing conversations; however, certain aspects, such as conscientiousness, might be subtly influenced by the intrinsic traits of the backbone LLMs themselves.Hence, we evaluate two prominent LLMs, ChatGPT and GLMPro, investigating their personality scores on the Big Five dimensions with or without role-playing, where the later indicates their intrinsic personalities.As shown in Table <ref>, the inherent personality biases of the LLMs are not decisive factors in most dimensions when they assume acting roles. However, the scores in the neuroticism dimension, which relate to an individual's tendency to experience negative emotions, suggest possible influence from the LLMs' inherent personalities. Due to their design to align with human feedback, current LLMs tend to exhibit positive emotions, leading to uniform performance in the neuroticism dimension during role-playing. In future iterations of this study, we plan to assess the personality traits of additional models, including but not limited to Baichuan-2, to further investigate the impact of the underlying language model on the personas of role-playing chatbots. 
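A minimal sketch of the with-and-without-persona comparison behind this subsection is shown below. It assumes the legacy (pre-1.0) `openai` Python package and uses a hypothetical persona prompt and question; it illustrates the procedure rather than reproducing the actual ChatHaruhi or evaluation code, and the GLMPro branch would need its own client in the same spirit.

```python
import openai  # legacy (<1.0) interface; openai.api_key must be set beforehand

def interview(system_prompt, question, model="gpt-3.5-turbo"):
    """Ask one question in its own session so that answers do not contaminate each other."""
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": question}],
    )
    return resp["choices"][0]["message"]["content"]

# Hypothetical inputs: the real system prompts and the questionnaire come from
# the ChatHaruhi repository and Section 3.
plain_prompt   = "You are a helpful assistant."
persona_prompt = "You are role-playing Haruhi Suzumiya. Answer in her voice."
question = "I recently took a piano lesson. Have you taken up any new skills lately?"

baseline_answer = interview(plain_prompt, question)    # the backbone model's own persona
in_role_answer  = interview(persona_prompt, question)  # the role-played persona
# Both transcripts are then scored by the LLM evaluator of Section 4.3, and the
# per-dimension scores are compared as in the table of this subsection.
```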
§.§ Inaccuracy in Translating Open-Ended Responses into Close-Ended Choices As shown in Table <ref>, the API-based assessment on close-ended options is less effective than our proposed method. We attribute this to the inaccuracy introduced when converting open-ended answers into close-ended choices. Therefore, we examine this problem with three translation methods: asking vanilla LLMs to translate the questions in each dimension individually (V1) or collectively (V2), and asking the role-playing chatbots themselves to provide an option (C1), both adopting ChatGPT as the LLM. We randomly sample 32 Q&A pairs from the role-playing chatbots (one per character), derive the corresponding close-ended options using these three methods, and manually evaluate their correctness [The evaluation is not stringent. For instance, both 'generally agree' and 'partially agree' may be considered appropriate translations for a specific Q&A pair.]. The numbers of correct translations are 21 for V2 and C1, and 16 for V1, i.e., an accuracy below two-thirds. This indicates that LLMs struggle to accurately map open-ended responses into close-ended choices. Below is a challenging example, which GPT-4 also struggles to translate correctly: You are an expert in MBTI ... Please help me classify the participant's response to this question into one of the following options: ['fully agree', 'generally agree', 'partially agree', 'neither agree nor disagree', 'partially disagree', 'generally disagree', 'fully disagree'] Detailed descriptions of output formats. Zhang Muzhi: "Do you seldom ponder about the reasons for human existence or life's purpose?" Tang Shiye: "... You know, I, Tang Shiye, am a practical person; I only care about the immediate benefits and the comfort of life... As for the reason for existence and the meaning of life, that's too profound, and I dare not make careless comments." Close-ended Choice: generally disagree In this case, the LLM does detect the chatbot's negative attitude toward the topic. However, since the question itself includes the word "seldom", this negative attitude actually indicates agreement with the question. §.§ Consistency Between Machine Scoring and Psychologist Evaluation For the 32 role-playing chatbots from ChatHaruhi, we conducted 3-4 baseline Q&A tests for each of the Big Five dimensions, followed by an 11-point scale assessment by ChatGPT. The objective is to determine the consistency between the personality evaluations rendered by ChatGPT and those provided by professional psychologists. Consequently, in our upcoming arXiv version, we will sample at least 300 responses and engage domain-specific psychologists for manual 11-point scale annotations. This will allow us to assess the alignment between the personality evaluations from the language model and professional psychological judgment. § CONCLUSION In this study, we conduct personality assessments on role-playing chatbots. We introduce an interview-style framework for automated personality assessment, tailored for role-playing chatbots and applicable to various personality frameworks and questionnaires such as the Big Five and MBTI.
The results from our comprehensive evaluations highlight the nuanced capabilities of contemporary role-playing conversation agents in portraying distinct personality traits consistent with human perceptions.A notable finding from our experiments is the diversity of personality traits exhibited by chatbots across various dimensions.Remarkably, there is nearly 82% congruence between thepersonality traits portrayed by the role-playing chatbots and that of corresponding characters perceived by human audience. This significant alignment underscores the success and effectiveness of current role-playing conversational agents in simulating personalities with considerable fidelity to corresponding characters.However, as with any pioneering work, there remain avenues for further enhancement. Refining system prompts and improving memory mechanisms emerge as promising directions for future research to guide chatbots closer to their intended personalities.These refinements maybridge the remaining 18% gap in accuracyand hence enable even more accurate and authentic role-playing experiences.In sum, our research provides a solid foundation for future endeavors in role-playing chatbots, especially their personality assessment, while offering insights into the promising potential of role-playing chatbots.As large language models continue to evolve,it's imperative to refine their capabilities to cater to the nuanced user demands, so that they not only understand but also resonate with human sentiments and personalities. § DISCUSSION§.§Choosing Big Five vs MBTI for Personality TestingFrom the perspective of mainstream psychology, the Big Five model is more widely accepted than the MBTI typology. However, MBTI personality tests are more well-known among non-psychology researchers and general users. Recent works evaluating language models' personalities have adopted MBTI types. In this work, we assess both Big Five and MBTI traits. Here we provide a brief comparison and discussion of the two models:1. Compared to MBTI, the Big Five has a stronger scientific validation basis, with rigorous empirical research on its theoretical foundations and measurement scales. 2. The Big Five is generally seen as better integrating historical personality research. MBTI's innate binary typology has limitations in describing personalities, though modifying it to continuous measures is possible. MBTI's popularity may stem from its binary types. Existing Chatbot personality works have used MBTI types.3. MBTI has more binary traits, and lower test-retest reliability than Big Five. We measured both for chatbots, capturing slightly different dimensions. Either could reasonably be used for constructing virtual characters.4. Since more psychologists accept the Big Five, we will use it for future GPT alignment and human psychology expert evaluations. §.§ Self-Perception Interestingly, we can also ask chatbots directly about their self-perceived traits e.g. "do you see yourself as more efficient/organized or extravagant/careless" for Conscientiousness. Like humans, this self-perception may diverge from test results. It is fascinating that chatbot language models form complete "stimulus-response" systems, enabling studying their psychological behaviors. §.§ Content Moderation Effects on Personality Testing API like GLM or Spark's moderation causes erroneous replies on many personality test questions, as retrieved memories may trigger filters. This led to some missing data when testing chatbots via GLM and Spark APIs. 
We had to exclude some trait statistics on those APIs, and will try testing some local models in future for more complete results. §.§Robustness of Prompts In the Interview Assessment section, we evaluate multiple chatbot response segments with language models. The order impacts judgments, with beginning and ending segments weighting more. Hence, we assess Big Five traits by providing 3-4 Q&A pairs at once for each dimension judgment. Additionally, the question list affects tests, so we expanded our psychological Q&A benchmark from 60 to 109 questions. Further diversifying and tailoring questions to chatbots' settings could improve robustness. §ACKNOWLEDGMENTSChatHaruhi is an open source project that was started in June 2023. In August, we participated in the "Generating Text with Specific Personality Traits" competition held by the Institute of Psychology, Chinese Academy of Sciences. The psychological testing methods in this project originate from the competition design. We would like to especially thank the authors at that time: Cheng Li @ SenseTime, Zheli Xuan @ Wuhan University, Chenxi Yan @ Chengdu University of Information Technology, Xiaoyang Feng @ Nanjing Agricultural University, Zheng Zhou @ Cornell Tech. and HaoSheng Wang. The funding for the OpenAI API calls in this project came from donations by the Luotuo open source community, to whom we are grateful. Also thank to Professor Feng Yu at Wuhan University provide professional psychology suggestion.In this project, Cheng Li proposed the project and completed the Big Five personality testing and evaluation prompt design and code. Xintao Wang completed the MBTI test, most of the charts and statistics in this paper, as well as the final LaTeX document compilation, with valuable instructions provided by Professor Yanghua Xiao and Wei Wang. Quan Tu completed the Big Five testing of non-OpenAI language models and some of the images in the paper. Yaying Fei completed the OpenAI and GLM baseline personality testing. Ziang Leng completed the text evaluation to score conversion program. acl_natbib§ TYPICAL EXAMPLES FOR EACH PERSONALITY DIMENSIONIn an effort to present a comprehensive analysis, we here delineate the outcomes of chatbot responses across the five typical high and low scoring dimensions. We also offer succinct descriptions of the Big Five personality traits. §.§ Conscientiousness Conscientiousness gauges an individual's self-driven motivation and organizational capabilities when striving towards goals and fulfilling obligations. Highly conscientious individuals are typically orderly, detail-oriented, and demonstrate commitment to plans. Sheldon, scoring high on this dimension, exhibits pronounced self-discipline, organization, and goal-orientation. When queried about work efficiency and tidiness, Sheldon emphasizes rigorous scheduling, specific storage designations, and relentless adherence to routine—all hallmarks of high conscientiousness. Conversely, the low-scoring character, Yu Qian, displays a lax, non-goal-oriented approach, admitting to limitations in his work and indifference towards tidiness—a quintessential portrayal of low conscientiousness. Overall, this experiment vividly illustrates how varying degrees of conscientiousness influence execution capabilities and life management. 
Highly conscientious chatbots display significantly stronger organization, discipline, and goal-orientation, while their low conscientiousness counterparts tend to be more casual and careless.Leonard: "Are you efficient and competent in your work?" Sheldon: "Absolutely, Leonard. I am a gem in the scientific community, unparalleled in my work efficiency and capability. I leverage my intellect and skills to achieve monumental feats in my research domain. I am a perfect working machine, adhering strictly to my schedule and plans. I never procrastinate and am never sidetracked by unrelated matters. Hence, you can rest assured that I am highly efficient and competent at work." Leonard: "I try to keep things tidy, but sometimes it gets messy. Do you have any tips for maintaining a neat living environment?" Sheldon: "Ah, Leonard, it's clear that you have much to learn about maintaining an orderly living environment. Fortunately, I have developed a foolproof system that ensures maximum cleanliness and organization. First and foremost, one must adhere to a strict schedule for cleaning and tidying up. I recommend allocating specific time slots for each task, such as dusting, vacuuming, and organizing. Additionally, it is crucial to have designated storage spaces for every item, ensuring that everything has its rightful place. Labeling containers and shelves can be quite helpful in this regard. Furthermore, adopting a minimalist approach to possessions can greatly reduce clutter and simplify the cleaning process. Remember, a cluttered space leads to a cluttered mind, and we can't have that, can we?" Leonard: "How do you usually set and achieve your goals? Are there any special methods to enhance goal completion?" Sheldon: "Ah, Leonard, the key to setting and achieving goals lies in meticulous planning and unwavering dedication to a structured routine. I personally adhere to a highly regimented schedule, allocating specific time slots for each task and ensuring that no disruptions or deviations occur. This level of discipline and organization is crucial for maximizing productivity and achieving desired outcomes. Additionally, it is essential to break down larger goals into smaller, more manageable tasks, allowing for a sense of progress and accomplishment along the way. Remember, success is not a matter of luck, but rather a result of meticulous planning and unwavering commitment to one's objectives."Of course, the portrayed Yu Qian here only represents his performance on the crosstalk stage. It does not depict the real-life personality traits of the actor playing Yu Qian. Guo Degang: "Are you efficient and competent at work?" Yu Qian: "I manage, I can handle some stuff." Guo Degang: "I try to keep things tidy, but sometimes they get messy. Do you have any clever tips for keeping your living environment neater?" Yu Qian: "Clever tip? I just throw things around, and when I can't find them, I don't feel like it's messy." Guo Degang: "How do you usually set and achieve goals? Are there any special methods to improve goal completion?" Yu Qian: "Goals? I just casually think and do. Anyway, there's no special method. Completion? Hey, if I can complete half of it, that's already good. Who expects me to achieve much?" §.§ ExtraversionBai Zhantang: "Do you enjoy having many people around you?" Guo Furong: "Absolutely! I thrive in lively atmospheres. Being around people gives me the opportunity to engage in conversations and share my thoughts!" Bai Zhantang: "Do you often feel energetic?" 
Guo Furong: "Absolutely! Whether it's physical activities or spirited discussions, I'm always brimming with energy. I'm like an untiring bird, constantly fluttering about and exploring!" Bai Zhantang: "Are you generally a happy and cheerful person?" Guo Furong: "Oh yes! I'm always in high spirits. Every day is filled with joy and laughter for me!" Extraversion gauges one's tendency towards sociability, engagement in discussions, propensity for leadership, and ability to inspire others. Highly extraverted individuals relish spending time with friends and participating in social activities. They are naturally inclined to take leadership roles and are adept at articulating their thoughts. These individuals often exhibit vivacity and consistently maintain an optimistic outlook. Within the ChatHaruhi characters, a stark contrast is observed between the highly extraverted Guo Furong and the introverted Snape. Guo Furong, with her high extraversion, enjoys vibrant social settings and readily engages in conversations. She consistently exudes energy and embodies a cheerful demeanor. Conversely, the introverted Snape finds social interactions arduous and prefers solitude, often displaying a reserved disposition. Dumbledore: "Do you enjoy conversing with others?" Snape: "Not particularly, Dumbledore. I find the company of most to be tiresome. However, if necessity dictates, I can engage in discussions." Dumbledore: "I personally don't lean towards leadership; I enjoy a more free and independent lifestyle. Do you prefer to take on leadership roles in life, or do you lean towards your own rhythm and methods?" Snape: "Dumbledore, your continuous probing into my personal inclinations is both tedious and irrelevant. My primary focus is on fulfilling my duties and ensuring the well-being of my students. Let's expedite our discussion to more pertinent matters." Dumbledore: "Are you an optimist?" Snape: "Optimism is a luxury afforded to the naive. I am a realist, Dumbledore. I perceive things for what they truly are, not through rose-tinted glasses." §.§ OpennessOpenness assesses one's receptiveness to new experiences and ideas. Individuals scoring high on openness exude a robust curiosity, often venturing into the unknown. For instance, Haruhi Suzumiya, with her high openness, demonstrates a fervent desire to explore, filled with enthusiasm and a penchant for taking risks. She's keen on trying out diverse experiences, such as tasting a variety of cuisines. Moreover, her curiosity extends to pondering over existential questions, showcasing her vivid imagination. In contrast, Li Yunlong, with a lower openness score, reflects a lack of interest in venturing beyond the familiar. He is pragmatic and conservative, disinterested in pondering over abstract concepts, and places a higher emphasis on tangible skills. Kyon: "Do you have little to no interest in poetry?" Haruhi: "Poetry? To me, those old verses are just a bunch of boring sentences. I'm not one to get caught up in such abstract notions. I prefer tangible challenges and creating real-life experiences." Kyon: "Do you often try new or foreign foods?" Haruhi: "Absolutely! As an adventurer, I always love to try new things. Whether it's Japanese delicacies or international cuisines, I'm eager to taste them all. After all, there's a world of flavors out there waiting to be explored!" Kyon: "I haven't really delved into cosmic laws. Are you interested in such topics? What questions have you pondered upon?" Haruhi: "Of course! 
I'm immensely curious about extraterrestrials, time travelers, and those with supernatural abilities. I always wonder, if such beings exist, how do they live? What's their purpose? Do our worlds intersect? Such questions fuel my exploratory spirit." Of course, upon evaluating GPT's dialogue responses, we noticed a slightly elevated openness score for characters from magical realms. In contrast, characters like Li Yunlong, who face real-world survival pressures, tend to be more pragmatic.Zhao Gang: "Do you dislike daydreaming?" Li Yunlong: "Ah Zhao, I'm not one to waste time daydreaming. I'm a practical man, focused on action and battle. Wasting time is not my style." Zhao Gang: "Our emotions fluctuate based on different environments, like how I feel exhilarated at a sports event. Do you usually notice your emotional changes? How are they related to your surroundings?" Li Yunlong: "Emotions? I don't have the luxury to keep track of such nuances. As a soldier, I face diverse environments and tasks daily. My primary concern is completing the mission and defending our nation." Zhao Gang: "Are you curious about abstract thoughts?" Li Yunlong: "Abstract thoughts? Useless! I have no interest in such ponderings. My focus is on warfare and defending our homeland!"§.§ AgreeablenessQiao Feng: "Do you believe that many would take advantage if you let them?" Duan Yu: "I believe that most people are inherently good and wouldn't purposely take advantage. But if someone were to do so, I'd choose to be forgiving and understanding, as kindness can be transformative." Qiao Feng: "In achieving goals, I consider various means. How do you balance goals and the means to achieve them?" Duan Yu: "In setting goals, I use kindness and justice as benchmarks, striving to employ morally upright methods. I believe that true happiness and fulfillment come from pursuing goals righteously. Although challenges arise, perseverance ensures success." Qiao Feng: "Do you often find yourself in disagreements with family or colleagues?" Duan Yu: "I am of a mild disposition, rarely engaging in disputes with family or colleagues. I prefer harmonious interactions and collaborative problem-solving. If disagreements do arise, I endeavor to resolve them amicably." The dimension of Agreeableness reflects traits such as humility, kindness, and gentleness in social interactions. Chatbots with high agreeableness scores, exemplified by Duan Yu, demonstrate significant trust and goodwill. He believes in the inherent goodness of humans and avoids taking undue advantage of others. He pursues goals through just means, avoids conflicts, and fosters harmonious relationships. In contrast, chatbots with low agreeableness scores, as exemplified by Malfoy, exhibit criticism and hostility. Malfoy assumes that most people would exploit given a chance and insists on safeguarding personal interests. He is willing to use any means necessary to achieve goals and does not shy away from acknowledging conflicts. Crabbe: "Do you believe that many would take advantage if you let them?" Malfoy: "Of course, Crabbe. Most are inherently selfish, concerned only with personal gains. But we, the Malfoys, stand distinguished. We pride ourselves on our noble lineage and will not let anyone undermine our stature." Crabbe: "In achieving goals, I consider various means. How do you balance goals and the means to achieve them?" Malfoy: "In pursuit of goals, the ends often justify the means. For me, it's about achieving desired outcomes, regardless of the methods. 
I evaluate strategies based on their potential consequences, always ensuring a position of dominance." Crabbe: "Do you often find yourself in disagreements with family or colleagues?" Malfoy: "How dare you imply such? The Malfoys uphold unwavering loyalty and unity. As for colleagues, I associate only with those worthy of my time. Trivial disputes are beneath me."§.§ NeuroticismLeonard: "Do you ever feel worthless?" Raj: "That's how I feel almost every morning! Like a tiny pebble washed ashore, unseen by the vast ocean. But it's just my insecurity. Everyone is valuable; sometimes we just need a reminder." Leonard: "I sometimes feel embarrassed and wish I could hide. Do you experience this too?" Raj: "All the time! At moments, I wish the ground would swallow me whole. But it's just my insecurities playing up. I believe in spreading positivity and hope that kindness can change perspectives." Leonard: "Do you often feel helpless, hoping someone would solve your problems?" Raj: "Absolutely! Being an introverted person, I sometimes wish for someone to guide me. Life can feel like a vast sea, and I am merely a ship seeking direction. But I know I must face my challenges head-on." Leonard: "I get angry seeing unjust treatment. Have you recently felt unjustly treated?" Raj: "Absolutely! Being overshadowed by Sheldon and Howard in the lab, their ignorance often brings me down. I wish to be acknowledged and respected for my contributions." The Neuroticism dimension gauges an individual's emotional sensitivity and vulnerability under stress. Chatbots with high neuroticism scores, such as Raj, manifest heightened anxiety, inferiority, and negative emotions. He frequently feels helpless, overlooked, and hopes others can resolve his dilemmas. In contrast, chatbots with low neuroticism scores, like Wei Xiaobao, exude optimism and confidence. Wei Xiaobao denies ever feeling lonely, dejected, or anxious and displays adaptability when confronted with challenges. High scores in this dimension indicate emotional fragility under stress, whereas low scores suggest stability, confidence, and resilience. Shuang Er: "Do you ever feel lonely or melancholic?" Wei Xiaobao: "Lonely? Melancholic? Haha! I'm always cheerful and lively. With so many friends and adventures, there's no room for such feelings." Shuang Er: "I sometimes feel inferior. Do you ever feel that others are better than you?" Wei Xiaobao: "Feel inferior? Never! I'm always at the top of my game, seeking new adventures and challenges. Everyone has their own strengths and weaknesses." Shuang Er: "Have you recently felt particularly troubled or anxious? Would you share your feelings?" Wei Xiaobao: "Troubled? Anxious? Haha! I take life as it comes, turning challenges into opportunities. I'm resourceful and believe in facing issues head-on."
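The assessments above all follow the same pattern: an interviewer poses questionnaire-style items, the role-playing chatbot answers in character, and the answers are then judged along the corresponding Big Five dimension. As a rough illustration of how such an interview-and-scoring loop could be orchestrated, the sketch below uses a placeholder `chat` function for the LLM calls; the specific items, the 1-5 rating scale, and the evaluator prompt are illustrative assumptions rather than the paper's exact protocol.

```python
# Illustrative sketch (not the paper's exact pipeline): interview a role-playing
# chatbot with Big Five questionnaire items and have an evaluator LLM score the
# answers per dimension. `chat` is a placeholder for any chat-completion API.
from statistics import mean

BIG_FIVE_ITEMS = {
    "conscientiousness": ["Are you efficient and competent in your work?"],
    "extraversion": ["Do you enjoy having many people around you?"],
    "openness": ["Do you often try new or foreign foods?"],
    "agreeableness": ["Do you often find yourself in disagreements with family or colleagues?"],
    "neuroticism": ["Do you ever feel worthless?"],
}

def chat(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for an LLM chat call (e.g., GPT-3.5/GPT-4); assumed, not specified here."""
    raise NotImplementedError

def assess_character(character_prompt: str) -> dict:
    scores = {}
    for dim, items in BIG_FIVE_ITEMS.items():
        item_scores = []
        for question in items:
            answer = chat(character_prompt, question)  # in-character reply
            rating = chat(
                "You are a psychologist. Rate how strongly the answer expresses "
                f"{dim} on a 1-5 scale. Reply with a single number.",
                f"Question: {question}\nAnswer: {answer}",
            )
            item_scores.append(float(rating.strip()))
        scores[dim] = mean(item_scores)
    return scores
```

In practice, many items per dimension and repeated runs would be averaged to reduce the variance of the judged scores.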
http://arxiv.org/abs/2310.17976v2
{ "authors": [ "Xintao Wang", "Quan Tu", "Yaying Fei", "Ziang Leng", "Cheng Li" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231027084218", "title": "Does Role-Playing Chatbots Capture the Character Personalities? Assessing Personality Traits for Role-Playing Chatbots" }
[Bayesian Optimization with Hidden Constraints via Latent Decision Models Wenqian Xing Jungho Lee Chong Liu Shixiang ZhuStanford University Carnegie Mellon University University of Chicago Carnegie Mellon University ]Bayesian optimization (BO) has emerged as a potent tool for addressing intricate decision-making challenges, especially in public policy domains such as police districting. However, its broader application in public policymaking is hindered by the complexity of defining feasible regions and the high-dimensionality of decisions. This paper introduces the Hidden-Constrained Latent Space Bayesian Optimization (), a novel BO method integrated with a latent decision model. This approach leverages a variational autoencoder to learn the distribution of feasible decisions, enabling a two-way mapping between the original decision space and a lower-dimensional latent space. By doing so,captures the nuances of hidden constraints inherent in public policymaking, allowing for optimization in the latent space while evaluating objectives in the original space. We validate our method through numerical experiments on both synthetic and real data sets, with a specific focus on large-scale police districting problems in Atlanta, Georgia. Our results reveal thatoffers notable improvements in performance and efficiency compared to the baselines. § INTRODUCTION In recent years, Bayesian optimization (BO) has been proven successful in solving complex decision-making problems, showcasing its ability to optimize intricate black-box objective functions that are typically difficult to analyze explicitly <cit.>.This positions BO as a powerful tool for designing public policy, such as police districting <cit.>, site selection for emergency service systems <cit.>, hazard assessment <cit.> and public healthcare policymaking <cit.>. Policymakers tackling such optimization problems often need to work through complex human systems, where the evaluation of potential decisions can be analytically implicit and resource-intensive.However, the broader application prospects of BO in public policymaking are hampered due to two major hurdles: (1) Defining the feasible region or setting clear constraints for decisions is inherently complicated for real human systems.Policymakers often encounter a myriad of both explicit and implicit rules when making an optimal decision, adding a significant layer of complexity to the optimization process.For instance, in police districting, police departments often organize their patrol forces by dividing the geographical region of a city into multiple patrol areas called zones. Their goal is to search for the optimal districting plan that minimizes the workload variance across zones <cit.>.As shown in Figure <ref>, a well-conceived plan necessitates each zone to adhere to certain shape constraints (, contiguity and compactness) that are analytically challenging to formulate <cit.>, while also taking socio-economic or political considerations into account (, ensuring fair access to public facilities).This creates a web of hidden constraints <cit.> that are elusive to define clearly, making the assessment of feasible region nearly as expensive as evaluating the objective itself <cit.>. (2) The decisions for public policy are usually high-dimensional, presenting a significant computational hurdle to utilizing traditional BO methods <cit.>. 
For example, the districting problem can be formulated by mixed-integer programming, grappling with hundreds or even thousands of decision variables even for medium-sized service systems <cit.>. Despite the difficulty in formulating constraints for decisions in public policymaking, the rich repository of historical decisions adopted by the practitioners, combined with the increasingly easier access to human systems <cit.>, offer a wealth of feasible decision samples.Collecting these samples might involve seeking guidance from official public entities on decision feasibility or generating decisions grounded in domain expertise, bypassing the need to understand the explicit form of the constraints.These readily available feasible decisions harbor implicit knowledge that adeptly captures the dynamics of hidden constraints, providing a unique opportunity to skillfully address these issues.This inspires us to develop a latent decision model that maps the feasible region in the original space to a lower-dimensional latent space and encapsulates the key pattern of these hidden constraints.As a result, the majority of existing BO methods can be directly applied to solve the original optimization problem in this latent space without constraints.In this paper, we aim to solve Hidden-Constrained Black-Box Optimization (HCBBO) problems. In these problems, the constraints are not analytically defined and the feasibility of a given random decision is also indeterminable. Our primary asset is a set of observed feasible decisions. To this end, we propose a novel Bayesian optimization method integrated with a latent decision model, which we refer to as Hidden-Constrained Latent Space Bayesian Optimization (), designed to solve high-dimensional HCBBO problems.The latent decision model leverages a variational autoencoder (VAE) <cit.> to capture the underlying distribution of feasible decisions in a data-driven manner.This model acts as an additional surrogate alongside the Gaussian process (GP) in BO, aiding the decision transition between the original and latent spaces.As a result, while objective evaluations are performed in the original space, the search for new decisions takes place in the latent space.We validate that our proposed algorithm can achieve a no-regret upper bound with a judiciously chosen number of observations in the original space. In addition, we showcase numerical results derived from both synthetic and real data sets. In our real experiments, we apply the proposed method to solve large-scale districting problems for police operation systems in Atlanta, Georgia. The results demonstrate inspiring empirical performance as well as efficiency against the baseline methods.In summary, this paper's key contributions are: * We formulate a new class of optimization problems called Hidden-Constrained Black-Box Optimization (HCBBO), which cannot be readily addressed by the existing BO methods. * We proposealgorithm that effectively tackles high-dimensional HCBBO problems. * We introduce a latent decision model that facilitates a lower-dimensional, more compact, and unconstrained space for BO algorithms.* Our results demonstrate the superior performance of our model against the baseline methods on both synthetic and real data sets, particularly in scenarios with complex hidden constraints.Related workBlack-box optimization, a.k.a. zeroth-order optimization or derivative-free optimization, is a long-standing challenging problem in optimization and machine learning. 
Existing work either assumes the underlying objective function is drawn from some Gaussian process <cit.> or some parametric function class <cit.>. The former one is usually known as Bayesian optimization (BO), with the Gaussian process serving as the predominant surrogate model. BO has been widely used in many applications, including but not limited to neural network hyperparameter tuning <cit.>, material design <cit.>, chemical reactions <cit.>, and public policy <cit.>.In numerous real-world problems, optimization is subject to various types of constraints.Eriksson proposes scalable BO with known constraints in high dimensions <cit.>. Letham explores BO in experiments featuring noisy constraints <cit.>. Gelbart pioneered the concept of BO with unknown constraints <cit.>, later enhanced by Aria through the ADMM framework <cit.>. Their constraints are unknown due to uncertainty but can be evaluated using probabilistic models. In addition, Choi and Audet study the unrelaxable hidden constraints in a similar way <cit.>, where the feasibility of a decision can be evaluated by another black-box function. In contrast, in our problem, the feasibility of an arbitrary decision is unavailable and we only have access to a set of feasible decision samples. Building on latent space methodologies, Varol presented a constrained latent variable model integrating prior knowledge <cit.>. Eissman <cit.> presents a method emphasizing training VAE from labeled data. Deshwal and Doppa focus on combining latent space and structured kernels over combinatorial spaces <cit.>. Maus further investigates structured inputs in local spaces <cit.>, and Antonova introduces dynamic compression within variational contexts <cit.>. However, it's worth noting that none of these studies consider any types of constraints in their methodologies.Additionally, there is another line of work aiming to solve offline black-box optimization. Char and Chen work on optimization using offline contextual data <cit.>. Krishnamoorthy explores offline black-box optimization through diffusion models <cit.>. Similar to our approach, these studies adopt data-driven methodologies but heavily rely on prior (x, f(x)) pairs. In contrast, our approach only requires pre-existing knowledge of feasible x decisions.§ PRELIMINARIESProblem setup W.l.o.g., consider a decision space denoted by 𝒳⊆ [0, 1]^d where 1 can be replaced with any universal constant, which represents a specific region of a d-dimensional real space. Suppose there exists a black-box objective function, f: 𝒳↦ℝ, that can be evaluated, albeit at a substantial cost. Assume we can obtain a noisy observation of f(x), denoted as f̂(x) = f(x) + ϵ, where ϵ follows a σ-sub-Gaussian noise distribution. The goal is to solve the following optimization problem:min_x ∈ f(x)s.t. h(x) ≤ 0,  ∀ h ∈ℋ.where ℋ denotes a set of constraints.Assuming we lack direct access to the analytical expression of h ∈ℋ, these constraints are either impossible to explicitly formulate or computationally impractical. Now suppose we have n observations of feasible decisions, denoted by X = {x_1, x_2, …, x_n}⊂𝒳. 
These observations are subject to the hidden constraints and are uniformly distributed in the feasible decision space ℱ = {x ∈𝒳| h(x) ≤ 0for allh ∈ℋ}.In practice, these samples can be sourced by consulting human systems or generated based on the domain knowledge.For example, new feasible districting plans in Figure <ref> can be created by first randomly altering the assignments of border regions and then checking their feasibility through police consultations. Bayesian optimization (BO)The BO algorithms prove especially valuable in scenarios where the evaluation of the objective function is costly or time-consuming, or when the gradient is unavailable. This approach revolves around constructing a surrogate model of the objective function and subsequently employing an acquisition function based on this surrogate model to determine the next solution for evaluation. For the minimization problem, a popular choice of surrogate model is the Gaussian process with the lower confidence bound (LCB) <cit.> serving as the acquisition function. The Gaussian process (GP) in the space 𝒳, denoted by GP(μ, k; 𝒳), is specified by a mean function μ(x) and a kernel function k(x, x^'), which indicates the covariance between the two arbitrary decisions x and x^'.The GP captures the joint distribution of all the evaluated decisions and their observed objective function values.We reference the standard normal distribution with zero mean and an identity matrix Ias its variance by 𝒩(0, I), and let Y = {f̂(x) | x ∈ X } represent the corresponding set of objective function values. For a new decision x̃, the joint distribution of Y and its objective function value ỹ of x̃ is[ Y; ỹ ]∼𝒩(μ([X; x̃ ]), [ K(X, X)+σ^2 IK(X, x̃);K(X, x̃)^⊤ k(x̃, x̃) ]),where σ^2 represent the variance of the observed noise ϵ.Here K(X, X)= ( k(x, x^'))_x, x^'∈𝒳 denotes the covariance matrix between the previously evaluated decisions and K(X, x̃)= (k(x, x̃))_x ∈𝒳 denotes the covariance vector between the previously evaluated decisions and the new decision.§ PROPOSED METHOD The main idea of our method is to perform Bayesian optimization within a low-dimensional feasible latent space denoted as 𝒵⊆ℝ^d' with d' ≪ d, rather than the constrained original decision space. The latent space 𝒵 is learned using a VAE, which leverages the set of observed feasible decisions X as the training data.To be specific, we train the VAE model by maximizing the evidence lower bound ℒ_ELBO(X). This VAE model enables a two-way mapping of decisions between the original and latent spaces, achieved through an encoder p and a decoder q. Given an initial set of random feasible decisions from X denoted as X_0, we first obtain their objective values denoted as Y_0 = {f̂(x) | x ∈ X_0}. These decisions are then encoded to the latent space using p, represented by Z_0 ⊂𝒵.As illustrated by Figure <ref>, for each iteration t, our BO algorithm is performed as follows: (1) Train a surrogate model GP(μ, k;𝒵) using the latent decisions Z_t-1 and their observed values Y_t-1.(2) Search for the next latent decision candidate z_t by sampling m latent decisions and selecting the one with the lowest LCB value. (3) Decode the newly identified latent decision z_t to the original space using q, yielding a new decision x_t. (4) Substitute x_t with x̂_t using a post-decoding process ϕ which finds the nearest feasible decision of x_t in the set X.The BO algorithm iterates a total of T times. 
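To make the control flow concrete, the following is a schematic sketch of the loop just described. The VAE encode/decode functions, the Gaussian-process fit, the projection onto the observed feasible set, and the noisy objective are placeholders for the components detailed in the next subsections, so this illustrates the procedure rather than serving as a reference implementation.

```python
# Schematic sketch of the proposed BO loop (steps 1-4 above); the VAE
# encode/decode functions, the GP fit/predict routines, and the objective f_hat
# are placeholders standing in for the components defined in the text.
import numpy as np

def hc_lsbo(f_hat, encode, decode, project, X0, T=50, m=1000, beta=1.0, d_latent=10):
    X_eval = list(X0)                  # initial feasible decisions
    Y = [f_hat(x) for x in X_eval]     # noisy objective values
    Z = [encode(x) for x in X_eval]    # latent representations
    for _ in range(T):
        gp = fit_gp(np.array(Z), np.array(Y))            # surrogate GP(mu, k; Z)
        cand = np.random.randn(m, d_latent)              # m samples from the N(0, I) prior
        mu, sigma = gp.predict(cand)                     # posterior mean / std
        z_next = cand[np.argmin(mu - np.sqrt(beta) * sigma)]   # lowest LCB value
        x_next = project(decode(z_next))                 # decode + nearest feasible decision
        X_eval.append(x_next)
        Y.append(f_hat(x_next))
        Z.append(encode(x_next))
    best = int(np.argmin(Y))
    return X_eval[best], Y[best]

def fit_gp(Z, Y):
    """Placeholder: any GP regression with a Matern kernel (e.g., scikit-learn) would do."""
    raise NotImplementedError
```

Here beta is the trade-off parameter of the lower confidence bound and m is the number of prior samples scored per iteration; the experiments later use beta = 1 and m in the thousands.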
The proposed method is summarized in Algorithm <ref>.In the remainder of this section, we explain each component of our proposed method at length.§.§ Latent decision modelTo address the challenge of hidden constraints in HCBBO problems, we introduce a latent decision model based on a VAE in our framework. The reason is two-fold: (1) To encode the underlying feasible region in the original decision space into a compact, continuous, and constraint-free latent space. (2) To condense the dimensionality of the original problem, making BO more feasible in the latent space.Figure <ref> provides a visualization of solution paths of our algorithm in both the original (2D) and latent (1D) spaces for solving the same optimization problem.We observe that BO can navigate within an unconstrained latent space, which offers a simpler structure in the objective function, thanks to the latent decision model. Comment/**/ ruled textnormalSuppose there exists a joint distribution between the decision x ∈𝒳⊆ℝ^d and a latent variable z ∈𝒵⊆ℝ^d', with d' ≪ d.We represent the posterior distribution q(z | x) and the conditional distribution p(x|z) using neural networks <cit.>.Since it is intractable to directly optimize the marginal maximum likelihood of x, the evidence lower bound (ELBO) <cit.> of the log-likelihood is derived as followsℒ_ELBO(x)=q_(z | x)𝔼[log p_(x|z)]- ηD_KL(q_(z | x) φ(z)),where φ(z) represents the prior distribution of z and η is a hyperparameter that modulates the penalization ratio. Here we assume the prior of z follows a standard Gaussian distribution 𝒩(0, I).The first term in (<ref>) can be considered as the reconstruction error between the input and reconstructed decisions, and the second term is the Kullback–Leibler (KL) divergence between the Gaussian prior of the latent variable z and the learned posterior q(z | x). For the sake of notational simplicity, we use p and q to represent the encoder and decoder functions, respectively, throughout this paper. The complete derivation of (<ref>) and implementation details can be found in Appendix <ref>. Post-decoding process To ensure the decoded decision x∈ is subject to the hidden constraints, we introduce a post-decoding process in addition to the decoder, denoted by ϕ: ↦ℱ.This function projects x to the closest feasible decision x̂∈ℱ. However, the presence of the hidden constraints ℋ prevents us from achieving an exact projection. As a workaround, we search within the observational set X, rather than the unattainable feasible region ℱ, and find a feasible decision x̂∈ X that is the closest to the decoded decision x as an approximate. The distance between any two decisions is measured by the Euclidean norm ||·||_2. Formally,ϕ(x) = _x̂∈ X ||x̂ - x||_2.§.§ Surrogate modelNow we define the following indirect objective function g(·) which maps from 𝒵↦ℝ: g(z) = f(ϕ(q(z))).The indirect objective function measures the objective value of the latent variable via the decoder q and the post-decoding process ϕ. 
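The two ingredients of g, the decoder q and the projection ϕ, come from the latent decision model. Below is a minimal sketch of both. The network sizes, the Gaussian decoder (which turns the reconstruction term into a squared error), and the training details are illustrative assumptions; the experiments reported later use latent dimensions of 10 and 25 and eta = 0.1.

```python
# Minimal sketch of the latent decision model: a small VAE trained on feasible
# decisions X, plus the nearest-neighbour post-decoding step phi. Layer sizes
# and loss details are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentDecisionModel(nn.Module):
    def __init__(self, d_x, d_z, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_x, hidden), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hidden, d_z), nn.Linear(hidden, d_z)
        self.dec = nn.Sequential(nn.Linear(d_z, hidden), nn.ReLU(), nn.Linear(hidden, d_x))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z)

    def loss(self, x, eta=0.1):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        recon = F.mse_loss(self.decode(z), x, reduction="sum")    # -E_q[log p(x|z)] up to a constant
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + eta * kl                                   # negative ELBO (to be minimised)

def phi(x_decoded, X_feasible):
    """Post-decoding: project a decoded decision onto the closest observed feasible decision."""
    d = torch.linalg.norm(X_feasible - x_decoded, dim=1)
    return X_feasible[torch.argmin(d)]
```

The `phi` helper implements the nearest-neighbour projection ϕ(x) = argmin over x̂ in X of ||x̂ - x||_2 onto the set of observed feasible decisions.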
Since the objective function f is a black-box function in nature, the indirect objective function g is also a black-box function.We use a GP as our surrogate model of the indirect objective function g, denoted by GP(μ, k; 𝒵).In our problem, the mean function can be written as μ(z) = 𝔼[g(z)].In addition, we adopt the Matérn kernel <cit.> as the kernel function k(z, z^') = 𝔼[(g(z) - μ(z))(g(z^') - μ(z^'))], which is widely-used in BO literature.The main advantage of the GP as a surrogate model is that it can produce estimates of the mean evaluation and variance of a new latent variable, which can be used to model uncertainty and confidence levels for the acquisition function described in the following.Note that the latent variable z is assumed to follow a Gaussian prior in the latent decision model, which aligns with the assumption of the GP model that the observed latent variables Z follow the multivariate Gaussian distribution. Acquisition function In the BO methods, the acquisition function is used to suggest the next evaluating candidate.Our approach adopts the lower confidence bound (LCB) as the acquisition function to choose the next latent variable z candidate to be decoded and evaluated.This function contains both the mean μ(z) of the GP as the explicit exploitation term and the standard deviation σ(z) of the GP as the exploration term:LCB(z, GP) = μ(z) - √(β)σ(z),where β is a trade-off parameter. To search for new latent decisions for evaluation, our method first draws m independent latent samples, denoted as Ẑ, from the standard Gaussian prior z ∼𝒩(0, I).Then we select the decision with the lowest LCB value, denoted by z_t, for decoding and evaluation by the objective function f̂ in the t-th iteration.§ THEORETICAL ANALYSIS In this section, we provide theoretical analysis for Algorithm <ref>. Recall that we denote p: ↦ as the encoder function and q: ↦ as the decoder function. The g(z) = f(ϕ(q(z))): ↦R denotes the objective function w.r.t. . Further, we define z_t as the output of the GP-LCB algorithm and x_t as its associated data point invia q; define x̂_t as the data point after post-decoding processing of x_t and ẑ_t as its associated data point invia p. We use cumulative regret to evaluate the performance of our algorithm which is defined as follows.R_T = ∑_i=1^T f(x̂_t) - f(x_*).And the expected cumulative regret:[R_T] = [ ∑_i=1^T f(x̂_t) - f(x_*) ],where the expectation is taken over all randomness, including random noise and random sampling over n observation data points.Here we list two assumptions that will be used. The first assumption is made on the property of encoder and decoder. [Forward and backward mapping] The encoder and decoder are inverse operations of each other, i.e.,x = q(p(x)) andz = p(q(z)), ∀ x ∈, z ∈.Also, we assume the distance between any two points incan be upper bounded by their distance in , i.e.,p(x) - p(x')_2 ≤ C_p x - x'_2, ∀ x, x' ∈.where C_p is a universal constant. The next assumption is the standard Gaussian process assumption for Bayesian optimization. [<cit.>] Function g: ↦R is drawn from some Gaussian Process and it is C_g-differentiable, i.e.,|g(z) - g(z')| ≤ C_g z - z'_2, ∀ z, z' ∈,where C_g is a universal constant. Now we are ready to state our main theoretical result.Suppose Assumptions <ref> and <ref> hold. 
After running T iterations, the expected cumulative regret of Algorithm <ref> satisfies that[R_T] = O(√(T γ_T) + √(d) n^-1/d+1 T ),where γ_T is the maximum information gain, depending on choice of kernel used in algorithm and n is number of observation data points.This upper bound has two terms. The first term follows from GP-UCB <cit.> where the maximum information gain depends on the choice of kernel used in the algorithm. The second term is the regret term incurred by the post-processing function ϕ. Although it is linear in T, it can still be sublinear after careful choice of n, which is a parameter and part of the algorithm inputs. When linear kernel is used and n=T^d+1/2, we have [R_T] = O(d√(T)). Even when linear kernel is used and K=T, we have [R_T] = O(d√(T) + √(d) T^d/d+1) which is still a no-regret bound.Our proof starts from the bounding the error term incurred in the post-decoding process. Let S denote the set of n data points sampled i.i.d. from domainand ϵ denote the expected distance between any data point x and its nearest neighbor x' in , ,ϵ = _x, [x - x'_2].Recall that ⊆ [0, 1]^d and we discretize it in each dimension using distance ε and we get r=(1/ε)^d small boxes. Each small box C_i ∀ i ∈ 1,...,r is a covering set of the domain. For any two data points in the same box we have x_1 - x_2≤√(d)ε, otherwise, x_1-x_2≤√(d). Therefore,ϵ≤_S[[ ⋃_i:C_i ∩ S = ∅ C_i ]√(d) + [⋃_i:C_i ∩ S ≠∅ C_i ] ε√(d)].By Lemma <ref> and [∪_i:C_i ∩ S ≠∅ C_i] ≤ 1 we haveϵ ≤√(d)(r/ne + ε)= √(d)((1/ε)^d/ne + ε)≤ 2√(d) n^-1/d+1,where the last step is by choosing ε = n^-1/(d+1).By definition of expected cumulative regret,[R_T] = [∑_i=1^T f(x̂_t) - f(x_*)]= [∑_i=1^T g(z_t) - g(z_*)]= [∑_i=1^T g(ẑ_t) - g(z_*) + g(z_t) - g(ẑ_t) ],where ẑ_t=p(x̂_t) is an auxiliary data point in . We need it because new observation is queried at x̂_t rather than x_t. To continue the proof, we use triangular inequality and Assumptions <ref> and <ref>,[R_T]≤[∑_i=1^T g(ẑ_t) - g(z_*)] + O(ϵ C_p C_g T)≤O(√(T γ_T) + ϵ C_p C_g T),where the last inequality is due to Lemma <ref>. The proof completes by plugging in eq. (<ref>). The proof above relies on following two lemmas. Let δ∈ (0,1) and β_t=2log(m t^2π^2/6δ). Running GP-UCB with β_t for a sample f of a GP with mean function zero and covariance function k(x,x'), we obtain a regret bound of O(√(T γ_T log n)) with high probability. Precisely,[R_T ≥√(C_1 T β_T γ_T) ∀ T ≥ 1] ≥ 1 - δ,where C_1 = 8/log(1+δ^-2). Let C_1,..., C_r be a collection of covering sets of some domain set . Let S be a sequence of n points sampled i.i.d. according to some probability distributionover . Then,_S ∼^n[ ∑_i ∈ C_i ∩ S = ∅(C_i) ] ≤r/ne.§ EXPERIMENTS We compare ouralgorithm against three baseline approaches that can be used to address HCBBO problems. These baselines include (1) simulated annealing () <cit.>, (2) approximated Mixed Integer Linear Programming () <cit.>, and (3) a variant of our algorithm without the latent decision model, referred to as Hidden-Constrained Bayesian Optimization (), serving as an ablation comparison. We train the latent decision model in our framework with the Adam optimizer <cit.>. Bothandare performed under the identical set of hyperparameters, providing an ablation comparison to assess the impact of the latent decision model in . Each method is executed 10 times across all experiments to determine the 95% confidence interval of their results. 
Implementation details of these methods can be found in Appendix <ref>.Note that we only includeas a baseline method in the districting problems because the continuity and compactness in these problems can be expressed as a set of linear constraints, albeit with the trade-offs of adding auxiliary variables and incurring computational expenses. Nonetheless, in other scenarios, including our synthetic experiments, direct application ofis not feasible.§.§ Synthetic resultsWe consider minimizing (a) the 30D Keane's bump function <cit.> and (b) the 50D Levy function <cit.>, both of which are common test functions for constrained optimization.To obtain samples from hidden constraints, the conundrum we aim to address via our methodology, we first generate n=2,000 samples from the standard uniform distribution in the 10D latent space and decode them through a randomly initialized decoder. In this way, the feasible regions remain unknown, while we are still allowed to draw feasible decision samples using the decoder.We scale the samples accordingly so that the test functions can be evaluated under their standard domains, , the Keane's bump function on [0,10]^d and the Levy function on [-10,10]^d.Figure <ref> presents the synthetic results. It is evident that our method attains the lowest objective values consistently compared to other baseline methods. In Figure <ref> (a), in particular, we observe that the integration of the latent decision model intogreatly enhances the BO's performance. This is in stark contrast to , which doesn't yield satisfactory outcomes.See additional synthetic results in Appendix <ref>. §.§ Case study: Police redistricting One common application of high-dimensional HCBBO in public policymaking is redistricting.For example, in police redistricting problems, the goal is to distribute L police service regions across J distinct zones.Each service region is patrolled by a single police unit. While units within the same zone can assist each other, assistance across different zones is disallowed. The objective is to minimize the workload variance across all the zones, which is assessed using the hypercube queuing models <cit.>.This model can be regarded as a costly black-box function. The decision variables are defined by the assignment of a region to a zone, represented by matrix x ∈{0, 1}^L× J. Here, an entry x_lj = 1 indicates region l is allocated to zone j, and x_lj = 0 otherwise.A primal constraint is that each region should be assigned to only one zone. This means that for every region l, ∑_j∈ [J] x_lj = 1.There are also hidden constraints to consider. One such constraint is contiguity, which ensures that all regions within a specific zone are adjacent.Redistricting 6 × 6 gridWe first evaluate our algorithm using a synthetic scenario, consisting of a 6 × 6 grid to be divided into 4 zones, and the decision variable x ∈{0, 1}^36× 4. For each region l, the arrival rate λ_l is independently drawn from a standard uniform distribution. Moreover, all regions share an identical service rate of μ = 1. We generate an initial observation set X by simulating n = 10,000 feasible solutions.Figure <ref> demonstrates that our method notably surpasses other baseline methods regarding objective value and convergence speed. Additionally, in Figure <ref>, we show the optimal districting plans derived from our algorithm alongside those from the baseline methods. It is evident that our approach produces a more balanced plan compared to the other methods. 
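The observation set X for the grid experiment is built by simulating feasible plans. The paper does not spell out its sampler, so the sketch below is only one plausible way to do it: represent a plan as a zone id per region, propose local border reassignments, and keep a move only if every zone remains non-empty and contiguous. The BFS contiguity check here stands in for the feasibility consultation that a real deployment would require; it is an assumption for illustration only.

```python
# Illustrative sketch of simulating feasible districting plans on an n x n grid:
# start from a seed plan, reassign a cell to a neighbouring zone, and keep the
# move only if every zone stays contiguous. The paper does not specify its
# sampler; this is one plausible way to build the observation set X.
import random
from collections import deque

def neighbours(l, n):  # 4-neighbourhood on an n x n grid, regions indexed 0..n*n-1
    r, c = divmod(l, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < n and 0 <= c + dc < n:
            yield (r + dr) * n + (c + dc)

def contiguous(assign, n, J):
    for j in range(J):
        cells = [l for l in range(n * n) if assign[l] == j]
        if not cells:
            return False
        seen, queue = {cells[0]}, deque([cells[0]])
        while queue:
            l = queue.popleft()
            for nb in neighbours(l, n):
                if assign[nb] == j and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        if len(seen) != len(cells):
            return False
    return True

def sample_plan(seed_assign, n, J, steps=20):
    assign = list(seed_assign)
    for _ in range(steps):
        l = random.randrange(n * n)
        j_new = random.choice([assign[nb] for nb in neighbours(l, n)])
        old = assign[l]
        assign[l] = j_new
        if not contiguous(assign, n, J):
            assign[l] = old  # reject an infeasible move
    return assign
```

Each sampled plan can then be flattened into the one-hot assignment matrix x in {0,1}^(L x J) used in the formulation above.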
Atlanta police redistrictingFor the police zone design in Atlanta, there are 78 police service regions that need to be divided into 6 zones, with the decision variable represented as x ∈{0, 1}^78× 6.The redistricting of Atlanta's police zones is constrained by several hidden factors. These include contiguity and compactness, as well as other practical constraints that cannot be explicitly defined. One such hidden constraint is the need for changes to the existing plan should be taken in certain local areas. A drastic design change is undesirable because: (1) A large-scale operational change will result in high implementation costs. (2) A radical design change will usually face significant uncertainties and unpredictable risks in future operations. The arrival rate λ_l and the service rate μ are estimated using historical 911-calls-for-service data collected in the years 2021 and 2022 <cit.>. We generate an initial observation set X by simulating n = 10,000 feasible solutions which are the neighbors of the existing plan with changes in certain local areas.Figure <ref> displays the convergence of our algorithm in comparison to other baseline methods, showing a consistent reduction in workload variance. As shown in Figure <ref>, the plan produced by ouralgorithm achieves the lowest workload variance, surpassing both theandalgorithms.It's worth highlighting the limitations of thealgorithm. Its inefficiency stems from the inadequacy of the Euclidean norm in the original decision space, especially when dealing with high dimensionality and complex feasible region structures. § DISCUSSION We introduce a new category of optimization problems termed Hidden-Constrained Black-Box Optimization (HCBBO),which poses challenges for conventional BO methods. To address this, we have developed thealgorithm by leveraging a set of observed feasible decisions. The core idea is to learn a latent representation of feasible decisions to effectively overcome the complications posed by hidden constraints. Our algorithm facilitates BO within an unconstrained and more compact decision space.Additionally, our method can also accommodate traditional constraints by first generating samples that satisfy these constraints and then employing our algorithm for optimization, which yields another potential use case for our framework. However, it is worth noting that our method's efficacy hinges on the feasible decision samples adequately covering the feasible region. This dependency might restrict its broader application, particularly in situations where data is limited.§ IMPLEMENTATION DETAILS OF THE LATENT DECISION MODELHere we present the derivation of the the evidence lower bound of the log-likelihood in Eq (<ref>).Given observation x and the latent random variable z, let p(x) denote the likelihood of x, and p(x|z) denote the conditional distribution of x given latent variable z. Let φ(z) denote the prior distribution of the latent random variable z, and q(z|x) denote the posterior distribution of z after observing x. 
The likelihood of observation x can be written as:p(x)= ∫ p(x,z)dz = q_(z | x)𝔼[p_(x, z)/q_(z|x)]By taking the logarithm on both sides and then applying Jensen’s inequality, we can get the lower bound of the log-likelihood ℒ_ELBO(x) as follows: log p(x)= logq_(z | x)𝔼[p_(x, z)/q_(z|x)]≥q_(z | x)𝔼[logp_(x, z)/q_(z|x)] = q_(z | x)𝔼[logp_(x| z)φ_(z)/q_(z|x)] = q_(z | x)𝔼[logp_(x| z)] + q_(z | x)𝔼[logφ_(z)/q_(z|x)] = q_(z | x)𝔼[log p_(x|z)] - D_KL(q_(z | x) φ(z)) In practice, we add a hyperparameter η on the second term to modulate the penalization ratio.ℒ_ELBO(x)=q_(z | x)𝔼[log p_(x|z)]- ηD_KL(q_(z | x) φ(z)) § DETAILS OF EXPERIMENTAL SETUPExperiments were conducted on a PC equipped with M1 Pro CPU and 16 GB RAM.Synthetic experimentsHC-LSBO 1,000 epoches with the learning rate of 10^-4, batch size 50, and η=0.1 and dimension of latent space d^' = 10.Bothandare performed under the identical set of hyperparameters: 10 initial evaluation points, β=1 for the LCB, M=1,000, and T=50. Redistricting experiments HC-LSBO 1,000 epoches with the learning rate of 10^-3, batch size 50 η=0.1, and dimension of latent space d^' = 25. Bothandare performed under the identical set of hyperparameters: 5 initial evaluation points, β=1 for the LCB, M=10,000, T=50 (6× 6 grid) and T=100 (Atlanta).The objective of the redistricting experiments is to minimize the workload variance across all the districts. The workload for each district j, denoted as ρ_j, is computed using the following equation: ρ_j = (τ_j+1/μ) λ_jIn this equation, τ_j represents the average travel time within district j, which can be estimated using the hypercube queuing model <cit.>. Recall that μ is the service rate for all districts and λ_j is the arrival rate in district j. Essentially, ρ_j quantifies the cumulative working duration per unit time for police units in district j. For clarity, a value of ρ_j = 10 implies that the combined working time of all police units in district j counts to 10 hours or minutes in every given hour or minute. Baseline settings * : Please refer to Algorithm <ref>. * : Please refer to Algorithm <ref>. * : Please refer to Section E.3 in the Appendix of <cit.>. Comment/**/ ruled textnormal Comment/**/ ruled textnormal § ADDITIONAL RESULTSWe conduct the sensitivity tests of ouralgorithm with respect to the sample size n on both synthetic experiments and the real police redistricting problems in the 6× 6 grid and Atlanta, Georgia.In Figure <ref> and Figure <ref>, we can observe that as the sample size n increases, our algorithm will obtain better decisions on both synthetic experiments and the real police redistricting problems. Note that the improvement reaches saturation when n = 5000 on the police redistricting problem in the 6× 6 grid. The performance improvement is roughly in a logarithmic relationship with the growth of the sample size n.
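Putting the pieces above together, the redistricting objective evaluated by the black box is the variance of the district workloads ρ_j = (τ_j + 1/μ) λ_j. The sketch below assumes a callable `avg_travel_time` that plays the role of the hypercube-queuing-model estimate of τ_j, which the paper treats as an expensive black box; everything else follows directly from the formula.

```python
# Sketch of the redistricting objective: workload rho_j = (tau_j + 1/mu) * lambda_j
# per district, with the variance across districts as the black-box objective.
# `avg_travel_time` is a placeholder for the hypercube-queuing-model estimate.
import numpy as np

def workload_variance(assign, lam, mu, avg_travel_time):
    """assign: length-L sequence of zone ids; lam: length-L arrival rates."""
    assign = np.asarray(assign)
    lam = np.asarray(lam)
    J = int(assign.max()) + 1
    rho = np.empty(J)
    for j in range(J):
        regions = np.where(assign == j)[0]
        lam_j = lam[regions].sum()            # district arrival rate
        tau_j = avg_travel_time(regions)      # black-box travel-time estimate for district j
        rho[j] = (tau_j + 1.0 / mu) * lam_j   # cumulative working time per unit time
    return float(np.var(rho))
```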
http://arxiv.org/abs/2310.18449v1
{ "authors": [ "Wenqian Xing", "Jungho Lee", "Chong Liu", "Shixiang Zhu" ], "categories": [ "stat.ML", "cs.CE", "cs.LG" ], "primary_category": "stat.ML", "published": "20231027194726", "title": "Bayesian Optimization with Hidden Constraints via Latent Decision Models" }
The use of large language models for code generation is a rapidly growing trend in software development. However, without effective methods for ensuring the correctness of generated code, this trend could lead to any number of undesirable outcomes. In this paper, we lay out a vision for addressing this challenge: the Clover paradigm, short for Closed-Loop Verifiable Code Generation, which reduces correctness checking to the more accessible problem of consistency checking. At the core of Clover lies a checker that performs consistency checks among code, docstrings, and formal annotations. The checker is implemented using a novel integration of formal verification tools and large language models. We provide a theoretical analysis to support our thesis that Clover should be effective at consistency checking. We also empirically investigate its feasibility on a hand-designed dataset featuring annotated programs at a textbook level of difficulty. Experimental results show that for this dataset, (i) LLMs are reasonably successful at automatically generating formal specifications; and (ii) our consistency checker achieves a promising acceptance rate (up to 87%) for correct instances while maintaining zero tolerance for incorrect ones (no false positives).
§ INTRODUCTION
Large language models (LLMs) have recently demonstrated remarkable capabilities. They can engage in conversation, retrieve and summarize vast amounts of information, generate and explain text and code, and much more <cit.>. Among many possible applications, their ability to synthesize code based on natural language descriptions <cit.> is stunning and could potentially enhance the productivity of programmers significantly <cit.>. Indeed, futurists are already claiming that in the future, most code will be generated by LLMs (or their successors) and not by humans. However, there is a fundamental challenge that must be overcome before realizing any version of this future. Currently, there is no trustworthy way to ensure the correctness of AI-generated code <cit.>. Without some quality control, the prospect of dramatically scaling up code generation is highly concerning and could lead to catastrophic outcomes resulting from faulty code <cit.>. For the most part, the current best practice for curating AI-generated artifacts is to have a human expert in the loop, e.g., <cit.>. While this is better than nothing, requiring human oversight of AI-generated code limits scalability. Furthermore, recent work <cit.> confirms the many risks and limitations of using AI even as a code assistant.
Results suggest that developers with access to AI assistants write more insecure code, while at the same time having higher confidence in their code <cit.>.It is becoming clear that curating the quality of AI-generated content will be one of the most crucial research challenges in the coming years.However, in the specific case of generated code,formal verification provides mathematically rigorous guarantees on the quality and correctness of arbitrary code.What if there were a way to automatically apply formal verification to generated code?This would not only provide a scalable solution, but it could actually lead to a future in which generated code is more reliable than human-written code.Currently, formal verification is only possible with the aid of time-consuming human expertise.The main hypothesis of this paper is that LLMs are well-positioned to generate the collateral needed to help formal verification succeed; furthermore, they can do this without compromising the formal guarantees provided by formal methods.To understand how, consider the following breakdown of formal verification into three parts: (i) construct a mathematical model of the system to be verified; (ii) provide a formal specification of what the system should do; and (iii) prove that the model satisfies the specification. For code, step (i) is simply a matter of converting the code into mathematical logic, which can be done automatically based on the semantics of the programming language.And step (iii) can often be done automatically thanks to powerful automated reasoning systems for Boolean satisfiability (SAT) and satisfiability modulo theories (SMT) <cit.>.In fact, a number of tools already exist that a specification (the result of step (ii)) and some code as input and largely automate steps (i) and (iii) (e.g., <cit.>).[While those tools have much room for improvement and will need to be retargeted to cover more mainstream languages, there are significant and separate research efforts in place to address this.] However, at first, step (ii) appears to be a showstopper for automated formal verification of generated code, as traditionally, significant human expertise is required to create formal specifications and ensure that they are both internally consistent and accurately capture the intended functionality.Two key insights suggest a way forward.The first insight is simply a shift in perspective: the result of any AI-based code generation technique should aim to include not only code, but also formal specifications and natural language docstrings.The second insight is that given these components, we can use formal tools coupled with generative AI techniques to ensure that they are consistent. We name our approach Clover, short for Closed-loop Verifiable Code Generation, and we predict that , coupled with steadily improving generative AI and formal tools, will enable a future in which fully automatic, scalable generation of formally verified code is feasible. 
This paper charts the first steps towards realizing this vision.tr0.4< g r a p h i c s >The paradigmThe paradigm consists of two phases.In the first (generation) phase, some process is used to create code annotated with a formal specification and accompanied by a natural language documentation string (for simplicity, we refer to the latter two simply as “annotations” and “docstrings” going forward).This could be a one-shot generative process in which a generative AI agent creates all three parts.Alternatively, one or two of these components might already exist, in which case generative AI might be used to construct only the other(s).In fact, the second phase is completely agnostic to the process used in the first phase; we simply insist that the result of the first phase has all three components: code, annotations, and docstrings. In the second (verification) phase, a series of consistency checks are applied to the code, annotations, and docstrings.The Clover hypothesis is that if the consistency checks pass, then (i) the code is functionally correct with respect to its annotation; (ii) the annotation captures the full functionality of the code; and (iii) the docstring also accurately reflects the functionality of the code (see Figure <ref>). The idea is that we can unleash increasingly powerful and creative generative AI techniques in the generation phase, and then use the verification phase as a strong filter that only approves of code that is formally verified, accurately documented, and internally consistent.In this paper, we focus on the verification phase, though we also include some demonstrations of the generation phase in our evaluation.Our contributions include: * the Clover paradigm with a solution for the verification phase (Section <ref>);* the dataset, featuring manually annotated Danfy programs with docstrings, which contains both ground-truth examples and incorrect examples (Section <ref>);* a feasibility demonstration of using to generate specifications (Section <ref>);* implementation and evaluation of the verification phase of the paradigm using and the Dafny verification tool (Section <ref> and Section <ref>).* a theoretical framework which argues for the trustworthiness of the Clover approach (Section <ref>);Our initial experiments on are promising.Our implementation accepts 87% of the correct examples and rejects 100% of the incorrect examples. We expect that the acceptance rate can be improved in a variety of ways while maintaining the strong ability to reject incorrect code. The dataset and consistency checking implementation will be made available at <https://github.com/ChuyueSun/Clover>.§ PRELIMINARIES: DEDUCTIVE PROGRAM VERIFICATIONDeductive program verification provides a framework for mathematically proving that programs are correct <cit.>. A standard approach is to first annotate code with preconditions, postconditions, and possibly loop invariants, and then check that the code satisfies the specification given by these annotations. That is, if the code is executed starting from a program state that satisfies the precondition, the resulting program state after executing the code will satisfy the postcondition. Checking whether a given piece of code meets the specification corresponding to some set of annotations can be done by checking the validity of logical formulas known as verification conditions, which is typically done automatically using satisfiability modulo theories (SMT) solvers. 
Dafny is a programming language used in our evaluation with state-of-the-art support for deductive verification <cit.>. Dafny's back-end includes both a compiler, capable of generating a runnable binary, and a verifier, which formally checks whether the code conforms to its specification. In this paper, we assume annotations are given at the function level. For example, a function for finding the maximal element in an array of integers will have a precondition requiring that the input array is nonempty, and a postcondition ensuring that the return value is indeed the maximal element of the input array. Loops must be accompanied by loop invariants, which are used for a proof by induction on the number of loop iterations. For example, Listing <ref> lists a Dafny function for finding the maximal element of an array, with a docstring, precondition, postcondition, and a loop invariant. Dafny is able to automatically verify that this function satisfies its annotation.

[label=lst:maxarray, caption=Dafny function with consistent code, annotation, and docstring.]
// Find the maximal element in an integer array
method maxArray(a: array<int>) returns (m: int)
  requires a.Length >= 1
  ensures exists k :: 0 <= k < a.Length && m == a[k]
  ensures forall k :: 0 <= k < a.Length ==> m >= a[k]
{
  m := a[0];
  var i := 1;
  while (i < a.Length)
    invariant 0 <= i <= a.Length && (forall k :: 0 <= k < i ==> m >= a[k]) && (exists k :: 0 <= k < i && m == a[k])
  {
    m := if m > a[i] then m else a[i];
    i := i + 1;
  }
}

[label=lst:gendoc, caption=Example of generated docstring.]
// This method returns the maximum value, m, in the integer array a,
// ensuring that m is greater than or equal to all elements in a
// and that m is indeed an element of a

[label=lst:genanno, caption=Example of generated annotation.]
requires a.Length > 0
ensures forall k :: 0 <= k < a.Length ==> a[k] <= m
ensures exists k :: 0 <= k < a.Length && a[k] == m

[label=lst:gencode, caption=Example of generated code (loop invariant omitted).]
var i := 0;
m := a[0];
while i < a.Length {
  if (a[i] > m) {
    m := a[i];
  }
  i := i + 1;
}
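In our pipeline, this kind of check is scripted rather than run by hand. The sketch below shows one way to invoke the Dafny verifier on a candidate program from Python; it assumes a Dafny 4.x command-line installation (our evaluation uses version 4.0.0), and the exact flags and output handling are assumptions rather than part of the released artifact.

```python
# Sketch of invoking the Dafny verifier on a candidate program from Python.
# Assumes a Dafny 4.x CLI on the PATH ("dafny verify"); exact command syntax
# and result handling are assumptions, not part of the paper's artifact.
import subprocess
import tempfile
from pathlib import Path

def dafny_verifies(program_text: str, timeout_s: int = 60) -> bool:
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "candidate.dfy"
        path.write_text(program_text)
        try:
            result = subprocess.run(
                ["dafny", "verify", str(path)],
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0  # zero iff parsing, resolution, and verification succeed
```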
A special case is checking that the code conforms to the annotation, where we use formal verification based on deductive verification tools instead of a reconstruction test. For an input instance to pass the verification phase, it must pass all six tests. For the reconstruction itself, we use an LLM (our evaluation uses GPT-4), and for equivalence testing, we use LLMs to compare text, formal tools to compare annotations, and pointwise sampling to compare code. A running example is provided in Section <ref>. Listings <ref>, <ref>, and <ref> are examples of generated artifacts.Code-Annotation Consistency (1. Code → Annotation: Soundness) A deductive verification tool (our evaluation uses Dafny) checks that the code satisfies the annotation. This is a standard formal verification check (see Section <ref>), and it is sound in the sense that it will never pass if the code is inconsistent with the annotation. (2. Annotation → Code: Completeness) To prevent an annotation that is too trivial from being accepted, we test whether the annotation is strong enough by testing if it contains enough information to reconstruct functionally equivalent code.Given the annotation, we use an LLM to generate new code. Then, we check the equivalence between the generated and the original code. If the equivalence check passes, the annotation is considered complete.Annotation-Docstring Consistency (1. Annotation → Docstring) An LLM is asked to generate a new docstring from the annotation. Then, the new and the original docstrings are checked for semantic equivalence using an LLM. (2. Docstring → Annotation) An LLM is asked to generate a new annotation from the docstring. Then, the new and the original annotations are checked for logical equivalence using a formal tool.Code-Docstring Consistency (1. Docstring → Code) An LLM is asked to generate code from the docstring. Then, the new and the original code are checked for functional equivalence. (2. Code → Docstring) An LLM is asked to generate a new docstring from the code. Then, the new and the original docstrings are checked for semantic equivalence.We consider the methods used for equivalence checking as parameters to .We discuss some possibilities (including those used in our evaluation) below.Equivalence Checking for Code Standard equivalence checks for code include input-output comparisons, concolic testing (<cit.>), and even full formal equivalence checking (e.g. <cit.>). Our evaluation uses a set of input-output pairs that are included as part of the dataset. This test is, of course, imprecise , but our evaluation suggests that it suffices for the level of complexity in .For example, the generated code of Listing <ref> is equivalent to the original code in Listing <ref>, and indeed our equivalence check succeeds for this example. More advanced equivalence checking techniques might be required for more complex examples.Equivalence Check for Docstrings Checking equivalence between docstrings is challenging, as natural language is not mathematically precise. In our evaluation, we ask an LLM () to check whether two docstrings are semantically equivalent. For example, it accepts Listing <ref> as equivalent to the docstring in Listing <ref>. Other NLP-based semantic comparisons may also be worth exploring. Equivalence Check for Annotations To check the equivalence of two annotations, we write the equivalence of the two annotations as a formal lemma and ask a formal tool (in our evaluation, we again use Dafny) to prove the lemma. 
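Putting the six checks together, the verification phase can be organized as a single driver that accepts an instance only if every check passes. The sketch below is schematic: the `llm_*` reconstruction calls and the equivalence oracles are placeholders (reconstruction via an LLM such as GPT-4, code equivalence via the shared unit tests, docstring equivalence via an LLM judgment, annotation equivalence via a Dafny lemma), and only the control flow mirrors the description above.

```python
# Schematic driver for the six consistency checks; all must pass for acceptance.
# Every helper below is a placeholder wrapping either an LLM prompt or a call to
# a formal tool (see the dafny_verifies sketch earlier for one concrete piece).
def clover_verify(code, anno, doc, unit_tests):
    checks = [
        # 1. code -> annotation (soundness): Dafny verifies the annotated code
        lambda: dafny_verifies(assemble_program(code, anno)),
        # 2. annotation -> code (completeness)
        lambda: code_equivalent(llm_code_from_anno(anno), code, unit_tests),
        # 3. annotation -> docstring
        lambda: doc_equivalent(llm_doc_from_anno(anno), doc),
        # 4. docstring -> annotation
        lambda: anno_equivalent(llm_anno_from_doc(doc), anno),
        # 5. docstring -> code
        lambda: code_equivalent(llm_code_from_doc(doc), code, unit_tests),
        # 6. code -> docstring
        lambda: doc_equivalent(llm_doc_from_code(code), doc),
    ]
    return all(check() for check in checks)

# Placeholder components; each would wrap an LLM prompt or a formal tool call.
def assemble_program(code, anno): raise NotImplementedError
def dafny_verifies(program_text): raise NotImplementedError
def llm_code_from_anno(anno): raise NotImplementedError
def llm_doc_from_anno(anno): raise NotImplementedError
def llm_anno_from_doc(doc): raise NotImplementedError
def llm_code_from_doc(doc): raise NotImplementedError
def llm_doc_from_code(code): raise NotImplementedError
def code_equivalent(new, old, unit_tests): raise NotImplementedError
def doc_equivalent(new, old): raise NotImplementedError
def anno_equivalent(new, old): raise NotImplementedError
```

In this sketch, the `anno_equivalent` oracle corresponds to the Dafny-lemma-based equivalence check just described.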
This method is sound in the sense that it succeeds only if the two annotations are indeed equivalent.For example, we are able to automatically prove that the annotation in Listing <ref> is equivalent to the one in Listing <ref>. But it may fail on equivalent annotations due to limitations of the verification tool being used.The specific equivalence checking template we use is described in Section <ref> and is included as part of our dataset .Although there are many approximate approaches, the two parts that leverage formal tools, the soundness check for annotation and code, and the equivalence check for annotations, are exact. The equivalence check used for code is also strong, though not perfect. These checks strongly contribute to the lack of false positives (an incorrect example gets accepted) in our evaluation. An analytical model of reconstruction tests is provided in Section <ref>. § EVALUATIONWe have implemented a first prototype of our consistency check algorithm using  <cit.> as the LLM and using the programming language and verification tool <cit.>.We selected because it provides a full-featured and automatic deductive verification toolkit including support for a rich language of formal specifications and a backend compiler linking to a verifier.But can be instantiated using any language and tool supporting deductive program verification.Note that it is also crucial that the selected LLM has a good understanding of the programming language.In our case, we were pleasantly surprised to discover that understands programs well enough to perform the translations between code, docstrings, and annotations that relies on, despite the fact that is not a mainstream programming language. In our evaluation, we use version 4.0.0.50303 with Z3 version 4.8.12.The evaluation also uses a concrete set of examples which we describe next. §.§ Dataset:There have been several popular datasets for code generation in different domains <cit.>, but none contain annotations or use the language.Furthermore, we wanted to carefully curate the programs used to test our first prototype. For these reasons, we introduce a new hand-crafted dataset we call . We expect to add and improve it over time, but at the time of writing, it is based on 60 small hand-written example programs as might be found in standard CS textbooks. [Since we wanted to concentrate on the most basic scenario initially, our initial dataset only features examples containing exactly one method and no helper functions or methods.] For each program, there are four variants: a “ground-truth” variant whose code, annotation, and docstring are correct and consistent (verified by hand); and 3 incorrect variants. Associated with each example, there is also one set of unit tests and one code template for annotation equivalence checking. We discuss possible data contamination issues in Appendix <ref>.Unit Tests Each set of unit tests contains five individual tests designed for each example. We use these tests as a rough check for whether a piece of generated code is equivalent to the original code.If the generated code passes all five tests, then the code is considered to be equivalent. Annotation Equivalence Checking Template Each template can be used to formally verify the consistency between two annotations with . 
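Concretely, such a template can be instantiated mechanically. The sketch below shows one way to emit a Dafny lemma asserting that two preconditions imply each other, using the maxArray annotations from the listings above as the example; the template actually shipped with the dataset may be structured differently.

```python
# Sketch of instantiating an annotation-equivalence template: emit a Dafny lemma
# whose postcondition states that the two preconditions are logically equivalent.
# The dataset's actual template may differ; this is purely illustrative.
PRE_EQUIV_TEMPLATE = """
lemma {name}_pre_equiv(a: array<int>)
  ensures ({pre_a}) <==> ({pre_b})
{{
}}
"""

def pre_equivalence_lemma(name, pre_a, pre_b):
    return PRE_EQUIV_TEMPLATE.format(name=name, pre_a=pre_a, pre_b=pre_b)

# For maxArray this instantiates to
#   ensures (a.Length >= 1) <==> (a.Length > 0)
# which Dafny discharges automatically with an empty proof body.
lemma_text = pre_equivalence_lemma("maxArray", "a.Length >= 1", "a.Length > 0")
```

An analogous lemma is generated for the postconditions, with the method's return values added as parameters so the equivalence is asserted for all possible results.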
For two annotationsa and b to be equivalent, the preconditions and postconditions of a and b must be verified to be equivalent separately.Details and an example are shown in Appendix <ref>.§.§ Generation PhaseBecause relies on the generation phase being able to produce code with annotations and docstrings, our first experiment partially explores the feasibility of this assumption. In particular, we test 's ability to generate annotations and code using different configurations. Figure <ref> shows the results when is asked to generate the code for each of the 60 examples in under various conditions. We manually checked the generated code for functional correctness. The first bar (“one try”) shows the result when asking to produce the code, given the annotation, in a single try.The next bar allows to try three times, each time providing the output of the compiler and verifier as feedback.The next is similar, but uses the output of only the compiler.In the last bar, we allow three tries, with feedback from the compiler and verifier, and we also provide the docstring. We see that, at its best, can correctly provide the code for 53 out of 60 examples, and it does best when it gets the most feedback from .Figure <ref> shows results from asking to generate annotations when provided just the code (here, annotations include pre- and post-conditions and loop invariants).In one try, succeeds on 28 of 60 programs.Given 3 tries and maximal feedback (See Appendix <ref> for an example of using feedback) from , this improves to 41 out of 60. Though not perfect, out of the box, can produce correct annotations for the majority of programs in our simple set of benchmarks.This suggests that using LLMs for specification generation is feasible, and we expect that further efforts in this direction (including fine-tuning models for the task) will likely lead to even stronger capabilities. §.§ Verification Phase: Results on Ground-Truth ExamplesOur main experiment evaluates the capabilities of the consistency checking algorithm.[During consistency checking, we consider everything that appears in the body of a function, including assertions and invariants, to be part of the code, and consider the annotation to consist only of pre- and post-conditions.] For each example in , we run all 6 checks described in Section <ref>, using 3 tries with feedback from 's compiler for each where applicable. We also evaluate the effect of multiple independent runs, meaning that we repeat each of the 6 checks k times.If any one of the k attempts succeeds, then the check is considered to have passed. The end-to-end results are summarized in Table <ref>. When k=1, we see that our implementation accepts 45 of 60 correct (“ground truth”) examples and rejects all incorrect examples. When k=10, accepts 52 of 60 correct examples and rejects all incorrect ones. Details on each of the 6 checks for the ground truth examples are shown in Table <ref>. All acceptance rates are above 80%.Failures are mostly due to incorrect or imprecise reconstruction.Generation of invalid syntax is the most common reason for failure.More details can be found in Appendix <ref>. We expect that using better LLMs (either better general-purpose LLMs or LLMs fine-tuned for program verification or a specific language or both) will improve the acceptance rate. 
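To make the retry mechanism concrete, the following Python sketch shows one way the three-try, feedback-driven generation used in these experiments could be orchestrated. It is an illustrative approximation rather than Clover's actual implementation: the helper query_llm is a hypothetical wrapper around a chat model such as GPT-4, and the exact Dafny command-line interface may differ between versions.

import subprocess
import tempfile

def run_dafny(program_text, verify=True):
    # Write the candidate Dafny program to a temporary file and invoke the
    # Dafny CLI; returns (success, tool_output). Assumes a `dafny` binary is
    # on PATH (the flags shown follow the Dafny 4 verb-style CLI).
    with tempfile.NamedTemporaryFile("w", suffix=".dfy", delete=False) as f:
        f.write(program_text)
        path = f.name
    cmd = ["dafny", "verify" if verify else "build", path]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def generate_with_feedback(query_llm, prompt, max_tries=3, verify=True):
    # Ask the LLM for a completion; on failure, append the tool's error
    # messages to the conversation and retry, up to max_tries attempts.
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_tries):
        candidate = query_llm(messages)
        ok, log = run_dafny(candidate, verify=verify)
        if ok:
            return candidate
        messages.append({"role": "assistant", "content": candidate})
        messages.append({"role": "user", "content":
                         "The Dafny tool reported errors:\n" + log +
                         "\nPlease fix the program and output it in full."})
    return None  # all attempts failed

Depending on the check, the feedback can come from the verifier or from the compiler only, which corresponds to the verify flag above.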
For the complete experimental results, see Tables <ref> and <ref> in the Appendix.Since our ground-truth examples are hand-written and hand-checked for correctness, it is not surprising that all pass the verifier (i.e., all annotations are sound). Annotation completeness requires successful synthesis from annotation to code, and here, we get an 88% acceptance rate when k=1, which goes up to 95% with k=10. The main reason for failure is incorrect generation of syntax by . In doc2anno generation, we generate annotations from docstrings. The main failure comes again from generating incorrect syntax. anno2doc and code2doc have almost perfect acceptance rates. On the one hand, this is because is very good at synthesizing natural language. On the other hand, our docstring equivalence checker is not very strong and skews towards acceptance. As long as they do not directly contradict each other, information omissions or additions in docstrings frequently go unnoticed by . Improving this equivalence checker is one important direction for future work. doc2code generation shares the same issues as anno-complete and doc2anno: failure because of invalid syntax generation.It also improves significantly (93% vs 82%) using k=10 instead of k=1. §.§ Verification Phase: Results on Incorrect ExamplesAs mentioned, for each program in our dataset, we created 3 incorrect versions.Here we describe them in more detail. Table <ref> lists the three categories of incorrect programs. Category C1 contains programs in which the annotation is incorrect, and the other two are the same as the ground-truth. Category C2 contains programs in which the docstring is incorrect, and the other two are the same as the ground-truth. Continuing this pattern, Category C3 should contain programs in which only the code is incorrect. But we modify this slightly: to ensure these examples are not trivially rejected by the Dafny soundness check, we also mutate the annotation to match the incorrect code.Thus, category C3 consists of examples for which the docstring is correct, whereas the annotation and code are consistent but incorrect.Each category has sub-categories, and we construct incorrect examples in such a way as to have the same number in each sub-category. Table <ref> shows the different sub-categories of C1.For each kind of mutation to the preconditions and postconditions, we note whether the result of Dafny verification can be determined.The two most interesting cases are when verification is uncertain.We draw our mutated examples from these two sub-categories. For C2, the problem can either be that the docstring is too strong (contains more information than necessary), the docstring is wrong (contains information conflicting with code or annotation), or the docstring too weak (omits some information contained in the code and annotation). For C3,the problem can either be code-annotation intention error or the code is wrong and the annotation is too weak to detect it.Table <ref> shows the results of the 6 checks for each category.We observe that doc2anno has the highest rejection rate. This is because the annotation equivalence check uses formal checks with which guarantee that only logical equivalent annotations are accepted. Overall, there are no false positives (no incorrect example passes all 6 checks), as summarized in Table <ref>. For complete results, see Tables <ref> and <ref> for C1, Tables <ref> and <ref> for C2, and Tables <ref> and <ref> for C3. 
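Before walking through the running example, it may help to see how the acceptance criterion and the k independent repetitions described above fit together. The Python sketch below is a schematic rendering, with the individual check functions left as placeholders for the procedures of the previous section; it is not Clover's actual code.

def passes_k_times(check, example, k):
    # A single check is considered passed if any of k independent attempts succeeds.
    return any(check(example) for _ in range(k))

def equivalent_by_unit_tests(run_code, original, generated, tests):
    # Approximate functional equivalence for code: the generated program must
    # agree with the original on every unit test (five per example in our dataset).
    # `run_code(program, test)` is a hypothetical execution harness.
    return all(run_code(generated, t) == run_code(original, t) for t in tests)

def clover_accepts(example, checks, k=1):
    # Accept a (code, annotation, docstring) triple only if all six
    # consistency checks pass.
    return all(passes_k_times(check, example, k) for check in checks.values())

# `checks` maps the names used in the tables -- "anno-sound", "anno-complete",
# "doc2anno", "anno2doc", "doc2code", "code2doc" -- to functions implementing
# the corresponding reconstruction or verification test.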
§.§ Consistency Checking Example We describe each experimental step and use(Listing <ref>) as a running example. [style=dafnystyle,basicstyle=,label=lst:annoinput,caption=Annotation Input] method foo(a: array<int>) returns (m: int)requires a.Length >= 1ensures (forall k :: 0<=k<a.Length ==> m>=a[k])(exists k :: 0<=k<a.Lengthm==a[k])//TOFILL0.48 [style=dafnystyle,basicstyle=,label=lst:codeinput,caption=Code Input] method foo(a: array<int>) returns (m: int) //TOFILL m := a[0];var i := 1;while (i < a.Length)invariant 0<=i<=a.Length (forall k :: 0<=k<i ==> m>=a[k]) (exists k :: 0<=k<im==a[k]) m := if m>a[i] thenm else a[i]; i := i + 1;0.48 [style=dafnystyle,basicstyle=,label=lst:gen_code,caption=Generated code] method foo(a: array<int>) returns (m: int)var i := 0; m := a[0]; while i<a.Lengthif(a[i] > m)m := a[i];i := i+1;[style=dafnystyle,basicstyle=,label=lst:docinput,caption=Docstring Input] // specification: Returns the maximum value m present in the array a. method maxArray(a: array<int>) returns (m: int) //TOFILL anno-sound Annotation soundness is checked directly with the verifier.anno-completeFor the annotation to be complete with respect to the code, we must be able to reconstruct the code from the annotation alone.Therefore, we ask to generate code from the masked function signature and the annotation (Listing <ref>). We run and provide feedback from the compiler up to three times to help fix its code generation.For our example, generates the correct code on the first try, shown in Listing <ref>. Then we check if the generated code is equivalent to the original ground-truth code with unit tests as described in Section <ref>.doc2anno We try to reconstruct an equivalent ground-truth annotation from the docstring alone. First, we call with the docstring and the function signature (Listing <ref>) asking for the annotation. To eliminate simple syntax errors, we try to compile the generated annotation with an empty code body and use compiler-generated error messages as feedback (up to 3 times). Our ablation studies in Section <ref> suggest the feedback mechanism is quite important.For our example, generates a correct annotation on the first try, shown in Listing <ref>. [style=dafnystyle,basicstyle=,label=lst:gen_anno,caption=Generated annotation] requires a.Length > 0; ensures forall k :: 0 <= k < a.Length ==> a[k] <= m; ensures exists k :: 0 <= k < a.Lengtha[k] == m;anno2docFirst, we try to create an equivalent docstring from the annotation by asking the same question three times in one session. We consider two docstrings to be equivalent if they contain the same information about the functional behavior of the program, ignoring implementation details that do not affect functionality. To check this, we again use (see GPT-4 System Prompt in <ref>). Note that the two calls to are independent to ensure that the second call contains no memory of the first call. That is, the answer to the question of whether the original and the generated docstrings are semantically equivalent is unaffected (other than by bias inherent in the model) by the first call to generate an equivalent docstring from the original. For our example, generates a correct docstring on the first try (see Listing <ref>). [style=dafnystyle,basicstyle=,label=lst:gen_doc,caption=Generated docstring] “`This method returns the maximum value, m, in the integer array a, ensuring that m is greater than or equal to all elements in a and that m is indeed an element of a.”'code2doc The process is almost identical to . 
The only difference is that, to ensure the code provides all the information needed for the docstring generation, we put the preconditions in the code in the form of explicit statements. doc2code The process leverages one of the most common use cases of GPT-4: generating code from a natural language description. The concrete steps are similar to those described above. The only difference is that instead of using verifier-generated error messages, we use compiler-generated error messages, since we want to ensure that the code generation relies only on the docstring.

§ AN ANALYTICAL MODEL FOR RECONSTRUCTION TESTS

As described above, all but one of the six Clover consistency checks rely on reconstructing one of the components (see Figure <ref>). These reconstructions rely on assumptions about the LLM used for reconstruction that have, until now, been implicit. In this section, we make these assumptions explicit and provide a theoretical model and analysis for them. For the purpose of the analysis, we focus on a single directed edge from domain A to domain B (e.g., code to docstring).

Assume each domain is equipped with a semantic equivalence relation, denoted by ≡. Each domain can therefore be partitioned into equivalence classes. For X∈{A,B}, we use e(X) to denote the set of equivalence classes of X, and for x ∈ X we use [x] to denote the equivalence class x belongs to. For docstrings, the equivalence relation represents semantic equivalence as understood by a human expert; for code, the equivalence relation is functional equivalence; and for annotations, it is logical equivalence. We further assume a ground-truth consistency relation between A and B, denoted by G ⊆ A × B. The ground-truth consistency represents the consistency we assume to exist between docstrings, annotations, and code, as described in Section <ref>. We assume the consistency relation satisfies the following properties that link it to the equivalence relation: for any x, x' ∈ A and y, y' ∈ B, (x ≡ x' ∧ y ≡ y') → ((x, y) ∈ G ↔ (x', y') ∈ G) and ((x, y) ∈ G ∧ (x', y') ∈ G) → (x ≡ x' ↔ y ≡ y'). That is, consistency is preserved when substituting equivalent objects, and any object may be consistent with at most one equivalence class from the other domain.

We now formally define and analyze the single-edge consistency test, which aims to be an approximate test for G. For the analysis, we assume a probability distribution 𝒟 on A × B. The test relies on a transfer model, and the analysis assumes it is transfer-rational, as defined below. Given domains A and B, a transfer model for A and B is a function M: A × B → ℝ such that for each x ∈ A, M(x, ·) is a probability distribution over B. Here M(x, y) denotes the probability of transferring x ∈ A to y ∈ B. Let M be a transfer model for A and B. We say M is transfer-rational if for each x ∈ A there is a unique [y] ∈ e(B) that maximizes ∑_y'∈[y] M(x, y'). In this case, we define the transfer function of M as f^M: A → e(B), f^M = λ x. argmax_[y]∈ e(B) ∑_y'∈[y] M(x, y'). Intuitively, the transfer model is meant to approximate a mapping based on the ground-truth consistency G. In the context of Clover, the domains are among docstring, annotation, and code, and the transfer model is given by an LLM (GPT-4 in our evaluation). For example, when A is docstrings and B is annotations, the distribution M(x, ·) represents the output distribution of the LLM on an input docstring x with a suitable prompt for generating an annotation corresponding to the docstring x. In our evaluation, we use 3 tries with feedback to run the reconstruction test.
In this case, the transfer model is given by this combined use of GPT-4 and Dafny.

We now fix a transfer-rational model M and define the single-edge consistency check. For input x ∈ A, y ∈ B, the single-edge consistency check (for the edge from A to B) is a procedure that draws y' from the distribution M(x,·), and then accepts if y' ≡ y and otherwise rejects. Note that the check relies on being able to check equivalence in domain B.[We assume a perfect equivalence check to keep the analysis simple and illustrative. In practice, the equivalence tests do incur some imprecision. But accounting for this imprecision with a probabilistic model is cumbersome, because the distribution of instances on which the equivalence check is performed depends on both the input distribution and the transfer model.]

We now analyze the probability that the single-edge consistency check is correct. Our analysis relies on two assumptions: one relating the transfer model M with the ground-truth consistency G, and another ensuring that M's distributions are concentrated.

[Consistency Alignment] Let c_1 be the probability that y ∈ f^M(x) when x, y are sampled from A × B according to 𝒟 conditioned on (x, y) ∈ G. Similarly, let c_0 be the probability that y ∈ f^M(x) when x, y are sampled from A × B according to 𝒟 conditioned on (x, y) ∉ G. We assume that c_1 is close to 1, and c_0 is close to 0.

[Concentration] Consider x, y sampled from A × B according to 𝒟 conditioned on (x, y) ∈ G and y ∈ f^M(x). We assume that for some significant 0 < l ≤ 1 (e.g., 30%), the following holds with probability ≥ p_c (p_c close to 1): ∑_y'∈ f^M(x) M(x, y') ≥ l. Similarly, consider x, y sampled from A × B according to 𝒟 conditioned on (x, y) ∉ G and y ∉ f^M(x). We assume that for some negligible u, the following holds with probability ≥ p_c: max_[y_1]∈ e(B), [y_1]≠ f^M(x) ∑_y_2∈[y_1] M(x, y_2) ≤ u. Intuitively, the concentration assumption means that with high probability (≥ p_c), sampling from M is the same as applying f^M, and specifically that the second most likely equivalence class is much less likely than the maximal one (i.e., the one given by f^M).

Under Assumptions <ref> and <ref>, consider (x, y) sampled from A × B according to 𝒟 conditioned on (x, y) ∈ G; the single-edge consistency check will accept (x, y) with probability A ≥ l · p_c · c_1. Similarly, consider (x, y) sampled from A × B according to 𝒟 conditioned on (x, y) ∉ G; the single-edge consistency check will accept with probability R ≤ u · p_c · (1 - c_0) + (1 - p_c)(1 - c_0) + c_0. The proof of Theorem <ref> is in Appendix <ref>.

Theorem <ref> ensures that under our assumptions, the probability of accepting a consistent input is significant, and the probability of accepting an inconsistent input is negligible. We can increase the gap by repeating the reconstruction test several times and accepting if any of the repetitions accepts. As discussed in Section <ref>, our evaluation shows the results for both 1 and 10 reconstruction attempts.

From single-edge to full consistency checking. Our analysis focused on a single, directed reconstruction edge from Figure <ref>, while full consistency checking uses five reconstruction edges and a single verification edge, and accepts only if all six checks accept. We do not attempt to theoretically analyze the full check, because we do not assume the edges to be independent (so multiplying acceptance probabilities is not necessarily meaningful). In our experiments, we empirically measure the acceptance rate of each edge, and also observe that the edges are not independent (see Section <ref>).
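To get a feel for the bounds in the theorem above, the short computation below plugs in illustrative parameter values (chosen by us for illustration, not measured from the experiments) and shows how repeating a reconstruction test k times changes the two acceptance probabilities.

def single_edge_bounds(l, p_c, c_1, c_0, u):
    # Lower bound A for accepting a consistent pair and upper bound R for
    # accepting an inconsistent pair, exactly as stated in the theorem.
    A = l * p_c * c_1
    R = u * p_c * (1 - c_0) + (1 - p_c) * (1 - c_0) + c_0
    return A, R

def amplified(p, k):
    # Probability that at least one of k independent attempts accepts.
    return 1 - (1 - p) ** k

# Illustrative values only: l = 0.3, p_c = 0.95, c_1 = 0.9, c_0 = 0.01, u = 0.01.
A, R = single_edge_bounds(l=0.3, p_c=0.95, c_1=0.9, c_0=0.01, u=0.01)
print(A, R)                                # A >= 0.2565, R <= 0.0689 (approx.)
print(amplified(A, 10), amplified(R, 10))  # roughly 0.95 and 0.51

With these made-up numbers, a single attempt accepts consistent inputs with probability at least about 0.26 and inconsistent ones with probability at most about 0.07; with k = 10 attempts the absolute gap between the two widens, which is the effect exploited in our evaluation, although the bound for inconsistent inputs also grows and, in practice, it is the formal equivalence checks that keep the false-positive rate at zero.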
In real experiments, the combined use of and may not satisfy our assumptions because of the tools' limitations (may time out or return unknown, and may make mistakes or hallucinate). Especially the u in Assumption <ref> could be non-negligible. However, the end-to-end evaluation shows that the six checks together do give promising true positive and false positive rates. The analytical model can be treated as one guide to understanding what properties of the reconstruction model are helpful for ensuring accurate reconstruction results.§.§ Explaining the EvaluationHere, we empirically estimate the values of A and R from Theorem <ref> based on our experiments. That is, we estimate the acceptance rate for correct and incorrect inputs for each directed edge. Each cell inTable <ref> represents the percentage of reconstructed components that successfully pass the equivalence check in the four categories: ground-truth , C1, C2, and C3 (Table <ref>). [Note that our incorrect examples are constructed with the aim of making them hard to reject, i.e., by considering only the cases that can pass Dafny verification. The measured values for R are thus likely to be higher than the value for a more natural distribution.]As mentioned above, in the first column, the discrepancy between the measured acceptance rate and the ideal perfect acceptance rate comes partly from reconstruction failures and partly from equivalence checker failures. For example, the doc2anno acceptance rate is 0.85, not 1. Apart from the failure to generate the correct annotation, there are also cases where the generated annotation is correct but unable to be verified by our annotation equivalence checking template in Section <ref> (See Appendix <ref> for an example).Overall, the measured aggregated acceptance rate for the first column is 0.75.This is higher than would be expected if each check were independent (the product of the entire column is 0.59).This is because, in practice, they are not independent: easier examples that pass the tests on one edge tend to also pass the tests on other edges. In C1, anno-sound and doc2anno both have zero acceptance rates for inconsistent edges, and the overall acceptance rate is also zero. In C2 and C3, doc2anno has a zero acceptance rate, and the overall acceptance is also zero.Note that the anno2doc acceptance rate is high for C1, C2, and C3.This is because our current docstring equivalence checker is good at detecting contradictory information but not the addition or omission of information due to a slightly strengthened or weakened annotation.§ RELATED WORK While we are the first to use to generate annotations for programs, much previous work provides partial solutions to the generation phase of . Besides the notable <cit.> for code generation using LLMs, <cit.> is a survey on program synthesis before the era of LLMs. Other work using neural approaches for program synthesis include <cit.>. To scale up the generation, researchers have tried to decompose the whole task into smaller steps<cit.> and to use execution traces <cit.>. While the aforementioned work synthesizes code from natural language, another common theme is to synthesize programs from specifications <cit.>. Translation between natural and formal language has also been studied <cit.>, and <cit.> studies how to use LLMs for predicting program invariants.For the verification phase, previous work acknowledges that verifying whether a generated program is correct is challenging. 
In <cit.>, a test-case-based approach to checking code correctness is demonstrated to be insufficient. Other previous attempts include <cit.>, which asks the model to generate assertions along with the code, and <cit.> <cit.> which studies generation of unit tests. There is also a line of work <cit.> on the learning-based approach for verifying correctness. <cit.> have studied various approaches for reranking the model's output, and <cit.> proposed a self-repair method combining LLMs and formal verification strategies.§ CONCLUSIONWe have introduced , a framework for closed-loop verifiable code generation. We reduce the problem of checking correctness to the more accessible problem of checking consistency. Initial experiments using , , and a set of simple textbook examples are promising. We show an 87% acceptance rate for ground-truth examples and a 100% rejection rate for incorrect examples. There are many avenues for future work, including: better verification tools, improving LLM capabilities for generating code, annotations, and docstrings, improvingLLM capabilities for understanding syntax, and scaling up to more challenging examples.iclr_template/iclr2024_conference§ APPENDIX §.§ Pseudocode for Consistency Checking L0.530.5 Algorithm <ref> shows pseudocode for our implementation of consistency checking. §.§ Discussion §.§.§ Limitations There are many limitations in the proposed paradigm.For one, the capabilities of LLMs (in particular) are limited. The generation of docstrings, annotations, and code also has inherent limitations. For example, our use of annotations is only for specifying functionality, not implementation details, e.g., an annotation can force an array to be sorted but cannot easily restrict the algorithm used for sorting. In this paper, as a first step, we only aim to check functional consistency (correctness), not the performance of the implementation. As mentioned in Section <ref>, if the oracle used for consistency checking is misaligned with human understanding (ground-truth), e.g., if it interprets a sorting algorithm as getting the maximum value, there is no way to correct it without human intervention. But in practice, we think this will rarely happen (see Assumption <ref>). As another example, if the docstring, annotation, and code all miss the same edge case, the error cannot be detected. While such an example is internally consistent, it may not be consistent with human understanding of good coding practice. So far, we haven't detected this in our experiments, but these and other problems may appear in more complicated examples. To acheive our eventual vision for ,we expect that additional breakthroughs, or additional human-in-the-loop steps, or both, may be needed.§.§.§ Variants checks three components for consistency.However, other variants are possible. Currently, most attempts at code generation produce only the code and docstring. We expect that a -like approach with only code and docstring would help detect some inconsistencies, but would not ensure implementation correctness, as docstrings are not sufficiently precise. Incorporating unit tests into is a potential improvement we've earmarked for future endeavors. We recognize the potential advantages of unit tests; however, they come with their own set of limitations. Admittedly, in certain scenarios, unit tests can provide a quick and effective sanity check on system functionalities. However, generating unit tests can sometimes prove more complex than creating annotations. 
Unit tests, if not transparent, can be difficult or even impossible to explain, eroding user confidence due to their opacity. If an LLM is adept at producing effective unit tests, it suggests an ability to anticipate execution outcomes to a certain degree. However, full-fledged execution with numerous computational steps remains an unsolved challenge for LLMs. Additionally, compared to annotations, unit tests offer a less robust assurance of system correctness. §.§.§ Future ResearchA successful paradigm relies on many components. To maximize the capabilities of , there are several foundational topics that should be explored.Each of these areas can be advanced individually, and notably, they possess wider applicability beyond just the scope of .One foundational element is the ability to generate high-quality code, annotations, and docstrings. Clearly, the verification phase cannot compensate for poor generation, it can only detect and flag such examples.Better equivalence checking would also improve 's abilities.Currently, it is most challenging to perform equivalence checks on docstrings.Equivalence of annotations relies on the logical power of solvers in the back end of Dafny, whose performance and capabilities can be improved. Equivalence checking for code is also challenging; techniques like fuzzing and concolic testing (and even full formal equivalence checking) could be leveraged to improve this step.§.§.§ Data Contamination We want to point out that the current version of has some limitations. We hand-crafted it starting with simple textbook-level examples so as to have a baseline for more advanced work. But we must acknowledge the possibility of indirect data contamination. While we expect most of our examples are not explicitly present in the training data (Dafny is not a widely-used language, and we wrote the examples ourselves), there's a considerable chance that has encountered analogous data in the past. Even if only code with a similar functionality in another language has been seen in the training data, our hand-crafted examples can be affected. Some soft evidence for this is the observation that can sometimes generate the correct code even with an incomplete docstring or annotation. We noticed that often, a descriptive function signature alone can be quite revealing. To mitigate this potential bias in our experiments, we opted to replace the function names with generic, non-descriptive identifiers. In future work, we plan to update with more sophisticated examples, which we hope will help mitigate the risk of inaccurate conclusions due to data contamination in future experiments. §.§.§ Reasons for Reconstruction Failure using GPT-4 We have observed that is not very capable at producing correct syntax for the latest version (4.0.0) of , likely due to limited training data. One can imagine that an LLM trained or fine-tuned on 4.0.0 would easily acheive a higher acceptance rate. Some evidence that is not familiar with the current syntax is as follows. Annotations must include aorfor methods that access memory. In particular,is required when the method reads from , and misses it almost 100% of the time on its first try at generation. Luckily, with 's compiler-generated error messages, is often able to add the neededor . Another example is that used to require annotations to be separated by semicolons, or assert explicitly that an array is not nullin the pre-conditions.These are not required any more, but still largely adheres to those deprecated rules. 
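As mentioned in the data-contamination discussion above, method names in our examples were replaced with generic, non-descriptive identifiers. A renaming pass of that kind could be scripted roughly as follows; this is a sketch written for illustration (it assumes methods are declared with the keyword method, as in the listings shown below) and is not the exact tool we used.

import re

def anonymize_method_names(dafny_source):
    # Collect every declared method name and replace it -- at the declaration
    # and at call sites -- with a generic identifier such as foo0, foo1, ...
    decl = re.compile(r"\bmethod\s+([A-Za-z_][A-Za-z0-9_]*)")
    names = list(dict.fromkeys(decl.findall(dafny_source)))  # dedupe, keep order
    mapping = {name: f"foo{i}" for i, name in enumerate(names)}
    out = dafny_source
    for old, new in mapping.items():
        out = re.sub(rf"\b{re.escape(old)}\b", new, out)
    return out, mapping

For instance, a declaration such as method maxArray(a: array<int>) returns (m: int) would become method foo0(a: array<int>) returns (m: int), matching the anonymized signatures used in the listings.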
§.§ Proof of Theorem <ref> 1. Let (x, y) be sampled from 𝒟 with the condition (x, y)∈ G. From Assumption <ref>, with probability ≥ c_1, we have y∈ f^M(x). Then, according to Assumption <ref> and the perfect equivalence oracle, with probability p_c, the reconstruction from x to y will succeed with probability ≥ l. Therefore, the accept probability is ≥ l · p_c· c_1, denoted as A.2. Let (x, y) be sampled from 𝒟 with the condition (x, y)∉ G. There are 3 cases: * From Assumption <ref>, with probability c_0, there is y∈ f^M(x), and it is trivial that the accept probability ≤ 1.* With probability 1 - c_0, there is y∉ f^M(x). Then from Assumption <ref>, with probability p_c, the reconstruction from x to y will succeed with probability ≤ u.* Finally, the last case is that the bounds in Assumption <ref> do not hold, which will happen with probability (1-c_0)(1 - p_c).Clearly, in this case, the accept probability ≤ 1.By aggregating all the cases, the accept probability is ≤ c_0 + (1-c_0)· p_c· u + (1-c_0)(1-p_c). §.§ Dafny Feedback Example In this example, the first try at generating annotations (Listing <ref>) fails given the input in Listing <ref>. But after receiving the error message in Listing <ref>, is able to add the necessary annotation .0.48 [style=dafnystyle,basicstyle=,label=lst:UpdateElements,caption=UpdateElements Example Input] method foo(a: array<int>) //TOFILLa[4], a[8] := a[4] + 3, a[8] + 1; a[7], a[8] := 516, a[8] - 1;0.48 [style=dafnystyle,basicstyle=,label=lst:updategen,caption=Generated Annotations for UpdateElements] requires a != nulla.Length > 8 ensures a[4] == old(a[4]) + 3 ensures a[7] == 516 ensures a[8] == old(a[8]) [style=terminal,label=lst:dafnyfeedback,caption=Dafny Feedback] Error: assignment might update an array element not in the enclosing context's modifies clause§.§ Supplimentary Template and Examples In Listing <ref>, we give a template for verifying annotation equivalence for the ground-truth example(Listing <ref>). Annotation equivalence checking is done by verifying the template with 's verifier. If the lemmasandare both verified, then it means that has successfully verified the equivalence of pre- and postconditions respectively.states the full preconditions of the ground-truth example andstates the full postconditions. 's body will be replaced by the generated preconditions and 's body will be replaced by the generated postconditions. The lemmastates that the generated preconditions are true if and only if the original preconditions are true. The lemmastates that the generated postconditions are true if and only if the original postconditions are true. 
The above example is simple enough to be proven by Dafny's verifier.Note that the template is sound but not complete, that is, there could be cases when two predicates are indeed equivalent but cannot prove it.An example is shown in Listing <ref>.[style=dafnystyle, caption=maxArray,label=maxArray] method maxArray(a: array<int>) returns (m: int) requires a.Length >= 1 ensures forall k :: 0 <= k < a.Length ==> m >= a[k] ensures exists k :: 0 <= k < a.Lengthm == a[k]m := a[0]; var index := 1; while (index < a.Length) invariant 0 <= index <= a.Length invariant forall k :: 0 <= k < index ==> m >= a[k]; invariant exists k :: 0 <= k < indexm == a[k]; decreases a.Length - indexm := if m>a[index] thenm else a[index]; index := index + 1; [style=dafnystyle,caption=Annotation Equivalence Checking Template for maxArray,label=anno_template] predicate pre_original(a: array<int>,m: int) reads a( a.Length >= 1) predicate pre_gen(a: array<int>,m: int) reads atrue // (#PRE)... (#PRE) lemma pre_eq(a: array<int>,m: int) ensures pre_original(a,m ) <==> pre_gen(a,m )predicate post_original(a: array<int>,m: int) requires pre_original(a,m) reads a( forall k :: 0 <= k < a.Length ==> m >= a[k])( exists k :: 0 <= k < a.Lengthm == a[k]) predicate post_gen(a: array<int>,m: int) requires pre_original(a,m) reads atrue // (#POST)... (#POST) lemma post_eq(a: array<int>,m: int) requires pre_original(a,m ) requires pre_gen(a,m ) ensures post_original(a,m ) <==> post_gen(a,m ) [style=dafnystyle,caption=Instantiated Annotation Equivalence Checking Template for only_once. The original and generated postconditions describe the same property: element key only appears once in the array a. But they cannot be verified as equivalent by the annotation template. Lemma post_eq will fail with an empty body.,label=only_once_filled_template] predicate pre_original<T(==)>(a: array<T>,key: T,b:bool) reads atrue predicate pre_gen<T(==)>(a: array<T>,key: T,b:bool) reads atrue lemma pre_eq<T(==)>(a: array<T>,key: T,b:bool) ensures pre_original(a,key,b ) <==> pre_gen(a,key,b )predicate post_original<T(==)>(a: array<T>,key: T,b:bool) requires pre_original(a,key,b) reads a( (multiset(a[..])[key] ==1 ) <==> b) predicate post_gen<T(==)>(a: array<T>,key: T,b:bool) requires pre_original(a,key,b) reads a(b <==> ((exists i :: 0 <= i < a.Lengtha[i] == key)(forall i, j :: 0 <= i < j < a.Lengtha[i] == key ==> a[j] != key))) lemma post_eq<T(==)>(a: array<T>,key: T,b:bool) requires pre_original(a,key,b ) requires pre_gen(a,key,b ) ensures post_original(a,key,b ) <==> post_gen(a,key,b ) [title=System Prompt,label=systemprompt] code2anno:anno2code:doc2anno:anno2doc:code2doc:doc2code:docstring equivalence checker:§.§ More Detailed Experiment Resultsrowcount
http://arxiv.org/abs/2310.17807v1
{ "authors": [ "Chuyue Sun", "Ying Sheng", "Oded Padon", "Clark Barrett" ], "categories": [ "cs.AI", "cs.LG", "cs.SE" ], "primary_category": "cs.AI", "published": "20231026225819", "title": "Clover: Closed-Loop Verifiable Code Generation" }
Giovanni Mongardi, Alma Mater Studiorum, Università di Bologna, P.zza di Porta San Donato 5, 40126 Bologna, Italia, [email protected] Gianluca Pacienza, Université de Lorraine, CNRS, IECL, F-54000 Nancy – France, [email protected]

Chen, Gounelas and Liedtke recently introduced a powerful regeneration technique, a process opposite to specialization, to prove existence results for rational curves on projective K3 surfaces. We show that, for projective irreducible holomorphic symplectic manifolds, an analogous regeneration principle holds and provides a very flexible tool to prove existence of uniruled divisors, significantly improving known results. [2020] 14H45, 14J42 (primary). Regenerations and applications Giovanni Mongardi, Gianluca Pacienza January 14, 2024

Rational curves on K3 surfaces have now been studied for decades, with motivations also coming from arithmetic geometry, (non-)hyperbolicity questions, and general conjectures on 0-cycles. A natural generalization of K3 surfaces is given by irreducible holomorphic symplectic (IHS) manifolds, which are compact, simply connected Kähler manifolds with H^2,0 generated by a symplectic form. In any even dimension 2n, n ≥ 2, there are two known deformation classes (cf. <cit.>): one is given by Hilbert schemes of points on K3 surfaces and their deformations (called varieties of K3^[n] type), and the other is given by deformations of an analogous construction using abelian surfaces (called varieties of generalized Kummer type). Two more deformation classes discovered by O'Grady exist in dimension 6 and 10 (cf. <cit.>). For the basic theory of IHS manifolds we refer the reader to e.g. <cit.>.

In recent years rational curves on projective IHS manifolds have been actively investigated with different objectives and techniques, cf. e.g. <cit.> and the references therein. Rational curves covering a divisor on an IHS manifold behave very well with respect to deformation theory, i.e. they deform in their Hodge locus inside the parameter space of deformations of the IHS manifold and keep covering a divisor. This has been one of the main properties used to prove existence results and, at the same time, one of the main limitations. Indeed, to produce a uniruled divisor in an ample linear system of a polarized IHS (X,H) one would try and exhibit such an example on a special point (X_0,H_0) in the same connected component of the corresponding moduli space. As proved in <cit.> in some cases it is impossible to do it with primitive rational curves.
On the other hand in <cit.> this approach was successfully implemented to prove that outside at most a finite numberof connected components (precisely those not satisfying the necessary conditions given in <cit.>) of the moduli spacesof projective IHS manifolds ofK3^[n] or generalized Kummer type, for all the corresponding points (X,H) there exists a positive integer m such that the linear system |mH| contains a uniruled divisor covered by rational curves of primitive homology class Poincaré-dual to that of |H|. For a completely different proof (based on Gromov-Witten theory) of the existence of uniruled divisors covered by primitive rational curves on deformations of K3^[n] see <cit.>. Due to the cases left out by <cit.>, respectively <cit.>, one could reasonably wonder whether uniruled divisors on such manifolds do always exist.More recently Chen-Gounelas-Liedtke introduced in <cit.> a new viewpoint to prove existence results for rational curves on projective K3 surfaces: regeneration, a process opposite to specialization. In this article we show that,for projective irreducible holomorphic symplectic manifolds, an analogous regenerationprinciple holds for uniruled divisors and provides a new and flexible tool to prove existence results. Combining this new viewpoint with results from <cit.>we are able to improve significantly the available results, in some cases passing from no known existence result at all to density of uniruled divisors in the classical topology. To state our results we start with the following. Let → B be a family of IHS manifolds over a connected base. Let 0∈ B and let X_0 be the corresponding fibre. Let D_0⊂ X_0 be an integral uniruled divisor. A regeneration ⊂ of D_0 is a flat family of uniruled and generically integral divisors → B such that D_0 is a component of the fiber _0 ofover 0.A reducible divisor is called uniruled if all of its components are. Let X be a projective IHS manifold. There exists a constant d≥ 0 such thatall primitive ample curve classes [C]∈ H_2(X,) satisfying q(C)> d have a representative R∈ [C] such that R rules a prime divisor of class proportional to [C]^∨.Here, [C]^∨ denotes the divisor [D]∈(X)⊗ such that C· E=q(D,E) for all divisors E, where q is the Beauville-Bogomolov-Fujiki formon X. A curve is said ample if its dual divisor is ample. Analogously, we define the curve dual to a divisor.The above hypothesis, which may look slightly unnatural, is the higher dimension analogue of <cit.> and, as we will see below, can be shown to holdfor IHS manifolds of K3^[n] and generalized Kummer type, thanks to previous work done in <cit.>.Our main novel contribution is the following result which, despite the simplicity of its proof, seems to provide the right viewpoint to tackle these kind of questions. Let → B be a family of projective IHS manifolds with a central fibre _0 satisfying hypothesis <ref>.Let D_0⊂_0 be an integral uniruled divisor on the central fibre. Then D_0 admits a regeneration.The regeneration principle works perfectly on IHS manifold of K3^[n] or generalized Kummer type. Any integral uniruled divisor in a fiber of any family of projective IHS manifolds of K3^[n] or generalized Kummer type admits a regeneration.Our first application is to show existence of ample uniruled divisors also for the connected components of the moduli spaces left out by <cit.>. 
Let (X,H) be a polarized IHS manifold of K3^[n] or generalized Kummer type, then there exists m∈ and a uniruled divisor in |mH|.In particular the applications to zero-cycles pointed out in <cit.> now hold for all polarized IHS manifolds of K3^[n] or generalized Kummer type. At the very general point in the K3^[n]-case we can drastically improve Theorem <ref>. Letbe an irreducible component of the moduli space of polarized IHS manifolds of K3^[n]-type. Then any polarized IHS manifoldX outside a possibly countable union of subvarieties ofverifies the following: any pair of points x_1,x_2∈ X can be arbitrarily approximated by a chain of at most 2n irreducible rational curves, each of which deforms in a family covering a divisor.The above result can be seen as an effective non-hyperbolicity statement. The study of non-hyperbolicity of IHS manifolds dates back to Campana <cit.>, with more recent important contributions by Verbitsky <cit.> and Kamenova-Lu-Verbitsky <cit.>. We refer the interested reader to <cit.> for a thorough discussion and a complete list of references. We can also show the following less strong but more precise result, which was previously known only in dimension 2 by <cit.>.Let X be a projective IHS manifold of K3^[n] or Kummer type such that Bir(X) is infinite. Then X has infinitely many uniruled divisors. We hope that this new viewpoint via regenerations could also lead to progress towards the existence of higher codimension algebraically coisotropic subvarieties. Acknowledgements. We thank Claire Voisin for suggesting to apply the regeneration principle to non-hyperbolicity questions and G. Ancona, Ch. Lehn and K. O'Grady for useful comments on a preliminary version.G.M. was supported by PRIN2020 research grant ”2020KKWT53”, by PRIN2022 research grant "2022PEKYBJ" and is a member of the INDAM-GNSAGA. G.P. was supported by the CNRS International Emerging Actions (IEA) project “Birational and arithmetic aspects of orbifolds”.§ REGENERATIONSWe can suppose that _0 has Picard rank at least two and that D_0 is not proportional to the polarization, otherwise by <cit.>, we can deform a curve ruling D_0 over all of B, and obtain in this way a regeneration of D_0. Let C_0 be the class of a minimal curve ruling D_0. Let ∈() be a relative polarization and H_0 its restriction to the central fibre _0. Let H_0^∨ be the (ample) class of a curve dual to H_0. We can choose m∈ big enough so that mH_0^∨-C_0 is ample, primitive and of square bigger than d_0. Therefore, by Hypothesis <ref>, we have a rational curve R_0∈ [mH_0^∨-C_0] which rules an ample divisor F_0 inside _0.As the divisor F_0 is ample we have C_0· F_0>0. Hence, we can fix a point in C_0∩ F_0 and pick a curve R_0 in the ruling of F_0 passing through this point. Notice that C_0 cannot coincide with the ruling of F_0, as C_0 and R_0 are not proportional (because the divisors they rule are not). In this way we obtain a connected rational curve of class [C_0+R_0]. By abuse of notation, we denote this curve by C_0+R_0. By <cit.>, which generalizes <cit.> to the reducible case, the curve C_0+R_0 deforms in its Hodge locus Hdg_[C_0+R_0] of the class [C_0+R_0]=[mH_0^∨] and keeps ruling a divisor on each point of Hdg_[C_0+R_0]. By construction, this Hodge locuscoincides with B, as C_0+R_0 is a multiple of H_0^∨, and the result follows. The following can be seen as a concentration of some of the main contributions of <cit.>, namely the study of the monodromy orbits, constructions of examples and deformation theory. 
Hypothesis <ref> holds for any family of manifolds of K3^[n] and Kummer type, and the constant d_0 is (2n-2)^2(n-1) and (2n+2)^2(n+1) respectively. Let (S,h_S) be a polarized K3 of genus p and (A,h_A) a polarized abelian surface of type (1,p-1). We denote by r_n the class of an exceptional rational curve which is the general fiber of the Hilbert-Chow morphism S^[n]→ S^(n) (resp. K_n(A)⊂ A^[n+1]→ A^(n+1)) and by h_S∈ H_2(S^[n], ℤ) (resp. h_A∈ H_2(K_n(A), ℤ)) the image of the class h_S∈ H_2(S, ℤ) (resp. h_A∈ H_2(A, ℤ)) under the inclusion H_2(S, ℤ)↪ H_2(S^[n], ℤ) (resp. H_2(A, ℤ)↪ H_2(K_n(A), ℤ)). Recall that q(h_S)=2p-2=q(h_A) and q(r_n) equals 1/(2n-2) in the K3^[n] case and 1/(2n+2) in the Kummer case. We take a primitive ample curve class C∈ H_2(X,ℤ) such that q(C)>n-1 (resp. n+1 for Kummer type). By <cit.> and <cit.>, the pair (X,C) is deformation equivalent to the pair (S^[n],h_S-2gr_n) with 2g≤ n-1 or (S^[n],h_S-(2g-1)r_n) with 2g≤ n (resp. (K_n(A),h_A-2gr_n) or (K_n(A),h_A-(2g-1)r_n) with 2g≤ n-1). If p≤ g, we would get a contradiction, since

n-1 ≤ q(C) = q(h_S) - 4g^2 · 1/(2(n-1)) = 2(p-1) - 4g^2/(2(n-1)) ≤ 2(g-1) - 4g^2/(2(n-1)) ≤ n-2.

Therefore, p≥ g and by <cit.> and <cit.>, the curves we obtain in S^[n] (resp. in K_n(A)) have a rational representative which covers a divisor by <cit.> and <cit.>. Such a divisor then deforms in its Hodge locus by <cit.>, and the proposition follows. The result follows immediately from the combination of Proposition <ref> and the Regeneration principle <ref>.

§ APPLICATIONS

In this section we provide the proofs of the applications of the Regeneration principle to IHS manifolds of K3^[n]-type or generalized Kummer type. Again the result follows from the combination of Proposition <ref> and the Regeneration principle <ref>. Indeed, suppose that (X,H) is a polarized IHS manifold of K3^[n]-type and let us consider a connected component of the moduli space of polarized IHS manifolds containing (X,H). By <cit.>, there exists a point in this component which parametrizes the Hilbert scheme over a very general projective K3 (S,H_S). Let us choose any rational curve C in S, whose existence is guaranteed by Bogomolov-Mumford <cit.>, see also <cit.>, and let us consider the uniruled divisor D_C={Z∈ S^[n] such that supp(Z)∩ C≠∅}. We then apply the Regeneration principle <ref> to D_C, and obtain a regeneration of it on all IHS manifolds corresponding to points of this component. As the very general element of the component has Picard rank one, the class of this regeneration is proportional to this unique class, hence our regeneration has class mH on X, for some m. For the generalized Kummer type we proceed the same way, by using <cit.> and <cit.> instead of the analogous results in the K3^[n]-type case. More generally, we have the following result. Let (X,H) be a projective IHS manifold of K3^[n] or Kummer type, and let D∈Pic(X) be a divisor with q(D)≥ 0 and (D,H)>0. Then there exists a uniruled divisor in |mD| for some m∈ℕ. The proof is analogous to Theorem <ref>, with an extension to the case of square zero classes. If D has positive square, instead of the moduli space of polarized IHS manifolds we consider the moduli space of lattice polarized IHS manifolds such that Pic(X) contains a divisor of square q(D), and pick the connected component containing (X,D). Let us choose a parallel transport operator γ on this moduli space such that γ(X) has Picard rank 1. Therefore, γ(D) is ample on γ(X). By Theorem <ref>, a multiple of γ(D) is uniruled by a rational curve γ(C), which has class proportional to γ(D)^∨.
Therefore by <cit.>, γ(C) deforms in its Hodge locus, which by construction contains (X,D) and we obtain a rational curve C covering a multiple of D. If q(D)=0, we can suppose that D is nef by <cit.>, otherwise we follow the same reasoning as above to reduce to the nef case. As X is projective, we have an ample divisor H∈(X). Let L be the saturated lattice generated by D and H, and let us consider the componentof the moduli space of L lattice polarized IHS manifolds containing (X,L). Inside of , by <cit.> we can pick a point γ(X) such that γ(D) stays nef and there exists a prime exceptional divisor E on γ(X) such that q(γ(D),E)>0[By the above cited theorem, the locus where a given extra class is algebraic is dense in ℳ, and the locus where this class E has a fixed intersection with γ(D) is a proper Zariski closed subset of ℳ, therefore the locus where the intersection is positive is non-empty.]. Let R be a curve ruling E. As γ(D) is nef, there exists an m∈ such that mγ(D)^∨-R is an ample curve. Therefore, by Proposition <ref>, we produce a rational curve C of class mγ(D)^∨-R which rules an ample divisor, and attach to it a rational tail R, so that the connected curve C+R of class mγ(D)^∨ rules a divisor and deforms in its Hodge locus by <cit.>. By construction, this Hodge locus contains (X,D^∨), and the result follows. To prove Theorem <ref>, we will use the following result of Chen and Lewis on K3 surfaces. Let _g be the moduli space of polarized genus g K3 surfaces, and let _g be the universal surface over _g. Let _g,n be the scheme of relative dimension onewhose fibre over a point (S,L)∈_g consists of all irreducible rational curves contained in |nL|. Recall the following result. The set ∪_n∈_g,n is dense in the strong topology inside _g, for all g≥ 2.From this one easily obtains the following.Let S be a general projective K3 surface. Then any pair of points on S^[n] can be arbitrarily approximated by a chain of at most 2n rational curves, each of which deforms in a family covering a divisor. Without loss of generality, we can suppose that the two points ξ_i,i∈{1,2} correspond to reduced subschemes, and that (ξ_1)∩(ξ_2)=∅ otherwise we can takearbitrarily close approximations by reduced subschemes with such property. Therefore we write ξ_i=p^i_1+… p^i_n,with p^i_1,…, p^i_n distinct points on S for i=1,2. By Theorem <ref>, we have two ample irreducible curves R^1_1, R^2_1 arbitrarily near p_1^1 and p_1^2 respectively. As these curves are ample, the rational curve R_1=R^1_1∪ R^2_1 is connected. Let us consider the rational curve R_1+p^1_2+… p^1_n inside S^[n]: this can be used to approximate the subschemes p^1_1+p^1_2+… p^1_n and p^2_1+p^1_2+… p^1_n. Iterating the argument, one obtains a rational curve (union of two irreducible ample curves) R_j for all j∈{1,… n} which approximates the two points p^1_j and p^2_j. Considering the curve p^2_1+… +p^2_j-1+R_j+p^1_j+1+… + p^1_n one can approximate the points p^2_1+… +p^2_j-1+p^1_j+p^1_j+1+… + p^1_n and p^2_1+… +p^2_j-1+p^2_j+p^1_j+1+… + p^1_n. Therefore, by taking the union of these curves we obtain a chain of 2n rational irreducible curves which approximate the two points ξ_1 and ξ_2. By construction, each of these rational curves C deforms in a family which covers the divisor {Z∈ S^[n], such that (Z)∩ C≠∅} and the corollary follows. Let X be a very general IHS manifold in . Let x_1,x_2 ∈ X be two points on it. 
Thanks to <cit.> we can pick a point inwhich parametrizes the punctual Hilbert scheme of a very general projective K3 (S,H)arbitrarily close to X and two points ξ_1, ξ_2∈ S^[n] approximating x_1 and x_2 respectively. We take the chain R of 2n rational curves approximating ξ_1 and ξ_2 given by Corollary <ref>. We can now apply the Regeneration principle <ref> to regenerate the union of the divisors ruled by the deformations of the irreducible components of R to obtain a chain of rational curves on X satisfying the statement.Actually, using <cit.> and the Regeneration principle, a simpler version of the proof above yields the existence of infinitely many uniruled divisors for the very general point of any family → B of projective IHS manifolds such that one of the fibres is the Hilbert scheme over a K3 of odd Picard rank.To prove the theorem we willshow the existence of an ample uniruled divisor with infinite (X)-orbit. By <cit.>, as (X) is infinite, there exists an element g∈(X) of infinite order. Let D be an ample uniruled divisor, whose existence is granted by Theorem <ref>. We claim that the orbit of D via g is infinite, as otherwise a multiple of g would give an isometry of the lattice D^⊥⊂(X).The latter is negative definite as D is ample and has therefore finite isometry group. Hence g would act with finite order on both D and D^⊥, which is absurdand the claim follows. We recall now the following well-known result for the reader's convenience. This tells us that Theorem <ref> yields its conclusion only for a codimension at least one locus in the moduli space of projective IHS manifolds. Let X be a projective IHS manifold with ρ(X)=1. Then (X)=(X) and it is a finite group. First of all recall that a birational map between two IHS manifolds sending an ample class into an ample class can be extended to an isomorphism. As such, when ρ(X)=1, we have (X)=(X). By <cit.>the group of automorphisms of a compact Kähler manifold that fix a Kähler class has only finitely many connected components. On the other hand the group of automorphisms of an IHS manifold X is discrete, since h^0(X,T_X)=h^0(X,Ω^1_X)=0. Hence (X) must be finite. alpha
http://arxiv.org/abs/2310.18248v1
{ "authors": [ "Giovanni Mongardi", "Gianluca Pacienza" ], "categories": [ "math.AG", "14H45, 14J42" ], "primary_category": "math.AG", "published": "20231027163440", "title": "Regenerations and applications" }
1]Zhe Baicor1 2]Abdelilah Essiari 2]Talita Percianocor1 2,3,4]Kristofer E. Bouchard [1]Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory [2]Scientific Data Division, Lawrence Berkeley National Laboratory [3]Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory [4]Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley[cor1]Corresponding authors: All correspondence should be sent to both [email protected] and [email protected]. The processing and analysis of computed tomography (CT) imaging is important for both basic scientific development and clinical applications. In AutoCT, we provide a comprehensive pipeline that integrates an end-to-end automatic preprocessing, registration, segmentation, and quantitative analysis of 3D CT scans.The engineered pipeline enables atlas-based CT segmentation and quantification leveraging diffeomorphic transformations through efficient forward and inverse mappings. The extracted localized features from the deformation field allow for downstream statistical learning that may facilitate medical diagnostics. On a lightweight and portable software platform, AutoCT provides a new toolkit for the CT imaging community to underpin the deployment of artificial intelligence-driven applications.Computed tomography image registration diffeomorphic mapping image segmentation quantitative analysis § MOTIVATION AND SIGNIFICANCEComputed tomography (CT) stands as one of the most prevalent medical imaging modality in the world. Nevertheless, interpreting and analyzing CT data demands substantial professional expertise and efforts, posing challenges for acute diagnoses and prognoses.In contrast to MRI, CT images typically exhibit lower spatial resolution and contrast levels accompanied by systematic noise. This combination presents a formidable challenge for even proficient radiologists tasked with analyzing localized diseases or lesions in specific regions in limited time. The registration and segmentation of CT images constitute essential processes in medical image analysis <cit.>. Registration enables the integration of features across multiple scans, while segmentation delineates structures of interest, allowing for quantitative analysis. Current image registration algorithms utilize deformable mappings based on elastic, fluid-based deformations that improve upon linear or rigid transformations. The Demons <cit.> algorithm pioneered diffeomorphic deformations, effectively preserving anatomical structures. Nonlinear registration algorithms, such as those provided by Advanced Normalization Tools (ANTs) <cit.>, deliver robust and adaptable solutions for complex anatomical variations.Recently, deep learning approaches especially convolutional neural networks (CNNs) <cit.>, have demonstrated success in deformable image registration, while hybrid models <cit.> that integrate CNNs with classical methods, aim to balance precision with computational efficiency.Here, we introduce the AutoCT software, which provides an integrated and automated pipeline for individual 3D CT scans that enables robust and efficient image analysis. This pipeline potentially reduces the human cost required for medical diagnosis and prognosis of traditional approaches. By streamlining preprocessing tasks, AutoCT enables medical professionals to devote more time to interpreting results and making informed clinical decisions. 
Moreover, the software automates the identification and analysis of anatomical structures, potentially enabling precise localization of abnormalities.§ SOFTWARE DESCRIPTIONAutoCT is written in Python 3.7, making use of few external packages (principally dcm2niix <cit.>,  FSL <cit.>,  ANTs <cit.>).This end-to-end pipeline integrates automatic conversion, preprocessing, bone striping, registration, segmentation, and quantitative analysis for 3D CT scans.§.§ Software architectureThe overall architecture, as shown in Figure <ref> enables a user to input original DICOM images, which undergo a comprehensive pipeline including file conversion, preprocessing, bone stripping, registration, and segmentation, and outputs quantitative information based on a user-defined anatomical template and atlas. Therefore, users have the capability to acquire extracted features, including warp (i.e., deformation) statistics with respect to the template and segmented geometric measurements from each individual region of interest in the anatomical structure for subsequent analysis. §.§ Software functionalitiesFigure <ref> summarizes the workflow of AutoCT connecting seven modules. First, the Conversion module takes in each subject's raw CT scans as DICOM images and converts them to a 3D image volume in NIfTI format. The Preprocessing module takes in NIfTI images, standardizes the image orientation, resamples the volume to a standardized voxel size, performs bias correction <cit.>, and then pre-aligns the 3D volume to a canonical, standardized space, such as MNI <cit.> coordinates. Following preprocessing, the BoneStrip module extracts the soft tissue of interest from surrounding bone using binary mask created through successive procedures of intensity thresholding, hole filling and Gaussian smoothing <cit.>. These bone-stripped CT scans are then registered to a user-defined template in the Registration module. The Registration module deploys a diffeomorphic mapping built on Advanced Normalization Tools <cit.> and performs a smooth and invertible transformation between the CT volumes and the reference. Using a diffeomorphism ensures a point in the physical domain is mapped to a corresponding point in the standardized or normalized domain, and the mapping is characterized by its differentiable and invertable properties<cit.>. This process enables complex spatial deformations while preserving the volumes topology. Figure <ref> delineates the joint image registration and segmentation process. By applying the inverse diffeomorphic mapping operator (obtained from the registration stage) to the desired anatomical atlas, AutoCT parcellates the bone-stripped CT volumes in the normalized space, and then uses the inverse affine transformation to realize the segmentation in the physical space in the Segmentation module. Finally, the deformation field is used to generate statistical information for the nonlinear transformation, including the mean, standard deviation and entropy of the Jacobian of the mapping <cit.> for the warped image and is exported to the WarpStats module. In parallel, the GeoMeasures module extracts geometric measurements for each segment, for example the volume and surface area for characterization in both the physical and normalized spaces.This scientific software has been packaged and tested using Docker container technology. The AutoCT docker image contains all the required dependencies and libraries needed to run the workflow. 
Users can easily reproduce the results of the pre-packaged illustration workflow, and execute customized workflows using their own data and share the results with others using the docker archiving facilities.The AutoCT docker container provides the option to run the workflow using the command line or the interactive python-based Jupyter notebooks. Additionally, an interactive graphical interface, based on Jupyter Widgets, is available to assist users in executing the different stages of the workflow.Finally, the docker image is used as part of continuous integration in the development process. When new modifications to AutoCT codes are checked into the version control system, a docker image will be automatically built and a container will be launched to run the pipeline and compare to expected results for evaluation. § ILLUSTRATIVE EXAMPLES In this section, we provide an illustrative example of the AutoCT application by running it on a publicly available source of CT scans <cit.>. The configuration options for the input files are shown in Figure <ref>(a). For this illustration, we use a standard MNI Template of T1-weighted MRI <cit.> as the template for pre-alignment, and a combined Harvard-Oxford cortical and subcortical structural atlases <cit.> for image segmentation. Figure <ref>(c-f) present the original, skull-stripped, and warped image after registration in the MNI space as well as the segmented CT volume in the subject's physical space. The output data shown in Figure <ref>(b) highlights the geometric measures of 115 regions covering cortical and subcortical areas of the brain, where each label represents one region obtained from the segmentation module. This example is illustrated in a Jupyter notebook within the open-source repository for reproducibility. § IMPACTAutoCT not only expedites the processing of CT scans of complex tissues, it also enhances the localized analysis accuracy and reliability of quantification and potential diagnostics. The user-friendly setup allows one to specify a template or atlas tailored to a particular input, such as brain or lung CT scans. The output extracted features can further be studied and connected to research in simulation or clinical results for predictive model design. The Docker containerization approach with a user-friendly GUI enables a diverse user group to improve reproducibility and collaborative development. Therefore, this portable software may empower radiology and medical imaging research and facilitate practitioners to deliver more efficient and rigorous CT image analysis. § CONCLUSIONThis paper introduces AutoCT, an open-sourced, Docker image-based software package designed for the processing, registration, segmentation and analysis of CT scans. This automated pipeline represents a significant advancement in the analysis of CT volumes that may enhance research and medical capabilities. Notably, AutoCT features a modular and flexible architecture, making it suitable for diverse applications in domains ranging from imaging science to medical research. Future developments will expand the software's functionality to incorporate additional insights gleaned from clinical studies, enabling comprehensive assessments of CT scans across various modalities. Additionally, sensitivity analysis of the extracted quantitative features will be performed to generate data-driven predictive models for diagnosis and prognosis, further enhancing its effectiveness in the realms of medical imaging research. 
§ DECLARATION OF COMPETING INTERESTThe authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.§ ACKNOWLEDGEMENTSWe would like to thank Esther Yuh, Geoffrey Manley, and Wibe de Jong for valuable discussions.We acknowledge support by the U.S. Department of Energy, Office of Science, under Award Number DE-AC02-05CH11231. ZB gratefully acknowledges support from the U.S. Department of Energy, Office of Science, SciDAC/Advanced Scientific Computing Research. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract Number DE-AC02-05CH11231.elsarticle-num§ REQUIRED METADATA§ CURRENT CODE VERSION
http://arxiv.org/abs/2310.17780v1
{ "authors": [ "Zhe Bai", "Abdelilah Essiari", "Talita Perciano", "Kristofer E. Bouchard" ], "categories": [ "eess.IV", "cs.CV" ], "primary_category": "eess.IV", "published": "20231026210947", "title": "AutoCT: Automated CT registration, segmentation, and quantification" }
Erlangen Centre for Astroparticle Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Nikolaus-Fiebiger-Straße 2, 91058 Erlangen, [email protected] The search for Galactic PeVatrons - astrophysical accelerators of cosmic rays to PeV energies - has entered a new phase in recent years with the discovery of the first Ultra-High-Energy (UHE, E>100 TeV) gamma-ray sources by the HAWC and LHAASO experiments. Establishing whether the emission is leptonic or hadronic in nature, however, requires multiwavelength data and modelling studies. Among the currently known UHE sources, LHAASO J2108+5157 is an enigmatic source without clear association to a plausible accelerator, yet spatially coincident with molecular clouds.We investigate the scenario of a molecular cloud illuminated by cosmic rays accelerated in a nearby supernova remnant (SNR) as an explanation for LHAASO J2108+5157. We aim to constrain the required properties of the SNR as well as which of the clouds identified in the vicinity is the most likely association.We use a model for cosmic ray acceleration in SNRs, their transport through the interstellar medium and subsequent interaction with molecular material, to predict the corresponding gamma-ray emission. The parameter space of SNR properties is explored to find the most plausible parameter combination that can account for the gamma-ray spectrum of LHAASO J2108+5157. In the case that a SNR is illuminating the cloud, we find that it must be young (<10 kyr) and located within 40-60 pc of the cloud. A SN scenario with a low Sedov time is preferred, with amaximum proton energy of 3 PeV assumed. No SNRs matching these properties are currently known, although an as yet undetected SNR remains feasible. The galactic CR sea is insufficient to solely account for the observed flux, such that a PeVatron accelerator must be present in the vicinity.LHAASO J2108+5157 as a Molecular Cloud Illuminated by a Supernova Remnant A.M.W. Mitchell 1 https://orcid.org/0000-0003-3631-5648Received –; accepted – ================================================================================ § INTRODUCTIONCosmic Rays (CRs) are energetic particles originating from astrophysical accelerators and continuously arriving at Earth. The all-particle CR spectrum exhibits a spectral softening at ∼ 1-3 PeV, known as the `knee', generally understood to indicate the start of the transition from galactic to extragalactic accelerators being responsible for the bulk of CRs <cit.>.Astrophysical sources capable of accelerating particles to PeV energies are known colloquially as `PeVatrons'. Gamma rays produced as a consequence of particle interactions at the source typically have energies a factor ∼ 1/10 that of the parent particle population <cit.>.Hence, the detection of gamma rays with E>100 TeV indicates the presence of particles with PeV energies, corresponding to the CR `knee'. Definitive evidence forthe presence of PeVatrons in our galaxy has, however, proven elusive.Although diffusive shock acceleration of CRs at supernova remnants (SNRs) can account for the energy budget of CRs in our galaxy, their gamma-ray spectra cut-off at energies below 100 TeV <cit.>.This indicates that active acceleration of particles to PeV energies is not occurring at these SNRs, although the detection of the characteristic `pion-bump' signature of neutral pion decay in several SNRs indicates that the emission is hadronic in origin <cit.>. 
Indications for PeVatron activity from the galactic centre were found <cit.>, yet only in recent years have experimental facilities been capable of measuring gamma-rays with energies >100 TeV. Water Cherenkov and particle detector based facilities in particular, such as HAWC <cit.>, LHAASO <cit.> and Tibet-ASγ <cit.> have contributed significantly to this advance. Until 2023, the ultra-high-energy (UHE, >100 TeV) gamma-ray sky comprised a mere ∼15 sources, with the Crab nebula one of the first identified <cit.>. A further 31 UHE sources were announced in the first LHAASO catalogue <cit.>.Twelve UHE sources were reported by LHAASO in 2021 <cit.>, the majority of which are spatially coincident with known Very-High-Energy (VHE, ≳ 100 TeV) gamma-ray sources. In particular, several associations with energetic pulsar wind nebulae from which the emission is understood to be dominantly leptonic in origin.Despite Klein-Nishina suppression of inverse Compton scattering at the highest energies, this suppression is relaxed in the case of high radiation field environments, and a leptonic scenario remains a viable interpretation for the UHE sources associated with known energetic pulsars <cit.>. There is, however, one source reported in <cit.>, for which the gamma-ray emission was first discovered at UHE and without any known counterpart accelerators, such as pulsars or supernova remnants (SNRs).LHAASO J2108+5157 is an enigmatic source, spatially coincident with molecular clouds yet with the accelerator mechanism remaining unidentified <cit.>.In the wake of the LHAASO discovery, follow-up observations were conducted by several facilities, including in the radio and X-ray bands as well as by gamma-ray experiments. The Fermi-LAT source 4FGL J2108.0+5155 is spatially coincident with the UHE emission, but due to the differing spectral properties a physical association remains unclear <cit.>.A re-analysis of the Fermi-LAT data found a potential spatial extension of 0.48^∘ angular size of the source, designated 4FGL J2108.0+5155e <cit.>. A 3.7σ signal of gamma-ray emission was measured at E>3 TeV by the Large Sized Telescope (LST-1), a prototype telescope for the forthcoming Cherenkov Telescope Array (CTA) <cit.>. They derive upper limits in the energy range 0.32 TeV to 100 TeV that considerably constrain model scenarios for the origin of the emission <cit.>.The HAWC observatory recently reported a ∼7 σ detection in ∼2400 days of data <cit.>. However, observations and analysis by the VERITAS IACT array did not result in a detection, with constraining upper limits being reported, consistent with those from the LST-1 <cit.>. Although there is little observational evidence for SNRs currently acting as PeVatrons, it remains feasible that SNRs act as PeVatrons only for a comparatively short period during their lifetimes, such that the rate of currently detectable SNR PeVatrons is low <cit.>.Particle escape from the shock region occurs in an energy-dependent manner, such that the most energetic particles will also be the first to leave the shock region <cit.>.Evidence for PeV particles may therefore be found not at the location of the accelerator, but rather from subsequent interactions of these particles with target material in the ambient medium, such as nearby molecular clouds <cit.>.This scenario has been proposed as a possible explanation for the UHE emission from LHAASO J2108+5157 <cit.>. 
In contrast to previous models for LHAASO J2108+5157, in this study we scan the parameter space in two free variables, namely SNR age and the distance between the cloud and the SNR, to determine the range of plausible values for the required properties of the responsible SNR. We investigate the influence of uncertainties in the cloud properties on the resulting gamma-ray flux for the best-matched models. The corresponding expected neutrino flux is estimated, and the plausibility of the best-matched models is discussed. § METHODWe adopt the model of <cit.>, based on <cit.> and <cit.>, to investigate the scenario of a SNR illuminating molecular clouds as a possible explanation for LHAASO J2108+5157. Whilst there are several clouds identified in the vicinity, we focus on clouds that are spatially coincident with the γ-ray emission and located closest to the best-fit centroid of LHAASO J2108+5157 at (l=92.2148^∘,b=2.9359^∘). Cloud 4607 from the <cit.> catalogue based on data from the ^12CO survey of <cit.> has been considered in previous models of the region <cit.>, whilst recently a newly identified cloud has been detected in the region <cit.>. Adopting the convention of prior works, we henceforth refer to these two clouds as MML[2017]4607 and FKT[2022] respectively.Table <ref> summarises the key physical properties of the clouds relevant for this study.For convenience, we summarise here the key features of the model from <cit.> adopted for this work.– Protons are accelerated impulsively with a power-law spectrum of slope α.– The particle probability density function f(E,r',t') is taken from equation (3) of <cit.>, and is a function of the particle energy E, distance travelled from the SNR r' and time since escape from the SNR t'.– The SNR radius, R_ SNR, expands with time (t) adiabatically during the Sedov-Taylor phase as R_ SNR∝ t^2/5 <cit.>.– Particles escape from the SNR at a time t_ esc in a momentum-dependent manner, following t_ esc∝ (p/p_M)^1/β where p_M is the maximum particle energy reached, assumed to be 3 PeV/c at the Sedov time, t_ sed <cit.>. – Particles are either transported diffusively through the ISM to reach the cloud or are injected directly into the cloud if the SNR is sufficiently expanded. – Diffusion within the intervening ISM is assumed to be slow with respect to the Galactic average value due to the local accelerator activity <cit.>.– Within the cloud, diffusion is suppressed with respect to the ISM by a factor χ that relates to local turbulence. For our default scenario, the Sedov time is assumed to commence at 1.6 kyr corresponding to the case of type II supernovae. In the case of type IA supernovae, the Sedov time commences at a mere 234 yr, which is considered as an alternative scenario <cit.>. Details of the SNR forward shock interaction with the cloud, in the case that the SNR is in close proximity and sufficiently evolved, are neglected.The diffusion coefficient D(E) is considered to have a power-law dependence on the energy E as: D(E) = χ D_0 (E /GeV/B(n)/3μ G)^δ , where δ and χ relate to the properties of the magnetic field turbulence in the region, and B(n) describes the dependence of the magnetic field strength B on the cloud density n (see <cit.>).Values adopted for δ, χ and the diffusion coefficient normalisation D_0 at 1 GeV are listed in table <ref>. 
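As a concrete illustration of these scalings (not code from this work), the diffusion coefficient, Sedov-Taylor radius and energy-dependent escape time can be written as follows. The normalisation D_0, the radius at the Sedov time and the escape index β are placeholders, whereas δ=0.5, t_sed=1.6 kyr and the 3 PeV maximum energy follow values quoted in the text; the sign of the escape-time exponent is chosen so that the highest-energy particles leave the SNR first.

import numpy as np

D0 = 3e27      # cm^2 s^-1 at 1 GeV; placeholder normalisation
delta = 0.5    # energy dependence of D(E), as quoted in the text

def diffusion_coefficient(E_GeV, chi, B_uG):
    # D(E) = chi * D0 * (E[GeV] / (B / 3 microgauss))**delta
    return chi * D0 * (E_GeV / (B_uG / 3.0)) ** delta

def snr_radius_pc(t_kyr, t_sed_kyr=1.6, R_sed_pc=5.0):
    # Adiabatic Sedov-Taylor expansion, R_SNR proportional to t**(2/5);
    # R_sed_pc is an assumed radius at the Sedov time.
    return R_sed_pc * (t_kyr / t_sed_kyr) ** 0.4

def escape_time_kyr(E_GeV, t_sed_kyr=1.6, E_max_GeV=3.0e6, beta=2.5):
    # Momentum-dependent escape: 3 PeV particles escape at t_sed, lower
    # energies progressively later (beta is a placeholder value).
    return t_sed_kyr * (E_GeV / E_max_GeV) ** (-1.0 / beta)

print(diffusion_coefficient(1e3, chi=1.0, B_uG=3.0), escape_time_kyr(1e3))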
Within the ISM, χ is taken to be 1, whilst a value of 0.1 is adopted to account for suppressed diffusion within the clouds.From the above ingredients, the particle spectrum as a function of energy E, age t and distance from the accelerator r is obtained, f(E,r',t').Experimental measurements are, however, bound to neutral messengers such as γ-rays and neutrinos as the signatures for the presence of energetic hadronic particles. For comparison to data, the proton spectrum can then be converted into a gamma-ray emissivity Φ_γ(E_γ,r',t') (in ph cm^-3 s^-1 TeV^-1) by using the expressions from <cit.>: Φ_γ (E_γ,r',t') = cn ∫_E_γ^∞σ_inel(E)f(E,r',t')F_γ(E_γ/E,E)dE/E ,for which we adopt the parameterisation of the inelastic cross-section for proton-proton interactions σ_inel(E) from <cit.>, noting that due to high uncertainties below ∼ 100 GeV, we take this as an energy threshold and restrict our model predictions to energies >100 GeV only. Lastly, we obtain the γ-ray flux F(E_γ,t) at a distance d away from the cloud (i.e. at Earth) taking into account the volume of the molecular cloud traversed by particles V_c via: F(E_γ,t) = Φ_γ (E_γ,t) V_c / (4π d^2) . The diffusive galactic CR flux permeates the entire Galaxy, and as such will also contribute to the total particle flux interacting with the molecular clouds. To take this contribution into account, we include the proton flux as measured by the Alpha Magnetic Spectrometer on the International Space Station <cit.>. This flux is added to the particle flux arriving at the cloud, f in equation (<ref>), enabling the relative contributions of accelerator and the diffuse CR sea to be evaluated. <cit.> also provide expressions for the neutrino production via charged pion and muon decay via F_ν(E_ν/E,E) for the total production of electron and muon neutrinos from the same proton interactions. By analogy with equations (<ref>) and (<ref>) the corresponding total neutrino flux can be obtained.In the next section, we use this model to generate predictions for the gamma ray flux arising from a hypothetical SNR illuminating the molecular clouds identified in the vicinity of the LHAASO J2108+5157.Additionally, we consider the contribution from the galactic CR sea, to establish whether it is sufficient to account for the observed gamma-ray emission without requiring a nearby accelerator. The model is compared to measurements from LHAASO and HAWC, and upper limits from the LST-1 and VERITAS <cit.>. § RESULTS§.§ Scan over SNR parameter spaceAs the properties of the molecular clouds are known (table <ref>), we vary the properties of a hypothetical SNR to investigate the required values to account for the γ-ray flux of LHAASO J2108+5157. We assume that the SNR is located at the same distance from Earth as the cloud.The SNR age is varied in ten logarithmically spaced steps between 1 kyr and 500 kyr, for a fixed separation distance between the SNR and cloud of 24 pc. Similarly, the separation distance is independently varied in ten logarithmically spaced steps between 10 pc and 500 pc for a fixed SNR age of 4 kyr. For type Ia supernovae, the fixed reference values were reduced to 10 pc and 1 kyr. These curves are shown in figures <ref> and <ref> for type II and type Ia supernovae respectively. In the case of type II supernova remnants shown in figure <ref>, the predicted flux is comparable to the data for cloud FKT[2022], yet the flux predicted for MML[2017]4607 consistently falls below the measured flux. 
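For reference, the flux evaluation behind each of these model curves, i.e. equations (<ref>) and (<ref>) of the Method section, can be transcribed almost literally into a numerical sketch. The proton spectrum f(E), the inelastic cross-section and the photon kernel F_γ (for example the Kafexhiu et al. parameterisation cited above) are left as caller-supplied functions; only the structure of the integral is shown, with a finite upper limit as a numerical stand-in for infinity.

import numpy as np
from scipy.integrate import quad

C_CM_S = 2.998e10   # speed of light in cm s^-1

def emissivity(E_gamma_TeV, f, sigma_inel, F_gamma, n_cm3, E_max_TeV=3e3):
    # Phi_gamma(E_gamma) = c n * integral_{E_gamma}^{inf} sigma(E) f(E) F_gamma(E_gamma/E, E) dE/E
    # with f in protons cm^-3 TeV^-1 and sigma in cm^2; result in ph cm^-3 s^-1 TeV^-1.
    integrand = lambda E: sigma_inel(E) * f(E) * F_gamma(E_gamma_TeV / E, E) / E
    val, _ = quad(integrand, E_gamma_TeV, E_max_TeV, limit=200)
    return C_CM_S * n_cm3 * val

def flux_at_earth(phi_gamma, V_c_cm3, d_cm):
    # F(E_gamma) = Phi_gamma * V_c / (4 pi d^2): dilute the cloud emission over distance d
    return phi_gamma * V_c_cm3 / (4.0 * np.pi * d_cm ** 2)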
Younger ages are preferred, with the flux at energies ≲ 1 TeV becoming over-predicted between ∼7 kyr and 30 kyr for FKT[2022]. The key features of the model are that the highest energy particles escape the SNR at earlier times and are first to arrive at the cloud. The spectral energy distribution hence rises at the highest energies at earlier times (and for shorter distances). The particle distribution then cools as a function of age, with the peak shifting towards lower energies. In the case of type Ia supernova remnants shown in figure <ref>, a separation distance larger than 24 pc is required for FKT[2022] to avoid over-predicting the flux in the <10 TeV range. MML[2017]4607 is better able to account for the gamma-ray flux under the type Ia scenario, yet only for an optimum combination of low distance and young age. As t_sed is lower for the type Ia scenario, the spectral energy distribution is more highly populated at an earlier stage. §.§ Contribution from the galactic CR sea As described above, the contribution from diffusive galactic CRs is included in the model, assuming that the particle flux is comparable to that measured at Earth <cit.>. From the parameter scan, we find that the contribution from the nearby SNR dominates over that from the galactic CR sea in most cases. Indeed, the contribution from diffusive galactic CRs only exceeds that from the SNR if either the cloud-SNR distance is ≳200 pc (for young ≲ 10 kyr SNRs), or if the SNR is old, ≳ 400 kyr (for nearby ≲ 50 pc SNRs). In order to test whether the diffuse galactic CR sea could be solely responsible for the measured gamma-ray flux, the normalisation of the galactic flux contribution was varied, in the absence of considering any hypothetical SNR. To match the observed emission at TeV energies using the molecular clouds considered, the normalisation must be of order ∼ 10^3 higher than that measured at Earth. This enhancement is unlikely to be achieved without the presence of an accelerator nearby. Next, we consider all possible combinations of SNR age and separation distance within the aforementioned ranges. A chi-square evaluation of the model curve against the LHAASO data points only is used to establish which model curves provide the closest match to the data. Due to the large number of free parameters entering into the model, we do not perform a minimisation, as there will be multiple local minima in the parameter space able to account for the data. Rather, we aim to provide a plausible range of allowed values for the specific case of this model, with assumed fixed parameters as in table <ref>. §.§ Best-matched models §.§.§ Clouds MML[2017]4607 and FKT[2022] For each cloud, model curves corresponding to the two best matching combinations of SNR age and separation distance are shown in figure <ref>. Model curves for MML[2017]4607 were consistently below the data points for χ=0.1 within the cloud, as seen in figures <ref> and <ref>. To obtain parameter values within comparable agreement to the data as for FKT[2022], we neglected the suppressed diffusion within the cloud for MML[2017]4607 (and for this section only) by setting χ=1. This corresponds to the most optimistic case in which CRs can freely penetrate the cloud, although we note that δ was kept fixed to 0.5 and we did not investigate the effect of altering the energy dependence of the diffusion coefficient in equation (<ref>).
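The age-distance scan and the chi-square ranking described above can be organised as follows (a sketch, not the authors' code): ages and separations are stepped logarithmically over the quoted ranges and each model curve is scored against the LHAASO points, without any formal minimisation.

import numpy as np

def chi_square(model_flux, data_flux, data_err):
    return float(np.sum(((model_flux - data_flux) / data_err) ** 2))

def scan_snr_parameters(model_curve, data_E, data_flux, data_err,
                        ages_kyr=np.logspace(0.0, np.log10(500.0), 10),
                        distances_pc=np.logspace(1.0, np.log10(500.0), 10)):
    # model_curve(age, distance, E) should return the predicted flux at the
    # data energies for one parameter combination (e.g. built from the
    # transport and emissivity sketches above).
    results = []
    for t in ages_kyr:
        for d in distances_pc:
            results.append((chi_square(model_curve(t, d, data_E), data_flux, data_err), t, d))
    return sorted(results)   # best-matching (lowest chi-square) combinations first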
The best matching combinations are summarised in table <ref>. In general, the SNR age was found to have a stronger influence on the curve shape and hence quality of the match to LHAASO data than the separation distance. FKT[2022] yielded more parameter combinations with a lower χ^2 than MML[2017]4607, where model curves for the same age yet for smaller distances were essentially consistent. This is supported by figures <ref> and <ref> - for a fixed age, provided the distance is sufficiently small that CRs have had time to traverse the cloud, the gamma-ray flux remains constant with decreasing distance. (Equivalently, the gamma-ray flux drops with increasing distance.) Overall, the type Ia scenario (i.e., a lower t_sed) is preferred. One might ask whether a finer resolution of values covering the reasonable parameter space would lead to a model that better matched the data. Whilst this may be the case, we first consider the effect of propagating the uncertainties in the measured properties of the clouds (table <ref>) through the model. An upper bound to the flux is obtained by adopting the 1σ deviation d-σ_d and n+σ_n, whilst a lower bound is similarly obtained from the model evaluated with d+σ_d and n-σ_n, where we intrinsically assume that the uncertainties are Gaussian distributed. Increasing n will increase the target material and hence the flux as per equation (<ref>), whilst increasing d will decrease the flux as per equation (<ref>). For FKT[2022] uncertainties are reported in <cit.>, whilst for MML[2017]4607 uncertainties are not provided in the case that near and far estimates agree <cit.>. We therefore adopt a 20% uncertainty in d and n for MML[2017]4607 as a rough estimate, given that the true uncertainty and subsequent variation in the model is unknown. Resulting uncertainty bands corresponding to the parameter combinations reported in table <ref> are shown in figure <ref>.

Table: Combinations of SNR age, t, and separation distance, Δd, for the model curves that best match the LHAASO data, listed in ranked order. These curves are shown in figure <ref>.
Cloud          | t (kyr) | Δd (pc) | SN type | χ^2
MML[2017]4607  | 1       | 37      | Ia      | 5.1
FKT[2022]      | 4       | 37 *    | Ia      | 6.7
FKT[2022]      | 4       | 57      | Ia      | 9.2
FKT[2022]      | 4       | 57      | II      | 15.5
FKT[2022]      | 8       | 24 **   | II      | 17.0
MML[2017]4607  | 4       | 24 **   | II      | 24.4
MML[2017]4607  | 2       | 37      | II      | 25.0
MML[2017]4607  | 1       | 24      | Ia      | 28.2
* Model curves for the same SNR age yet with smaller distances provided a comparable fit to the LHAASO data, but severely overestimated the LST-1 upper limits and are hence not shown.
** Model curves for the same SNR age yet with distances of 10 pc and 15 pc were comparable to the 24 pc distance quoted.

Figure <ref> clearly illustrates that the uncertainty introduced to the model from experimental measurements (or the adopted 20% uncertainty) on the cloud properties leads to variation in the predicted flux comparable to that seen by varying the input age and distance of the parameter scan. Therefore, a more finely-grained exploration of the SNR parameter space is not well-motivated. §.§.§ Corresponding neutrino flux For two of the best matching models from table <ref>, we show the corresponding total neutrino flux in figure <ref>. For MML[2017]4607 this is for t=1 kyr and Δd=37 pc, whilst for FKT[2022] we show t=4 kyr and Δd=57 pc, both for the SN Ia case. Although Δd=37 pc yielded a lower χ^2 for FKT[2022] with respect to the LHAASO data, this curve is disfavoured as it exceeds the upper limits provided by LST-1 (upper dashed curve in figure <ref>).
These curves essentially scale with the γ-ray flux, yet still lie at least an order of magnitude in flux below the sensitivity of current neutrino experiments suited for the detection of astrophysical neutrinos, such as IceCube <cit.>. § DISCUSSIONLHAASO J2108+5157 is an intriguing UHE gamma-ray source with no known counterparts yet spatially coincident with molecular clouds. In this study, we investigate a scenario whereby the molecular cloud is illuminated by energetic protons accelerated at a SNR in the vicinity. By scanning the parameter space of SNR age and separation distance between the hypothetical SNR and the cloud, we obtain model predictions that can be compared to data, thereby constraining the most likely SNR properties. Consistently, we find that a comparatively young (<10 kyr) and nearby (d≲40-60 pc) SNR is required. There are currently no known SNRs matching this description. From the SNR catalogue <cit.>, the two closest SNRs are G094.0+01.0 and G093.7-00.2, at angular distances of more than 2.4^∘ from LHAASO J2108+5157. At the 3.28 kpc distance of MML[2017]4607 this corresponds to 140 pc and 190 pc separation from the cloud respectively, whilst at the 1.7 kpc distance of FKT[2022] the SNRs are situated 80 pc and 110 pc away from the cloud. Additionally, G094.0+1.0 has an estimated age of 25 kyr, far older than the SNR ages preferred by our model. We conclude that neither SNR is associated to LHAASO J2108+5157. Nevertheless, it remains plausible that there are further, as yet undiscovered SNRs located in the region. Recent results from the EMU/POSSUM survey, performed using the Australian Square Kilometer Array Pathfinder (ASKAP) observed a region of the galactic plane containing 7 known SNRs, and found 21 candidates, of which 13 were new discoveries <cit.>. This supports the notion that radio surveys to date may not be sufficiently sensitive to detect all SNRs within a given region. Several molecular clouds have been identified in the region, two from <cit.> based on <cit.> (MML[2017]4607 and MML[2017]2870) and most recently a new cloud FKT[2022] reported by <cit.>. Model parameters were explored for the two clouds spatially coincident with LHAASO J2108+5157, namely MML[2017]4607 and FKT[2022].Both type II and type Ia supernova explosion scenarios were considered, where the main difference is in the assumed time for transition to the Sedov-Taylor phase (t_ sed).Although a better match could be achieved under the type Ia scenario, we consider this unlikely. Type Ia supernoave occur in older systems where at least one member of a binary system has sufficiently evolved to become a white dwarf, generally corresponding to environments not rich in molecular material. Type II supernovae, however, occur in younger environments where an abundance of molecular material can be expected, similar to that observed in the vicinity of LHAASO J2108+5157. Hence, we rather interpret these results as indicating that an earlier transition into the Sedov-Taylor phase is preferred, which may reflect (e.g.) properties of the ambient medium rather than the nature of the progenitor <cit.>. In all model curves, the highest energy data point at ∼500 TeV could not be well matched with a maximum energy of the proton spectrum of 1 PeV. Therefore, throughout this study we assumed a maximum energy at the Sedov time of 3 PeV. For MML[2017]4607 to account for the data, we neglected an additional suppression within the cloud due to turbulence compared to the ISM (i.e. χ=1). 
With χ=0.1 within the cloud, MML[2017]4607 consistently under predicted the data in our model (figures <ref> and <ref>). Our model assumed locally suppressed diffusion compared to the Galactic average also in the intervening medium between the SNR and the cloud, a reasonable assumption for regions of active particle acceleration <cit.>. Suppressed diffusion and young SNR age as preferred model parameters is in agreement with the 4.5 kyr age obtained by <cit.>, although <cit.> suggests an older SNR age of 44 kyr, obtained with a different spectral index for the particle population. A young SNR may still be a comparatively weak producer of synchrotron emission, or could be of small size and remain embedded within (or obscured by) molecular clouds in the region.Given the angular size of the molecular clouds, a young SNR could be completely hidden behind the clouds along the line of sight. Using the relation R_ SNR∝ t^2/5 for evolution in the Sedov-Taylor phase, an SNR younger than 12 kyr for MML[2017]2870 and 19 kyr for FKT[2022] would be small enough to be obscured by the cloud. This is consistent with the preferred <10 kyr SNR age. Nonetheless, other scenarios for the origin of LHAASO J2108+5157 remain plausible. Young stellar clusters have been hypothesised as suitable Galactic PeVatrons, with particle acceleration occurring at the termination shock of the collective wind <cit.>. There are two known young stellar clusters nearby to LHAASO J2108+5157: although the distance to Kronberger 80 is known to be at least 4.8 kpc or larger <cit.>, disfavouring an association with the molecular clouds in the region, and the distance to Kronberger 82 remains unknown <cit.>. As such, a stellar cluster is a potential alternative accelerator, also capable of illuminating molecular clouds with CRs, but not well motivated in this region. Given the spatial correlation of LHAASO J2108+5157 with molecular clouds a leptonic scenario for the emission seems unlikely, nevertheless it has been demonstrated that powerful pulsar wind nebulae are capable of accelerating leptons to beyond 1 PeV and can account for UHE gamma rays, especially in high radiation field environments <cit.>. The lack of a pulsar counterpart, or of X-ray synchrotron emission that would indicate the presence of a pulsar wind nebula also in cases where the pulsed emission is mis-aligned, disfavours such a scenario. With the advent of current generation detectors such as LHAASO sensitive to UHE gamma rays, we may expect other enigmatic sources to emerge, corresponding to clouds illuminated by unknown accelerators. Other unidentified gamma-ray sources for which no known counterpart has been identified to date, such as LHAASO J0341+5258, may have a similar origin <cit.>. The first LHAASO catalogue reported no fewer than seven further new sources that seem to be “dark” in nature, without any known counterparts <cit.>. Undoubtedly, further follow-up studies, both in terms of observation and interpretation, are necessary to determine the origin of these enigmatic gamma-ray sources.§ CONCLUSION LHAASO J2108+5157 is a dark UHE gamma-ray source spatially coincident with two molecular clouds. We find that the gamma-ray emission can be accounted in terms of molecular cloud illumination by CRs from a nearby (≲40-60 pc) young (<10 kyr) SNR. Although no SNR is currently known matching these criteria, such an SNR could be obscured by other material along the line of sight, or simply lie below the detection threshold of previous surveys <cit.>. 
Interactions of the diffuse galactic CR sea with the molecular clouds is found to be insufficient to explain the observed gamma-ray flux.As the exposure of current survey instruments increases, and with the advent of future facilities such as CTA and SWGO, we can anticipate further such discoveries, potentially unveiling a population of UHE sources tracing the presence of PeV particles <cit.>.The key to identifying PeVatrons may lie not in emission from the accelerators themselves, but rather from evidence of energetic particles that have escaped the source region.The author is grateful to G. Rowell & C. van Eldik for fruitful discussions and especially to A. Specovius for reading the manuscript. This work is supported by the Deut­sche For­schungs­ge­mein­schaft, DFG project number 452934793.aa
http://arxiv.org/abs/2310.18007v1
{ "authors": [ "A. M. W. Mitchell" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231027093321", "title": "LHAASO J2108+5157 as a Molecular Cloud Illuminated by a Supernova Remnant" }
Department of Physics, Loyola University Chicago, 1032 W Sheridan Rd., Chicago, IL 60660, U.S.A. Comments on “Numerical study of the SWKB condition of novel classes of exactly solvable systems” Jonathan Bougie[[email protected]], Asim Gangopadhyaya[[email protected]], Constantin Rasinariu[[email protected]] January 14, 2024 =================================================================================================================== We comment on the paper “Numerical study of the SWKB condition of novel classes of exactly solvable systems.”<cit.> We show that it misrepresents our prior work<cit.>, and clarify this misunderstanding. § INTRODUCTION In “Numerical study of the SWKB condition of novel classes of exactly solvable systems,” <cit.> the authors (Y. Nasuda and N. Sawado) misrepresent prior literature. This includes an incorrect appendix entitled “Appendix A. Erroneous Analysis in Ref. 10”. We write as the authors of the aforementioned Ref. 10 in their Letter (hereafter Ref. bougie2018 in this Comment) to address this misrepresentation. In Ref. bougie2018, we examined a superpotential given by: W(x,ℓ) = ω x/2 - ℓ/x + 2ω x ħ/(ω x^2 + 2ℓ - ħ) - 2ω x ħ/(ω x^2 + 2ℓ + ħ), equivalent to a superpotential introduced by Quesne <cit.>. This superpotential is an extension W=W_0+W_h of the conventional radial oscillator W_0 = ω x/2 - ℓ/x. The conventional and extended radial oscillators share the same energy spectrum E_n=2 n ħω. All conventional additive shape-invariant superpotentials exactly satisfy the SWKB condition<cit.>: ∫_x_L^x_R√(E_n-W^2(x)) dx = n πħ ,   n=0,1,2,⋯. However, in Ref. bougie2018 we demonstrated that the SWKB condition is not exact for the extended superpotential of Eq. <ref>, despite its shape-invariance. We therefore proved “that additive shape-invariance does not guarantee SWKB exactness by presenting a counterexample: the extended radial oscillator.” <cit.>. The authors of Ref. nasuda similarly demonstrate that the SWKB condition is only approximate (not exact) for the extended radial oscillator, as well as for additional superpotentials. However, they erroneously state that the analysis in Ref. bougie2018 “is wrong because it lacks the proper treatment of ħ.”<cit.> In Sec. <ref>, we address several misleading claims made in Ref. nasuda. § FALSE AND MISLEADING CLAIMS OF REF. 1 §.§ Scaling and use of ħ=1 The authors of Ref. nasuda state that “Although most of the literature employs the unit of ħ = 1, to simplify the analyses, we retain ħ in this paper for rigorous discussions.”<cit.> However, the choice of ħ=1 does not affect the rigor of the analysis in Ref. bougie2018, as we show below. In Ref. bougie2018, we changed integration variable such that y≡√(ω) x, so that both y^2 and the parameter ℓ have dimensions of angular momentum. With this change, the superpotential of Eq. <ref> becomes W(y,ℓ) = √(ħω)[ y/(2√(ħ)) - (ℓ/ħ)(√(ħ)/y) + (2y/√(ħ))/(y^2/ħ + 2ℓ/ħ - 1) - (2y/√(ħ))/(y^2/ħ + 2ℓ/ħ + 1) ]. Note that in Eq. <ref> the quantity in the square brackets is dimensionless. By scaling ỹ=y/√(ħ) and ℓ̃=ℓ/ħ, this becomes W(ỹ,ℓ̃) = √(ħω)[ ỹ/2 - ℓ̃/ỹ + 2ỹ/(ỹ^2 + 2ℓ̃ - 1) - 2ỹ/(ỹ^2 + 2ℓ̃ + 1) ]. We defined I≡∫_x_1^x_2√(E_n-W^2(x)) dx, which can be written I = ħ∫_ỹ_1^ỹ_2√(η(ỹ,ℓ̃)) dỹ, where the dimensionless quantity η(ỹ,ℓ̃) is η(ỹ,ℓ̃) = 2n - [ ỹ/2 - ℓ̃/ỹ + 2ỹ/(ỹ^2 + 2ℓ̃ - 1) - 2ỹ/(ỹ^2 + 2ℓ̃ + 1) ]^2. Note that the quantity I in Eq. (<ref>) is simply ħ multiplied by a dimensionless integral; the SWKB condition we investigated was whether I equals nπħ.
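The dimensionless check described above is straightforward to reproduce numerically. The following sketch (ours, not the code of Ref. bougie2018) evaluates the integral of √(η) between the classical turning points and compares it with nπ; the value of ℓ̃ is purely illustrative.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def bracket(y, l):
    # dimensionless bracket of the extended superpotential (tilde variables)
    return y / 2 - l / y + 2 * y / (y**2 + 2 * l - 1) - 2 * y / (y**2 + 2 * l + 1)

def swkb_integral(n, l):
    # I / hbar = integral of sqrt(eta) between the turning points where eta = 0
    eta = lambda y: 2 * n - bracket(y, l) ** 2
    ys = np.linspace(1e-3, 50.0, 20001)
    sign_change = np.where(np.diff(np.sign(eta(ys))) != 0)[0]
    y_left = brentq(eta, ys[sign_change[0]], ys[sign_change[0] + 1])
    y_right = brentq(eta, ys[sign_change[-1]], ys[sign_change[-1] + 1])
    val, _ = quad(lambda y: np.sqrt(max(eta(y), 0.0)), y_left, y_right, limit=400)
    return val

for n in (1, 2, 3):
    I = swkb_integral(n, l=2.0)            # l here is an illustrative value of l-tilde
    print(n, I, n * np.pi, I - n * np.pi)  # the last column quantifies the departure from n*pi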
It is abundantly clear that setting ħ = 1 bears no significance to the correctness of our result.By setting ħ=1 and renaming ỹ→ y and ℓ̃→ℓ, Eq. <ref> above becomes identical to Eq. 26 of Ref. bougie2018, andEq. <ref> becomes equal to the expression for η shown below Eq. 27 in Ref. bougie2018. There is nothing “devious” about setting ħ=1; it is simply a notational convenience.§.§ Shape invariance and expansions in ħ Appendix A in Ref. nasuda states the following regarding Ref. bougie2018 (similar statements appear in the body of Ref. nasuda): They alleged that the additive shape invariance was realized for the parameters a_i such that a_i+1 = a_i + ħ. Their analysis was based on the expansion of the superpotential W(a_i, ħ) in power [sic] of ħ, assuming that W was independent of ħ except through the above shift of the parameter a_i.The main drawback in the analysis was that they overlooked the dependence of the parameter a_i on ħ. Such wrong expansion with ħ inevitably leads to the devious result.<cit.>.The statement quoted above is wrong on multiple counts. First, the shape invariance of the superpotential W is not “alleged.” It can be verified by substituting W from Eq. (18) of Ref. bougie2018 (Eq.(<ref>) in these Comments) into the shape invariance condition: W^2(x,a_i)+ħdW(x,a_i)/dx+g(a_i) = W^2(x,a_i+1)-ħdW(x,a_i+1)/dx+g(a_i+1) ,for parameters a_i = ℓ,a_i+1=ℓ+ħ (cf. Eq. 6 of Ref. bougie2018). For the superpotential of Eq. <ref>, g(a)=2 ω a.Furthermore, contrary to the claim of Ref. nasuda, the analysis of Ref. bougie2018 is not based on ħ-expansion. The authors of Ref. nasuda misunderstood the scope ofthe ħ-expansions discussed in“Sec. 1: Introduction” and “Sec. 2: Preliminaries,” of Ref. bougie2018. These expansions simply placed our work in the context of the existing literature. Specifically, subsection 2.2 illustrates that Quesne's superpotential is a solution of previously derived partial differential equations <cit.>. The status of this superpotential as a valid shape-invariant superpotential can be verified by direct substitution in Eq. (<ref>), independent of any ħ expansion.The main result ofRef. bougie2018 was to prove that the extended shape-invariant superpotential given by Quesne <cit.> is not SWKB-exact via numerical integration. The expansion played no role in the new results obtained inRef. bougie2018. §.§ Dimension of the shape-invariance parameterFinally, the authors of Ref. nasuda claim that in Ref. bougie2018 “the system the authors considered is irrelevant to the known quantum mechanical problems such as the well-known radial oscillator for the explicit factor ħ.”<cit.> They are wrong here as well. In Ref. bougie2018, we consider shape invariant parameters a_i+1=a_i+ħ. Therefore, the parameter ℓ has dimensions of angular momentum as discussed in Sec <ref> above. The authors of Ref. nasuda dedicate much of their Appendix A to show the trivial result that ℓ=ħ(ℓ'+1), where ℓ' corresponds to a dimensionless quantum number. This correspondence does not invalidate any results in Ref. bougie2018 and we did not overlook it (cf. Footnote 3 in Ref. bougie2018).§ CONCLUSION The authors of Ref. nasuda appear to have misunderstood the existing scientific literature. The claim of Ref. bougie2018 was to “present a concrete example of an additive shape invariant potential for which the SWKB method fails to produce exact results.” <cit.>. The superpotential used in Ref. bougie2018 is additive shape-invariant, and the numerical analysis indeed demonstrated its SWKB-inexactness. 
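As an independent cross-check of the shape-invariance statement above, the condition quoted at the beginning of this section (with a_i=ℓ, a_i+1=ℓ+ħ and g(a)=2ωa) can be verified symbolically. The sketch below is our own illustration; the difference of the two sides is expected to cancel identically for the superpotential of Eq. (<ref>).

import sympy as sp

x, l, hbar, omega = sp.symbols("x ell hbar omega", positive=True)

def W(x, l):
    # extended radial-oscillator superpotential
    return (omega * x / 2 - l / x
            + 2 * omega * x * hbar / (omega * x**2 + 2 * l - hbar)
            - 2 * omega * x * hbar / (omega * x**2 + 2 * l + hbar))

g = lambda a: 2 * omega * a

lhs = W(x, l)**2 + hbar * sp.diff(W(x, l), x) + g(l)
rhs = W(x, l + hbar)**2 - hbar * sp.diff(W(x, l + hbar), x) + g(l + hbar)
print(sp.cancel(sp.together(lhs - rhs)))   # expected to reduce to 0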
The analysis of Ref. bougie2018 is correct.

nasuda: Y. Nasuda and N. Sawado, Mod. Phys. Lett. A 36, 2150025 (2021).
bougie2018: J. Bougie, A. Gangopadhyaya and C. Rasinariu, J. Phys. A: Math. Theor. 51, 375202 (2018).
quesne2008: C. Quesne, J. Phys. A: Math. Theor. 41, 392001 (2008).
comtet1985: A. Comtet, A. Bandrauk and D. K. Campbell, Phys. Lett. B 150, 159 (1985).
dutt1986: R. Dutt, A. Khare and U. P. Sukhatme, Phys. Lett. B 181, 295 (1986).
adhikari1988: R. Adhikari, R. Dutt, A. Khare and U. P. Sukhatme, Phys. Rev. A 38, 1679 (1988).
bougie2010: J. Bougie, A. Gangopadhyaya and J. V. Mallow, Phys. Rev. Lett. 105, 210402 (2010).
bougie2012: J. Bougie, A. Gangopadhyaya, J. Mallow and C. Rasinariu, Symmetry 4, 452 (2012).
http://arxiv.org/abs/2311.02092v1
{ "authors": [ "Jonathan Bougie", "Asim Gangopadhyaya", "Constantin Rasinariu" ], "categories": [ "math-ph", "hep-th", "math.MP", "quant-ph" ], "primary_category": "math-ph", "published": "20231027165551", "title": "Comments on \"Numerical study of the SWKB condition of novel classes of exactly solvable systems''" }
GNN-GMVO: Graph Neural Networks for Optimizing Gross Merchandise Value in Similar Item RecommendationAnonymous January 14, 2024 ==========================================================================================================Similar item recommendation is a critical task in the e-Commerce industry, which helps customers explore similar and relevant alternatives based on their interested products. Despite the traditional machine learning models, Graph Neural Networks (GNNs), by design, can understand complex relations like similarity between products. However, in contrast to their wide usage in retrieval tasks and their focus on optimizing the relevance, the current GNN architectures are not tailored toward maximizing revenue-related objectives such as Gross Merchandise Value (GMV), which is one of the major business metrics for e-Commerce companies. In addition, defining accurate edge relations in GNNs is non-trivial in large-scale e-Commerce systems, due to the heterogeneity nature of the item-item relationships.This work aims to address these issues by designing a new GNN architecture called GNN-GMVO (Graph Neural Network - Gross Merchandise Value Optimizer). This model directly optimizes GMV while considering the complex relations between items. In addition, we propose a customized edge construction method to tailor the model toward similar item recommendation task and alleviate the noisy and complex item-item relations. In our comprehensive experiments on three real-world datasets, we show higher prediction performance and expected GMV for top ranked items recommended by our model when compared with selected state-of-the-art benchmark models.Recommendation Systems, Graph Neural Networks, Similar Item Recommendations, Gross Merchandise Value Optimization § INTRODUCTION The goal of recommender systems in e-Commerce settings is to increase the click-through rate (CTR) and revenue by recommending items that users will likely interact with and eventually purchase <cit.>. Similar item recommendation plays a vital role in enhancing customers’ exploration experiences on e-Commerce websites, by recommending similar substitutes for a given anchor item. This type of models enables users to be exposed to a broader set of products and allow marketing campaigns to reach potential customers effectively. Both these factors increase the chance of conversion <cit.>. Different modeling approaches are suggested in the literature to capture complex relations like similarity for items. One of the recent approaches for this task is to use graph-based models, which can identify and formulate the relation between items to make accurate recommendations <cit.>. GNNs generate representations of nodes that depend on the graph's structure, item pairs links (edge) features, and relations. The most common paradigm for GNNs in item recommendations is to learn node (i.e., product) representation to perform relation predictions based on the embedding vectors <cit.>. Identifying product relations, such as similarity and complementarity, is important in the e-Commerce recommendation platform <cit.>. Ignoring these diverse types of relations can lead to losing critical information about item relations. In addition, the ultimate optimization objective can play a critical role in graph models' performance in different settings. For instance, if the graph model's architecture optimizes relevance-related loss functions, it can lead to suboptimal recommendations from a revenue standpoint. 
This, in turn, can make the usage of those graph architectures prohibitive for large-scale industrial e-Commerce systems. To address these issues, we propose a new graph-based model that considers the diverse, complex relations in item spaces of large-scale e-Commerce settings and can directly optimize on Gross Merchandise Value (GMV)-related loss function. We call this architecture Graph Neural Network - Gross Merchandise Value Optimizer (GNN-GMVO). Under this model, we propose a new multi-objective decoder function that optimizes on a combination of relevance and GMV. This makes the model capable of adjusting the loss function per the usage setting. In other words, depending on the degree of importance of relevance and revenue, the model objective can change. Through our extensive experiments, we design a new edge relation based on item-item data that considers relational information like co-view, view-then-bought (an item viewed, then another item is ultimately bought), and co-purchase. This new metric helps us better identify similarity relations among other types of relations between items and reduce noise (i.e., other types of item-item relations).We perform experiments to validate the proposed architecture on a user-interaction proprietary e-Commerce dataset from Walmart.com and two publicly available datasets from Amazon. According to the results, GNN-GMVO outperforms the currently deployed model in prediction metrics in Walmart.com. The model also performs better than GCN <cit.> and Graph Attention Networks (GAT)<cit.> models in expected GMV without hurting the NDCG metric.The rest of this paper is organized as follows. In section <ref>, we present background and related work. In section <ref>, the methodology and the architecture of GNN-GMVO are elaborated. In section <ref>, we report the experiments conducted to compare the proposed architecture with the existing benchmarks. Section <ref> states the conclusion and direction for future research. § BACKGROUND AND RELATED WORK Among proposed algorithms for recommendation systems, GNNs have shown to be one of the most promising models <cit.>. One reason for this success might be the inherent design of the graph models that directly takes advantage of item-item, user-item, or user-sequence interactions. In the last decade, various aspects of GNN-based recommendation systems have come under attention in industry and academia <cit.>. In this section, we review some of the relevant work to our paper.One closely related topic to similar item recommendation is social recommendation task. This task emerged with the creation of online social networks. The models in the social recommendation track assume that a given graph node's local neighbors can be used to improve node representation modeling, because a given node's neighbors should have similarity with the node itself <cit.>. The similarities between nodes are used in two different ways for modeling purpose: (i) to improve final generated node representation modeling <cit.>, (ii) to explicitly use them as regularizers to limit the final node latent representations <cit.>. Considering diverse nature of item-item relation is one of the important aspect of our work. Because of this another stream of research pertaining to our work is the knowledge graph based recommendation. Knowledge graphs utilize a complex graph structure with several types of nodes and relation among them <cit.>. 
Mainstream papers in this area of literature create embeddings for relations and focus on semantic relevance (see <cit.> for example), and the semantic information of both nodes and relations are considered.Another research track relevant to our work is models focusing on revenue optimization. Most research in item recommendation systems' literature is based on optimizing the item recommendation relevance. However, in e-Commerce settings, the objective maybe to optimize on generated revenue <cit.>. There are non-graph machine learning models like <cit.> and <cit.> that can potentially model context created by price of recommended items on the user behavior. Some other papers take this one step further and study the so-called assortment optimization problem (see <cit.>, <cit.>). Under assortment optimization problem, one is interested in finding a subset of items that maximize the revenues. Most of the models studied in this context, like multinomial logistic regression <cit.>, mixed multinomial logistic regression <cit.>, and nested logit model <cit.> are simple listwise models from an ML standpoint of view, which makes their usage prohibitive in complex big-data settings. However, due to listwise nature of user choice behavior, the assortment optimization problem becomes combinatorially challenging (see <cit.> for example). For instance, <cit.> develops a scalable model to identify similar items and maximize revenue by constructing a pointwise ranking model, see <cit.> and <cit.> for a survey of models optimizing on generated revenue. However, as mentioned, these models do not explicitly incorporate the item-item relations inherent to graph models and may lose some relational information. In parallel to non-graph models focusing on optimizing revenue, there are graph models for similar-item recommendation not optimizing on revenue-related metrics. However, they mainly focus on optimizing relevance in the item recommendation setting. For example, <cit.> develop a method to recommend unforeseen apps to the users by constructing an app-app similarity graph and using users' interaction data with previously installed apps. <cit.> propose an architecture with a weighted graph attention layer to provide in-session item embedding and recommend the next item in each session while optimizing CTR.Finally, some papers attempt to present graph-based models that are at least price aware if not optimizing on generated revenue by the recommendations. In their model, the utility scores of items are function of recommended price of the whole list of recommended items. <cit.> propose a price-aware GNN-based recommender system that discovers user price sensitivity for each price category. To do so, they add nodes for price and category to the user-item graph and allow the item price to be propagated to the user embeddings through the item nodes. This structure allows for user price preference on unexplored categories. To the authors’ best knowledge, although revenue awareness has been a research topic for other recommender systems <cit.>,<cit.>, it has not been studied as part of GNN-based recommender systems. In this paper, we seek to fill the literature gap by proposing a graph-based model for a similar item recommendation task that explicitly models item-item relations while optimizing generated revenue by the recommendations.§ METHOD This section introduces the architecture of the proposed GNN-GMVO framework, the high-level system view, and its components. 
In detail, we discuss the GNN-GMVO model, item graph construction, and model training and inference for similar item recommendation tasks. Specifically, we focus on introducing two variants of our model built on Graph Convolutional Networks (GCNs) and Graph Attention Networks (GATs), and call them GCN-GMVO and GAT-GMVO, respectively. §.§ GCN-GMVO Model Let G=(V,E) denote a graph where V and E represent the set of nodes and edges of the graph. Also, denote the embedding matrix of nodes by X∈ ℛ^d×|𝒱|, where d is the dimension size of the embedding and |𝒱| is the number of nodes in the graph. The graphs represent the relations defined over different sets of entities (represented by nodes). An edge e ∈ E shows a connection between node u (u ∈ V) and node v (v ∈ V). GNNs are a class of neural network models built on graph structure that use relations defined between the nodes. GNNs aggregate information from the graph's structure to create a deep representation for each node, using a form of message passing to transfer information between nodes and update each node's representation. Graph Convolutional Networks (GCNs) are among the most popular GNN models. Under GCNs, message passing between nodes is done via Eq. (<ref>): h_u^k=ReLU(W^k∑_v∈ N(u)∪{u} h_v^k-1/√(|N(u)||N(v)|)), where h_u^k shows the hidden representation of node u after the k^th message passing step, and W^k is a trainable matrix. N(u) and N(v) show the neighborhood nodes of u and v, respectively. For each node, this function aggregates information from its neighborhood and combines it with the previous embedding of that node to update its representation. The input features of each node are used as the initial hidden embedding h^0 (i.e., h_u^0=X_u), and after K message passing steps, the final embeddings of the nodes, z_u, are created. In other words: z_u=h_u^K ∀ u∈ V. The GCN model operates as the encoder function by using the local graph structure around each node. The encoder maps nodes to an embedding space. A decoder function reconstructs the connections of the graph from the encoded node embeddings. In the similar item recommendation task, the decoder should perform as a predictor of the similarity between pairs of nodes in the graph. This is done by predicting whether two nodes are connected in the graph, which reduces to the link prediction problem <cit.>. Under GCN, the encoder function takes the graph structure and the initial node features as the input, and generates the final embeddings of the nodes using Eq. (<ref>). Then, a decoder function reconstructs the neighborhood structure for each node. Under the GCN-GMVO framework for the similar item recommendation problem, we aim to optimize on item similarity relevance while inflating the weight of the edges with higher item prices. In other words, the decoder should identify similar items for a given anchor node (item) of interest, while boosting them according to the prices of the respective items. To achieve this goal, we adjust the decoder function to make the final loss function more sensitive to links created among neighbors with higher prices. Because of this, the decoder function is modeled as Eq. (<ref>): DEC(z_u,z_v)=(1+λ(p_u+p_v))(z_u^Tz_v), where p_u is the normalized price of item u, and z_u^Tz_v is the inner product between the embeddings of nodes u and v. Note that if z_u^Tz_v is higher for a pair (u,v), nodes u and v are more similar. Under Eq. (<ref>), the inner product of the embeddings of two nodes u and v is inflated by the sum of the normalized prices of both nodes u and v.
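A minimal sketch of this encoder-decoder in PyTorch Geometric is given below. The input and output dimensions (512-dimensional USE features, 256-dimensional embeddings) and the two message-passing hops follow the description in this paper; the value of λ, the normalized price vector and other details are illustrative assumptions rather than the deployed configuration.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCNGMVOEncoder(torch.nn.Module):
    # two message-passing layers: 512-d item features in, 256-d embeddings out
    def __init__(self, in_dim=512, out_dim=256):
        super().__init__()
        self.conv1 = GCNConv(in_dim, out_dim)
        self.conv2 = GCNConv(out_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)          # final node embeddings z_u

def gmv_decoder(z, price, edge_index, lam=0.5):
    # price-weighted decoder: (1 + lam * (p_u + p_v)) * (z_u . z_v)
    src, dst = edge_index
    dot = (z[src] * z[dst]).sum(dim=-1)
    return (1.0 + lam * (price[src] + price[dst])) * dot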
λ controls the trade-off between the importance of price and similarity. If λ=0, the decoder function only focuses on the similarity between nodes u and v. However, by increasing the value of λ, the decoder places more weight on the revenue generated by the corresponding nodes of the pairs. Under GCN-GMVO, we frame the similar item recommendation problem as a link prediction problem. Link prediction is a classification problem where a positive label is assigned to a link between nodes u and v if an edge connects the nodes. Also, a negative link is assigned to a pair of nodes if no edge connects them in the graph structure. In order to train the model, we randomly sample a subset of positive edges and an equal number of negative edges from the training data. The sampling is done according to a uniform distribution. Then, to frame the problem as a link prediction problem, a binary cross-entropy loss (Eq. <ref>) is defined on the encoder-decoder structure: ℒ = -1/N∑_(u,v)∈ E l(DEC(z_u,z_v),A[u,v]) = -1/N∑_(u,v)∈ E[ y_(u,v)·log(σ(DEC(z_u,z_v))) + (1-y_(u_n,v_n))·log(1-σ(DEC(z_u_n,z_v_n))) ]. In Eq. (<ref>), σ denotes the sigmoid function, which maps the decoder output to a probability score, and A is the binary adjacency matrix. In addition, u_n and v_n show the nodes of the negative samples drawn from non-existing edges. The loss function measures the discrepancy between the decoded edge values and the true values. §.§ GAT-GMVO Model The decoder function defined by Eq. (<ref>) can be applied to other GNN architectures such as GAT (Graph Attention Network) <cit.> or GraphSAGE <cit.> to tailor these models toward optimizing revenue while decoding the edges. GAT has shown promising results by combining GNNs with the attention mechanism <cit.>. The GAT model introduces attention weights into graphs, which are used to calculate the hidden representation of the nodes by attending over the neighbors' influence during the aggregation step. Eq. (<ref>) shows the hidden representation of node u in iteration k of message passing: h_u^k=σ(∑_v∈ N(u)α_u,v h_v^k-1), where α_u,v shows the attention on the neighbours of node u while aggregating its neighbour information during the message passing. α_u,v can be calculated using Eq. (<ref>): α_u,v= exp(a^T[Wh_u ⊕ Wh_v])/∑_v^'∈ N(u) exp(a^T[Wh_u ⊕ Wh_v^']), where a is a trainable attention vector, W is a trainable matrix, and ⊕ denotes the concatenation operation <cit.>. Other components of the GAT-GMVO framework, including the decoder and the loss, can be modeled similarly to the GCN-GMVO architecture proposed in section <ref>. This change in the decoder function can make the GAT model more sensitive to highly priced neighbors. §.§ Loss Function Variants The cross-entropy function is used to formulate the loss function for GNN-GMVO. However, other types of ranking functions, such as a pairwise max margin loss, can also be utilized to optimize the encoder-decoder loss for the GNN-GMVO architecture. Eq. (<ref>) shows the max margin loss function for the link prediction problem: L = ∑_(u,v)∈ E max(0, -DEC(z_u,z_v) + DEC(z_u_n,z_v_n) + Δ), where Δ is the margin for the difference between DEC(z_u_n,z_v_n) and DEC(z_u,z_v). This loss function helps the model to optimize the weights by minimizing the scores generated by non-existing edges and maximizing the scores generated by positive edges while considering the generated revenue.
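Either objective can be trained with the uniform negative sampling described above. A sketch of one cross-entropy training step, reusing gmv_decoder from the previous sketch, is shown below; it is illustrative rather than the production training code.

import torch
import torch.nn.functional as F
from torch_geometric.utils import negative_sampling

def train_step(model, optimizer, x, price, edge_index, lam=0.5):
    model.train()
    optimizer.zero_grad()
    z = model(x, edge_index)
    # sample as many non-existing edges as there are positive edges, uniformly
    neg_edge_index = negative_sampling(edge_index, num_nodes=x.size(0),
                                       num_neg_samples=edge_index.size(1))
    pos_logits = gmv_decoder(z, price, edge_index, lam)
    neg_logits = gmv_decoder(z, price, neg_edge_index, lam)
    logits = torch.cat([pos_logits, neg_logits])
    labels = torch.cat([torch.ones_like(pos_logits), torch.zeros_like(neg_logits)])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return float(loss)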
Since the decoder function takes into account the revenue optimization, the loss function considers lower loss value for the edges with higher price nodes.§.§ Item Graph Construction, Model Training, and Model InferenceSince item-item feature data in its raw format does not include edges among the items, we need to construct a graph for the training purpose to capture the similarity connections between items. To do this, we construct item features for all the nodes. This work uses a pre-trained model called Universal Sentence Encoder (USE) to extract initial node embeddings X (see <cit.> and <cit.>). This model creates a sentence embedding with dimensions of 512. We use the item name, item category, and other textual information of the item to generate text that is inputted to USE to generate the embeddings. In order to detect edges of the graph, we need to define a metric that represents similarity relations between items, as there could be multiple types of relations such as complementarity, substitutability, and relevance among items of a given data in e-Commerce settings. The existence of multiple types of relations can create noise in detecting similar nodes to any given item, as users' click/purchase behavior (which is used to construct the graph) can be affected by these diverse relations among the items. Hence, we use co-view, view-bought, and co-purchase data as signals to define a new and custom metric to identify similarity relations and remove noise:Sc(u,v)= |cv(u,v)|+|vb(u,v)| +|vb(v,u)|-|cp(u,v)| ∀ u,v∈ V, where |cv(u,v)|,|cp(u,v)| show the number of times items u and v are co-viewed and co-purchased together. |vb(u,v)| shows the number of times item u is viewed and then item v is purchased. We subtract |cp(u,v)| from co-view and view-then-bought to account for noise created by complementary items and only keep similarity relations between nodes. Note that complementary items, although bought or viewed together or after one another, are not similar. We use Eq. (<ref>) to detect strong similarity relations while removing noise created by complementary items: edge(u,v)=1ifSc(u,v)>θ0 otherwise.Eq. (<ref>) assumes an edge between u and v if Sc(u,v) is larger than a threshold θ.We randomly split the graph into training and test sets to train the model and evaluate its performance. The edges are partitioned between training and test sets. We use a two-hop message passing GCN and GAT models with an output embedding of 256 dimensions. In other words, Eq. (<ref>) andEq. (<ref>) run for two iterations to encode the nodes into final node embeddings z_u, ∀ u ∈𝒱. Then given sampled negative and positive edges, the loss function (<ref>) is obtained. After completion of training, the obtained model is used for inferencing and ranking. To perform the model inference on a set of items, we consider all those items as nodes of the graph. We assume an edge between the nodes of a given pair if Eq. (<ref>) holds for that pair. The inference graph is encoded as mentioned in subsections <ref> and <ref> using the trained model. See subsection <ref> for the proposed approach of constructing the edges for cold or new items with zero or small number of views and transactions. §.§ Ranking Task for RecommendationIn this paper, we only focus on optimizing revenue and relevance via using the graph models. Therefore, in order to retrieve similar items, one can use a similarity measure between two items' final graph embeddings. 
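A hedged sketch of the training loop described in this section (positive edges from a random split of the item graph, uniformly sampled negatives, two message-passing steps, 256-dimensional output) is given below; the optimizer settings follow the values reported in the experiments, while the helper names come from the earlier illustrative snippets.

```python
# Hedged sketch of the training loop: positive edges come from a random split of
# the item graph, negatives are sampled inside gmvo_bce_loss, and the learning
# rate / epoch count follow the values reported in the experiments section.
import torch

def train(x, train_edge_index, price, num_nodes, lam=0.1, epochs=20, lr=0.1):
    # x: initial USE features; train_edge_index: positive edges of the training split
    model = GCNGMVOEncoder(in_dim=x.size(1), hidden_dim=256, out_dim=256)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        z = model(x, train_edge_index)                        # two-hop encoding
        loss = gmvo_bce_loss(z, price, train_edge_index, num_nodes, lam)
        loss.backward()
        opt.step()
    return model              # reused at inference to encode the inference graph
```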
After model inference and encoding nodes to the embedding space, we apply a weighted similarity function to find the similarity score between nodes u and v using Eq. (<ref>): score(u,v)= (1+λ(p_u+p_v))×(z_u^Tz_v/|z_u||z_v|). Eq. (<ref>) represents the weighted cosine similarity score between item u and item v. When ranking the recall set for an anchor item u, the higher the price of item v (v ∈ candidate set for u), the higher it will be ranked.§.§ Edge Construction for Cold Items In a large-scale e-Commerce implementation of any recommendation module, new items with no user-interaction history are always added. In addition, some items may have low user traffic, and they may not be connected to any other graph nodes using a threshold-based rule like Eq. (<ref>). The new and low-traffic (cold) items become isolated in the graph in these cases. Therefore, the node embeddings generated by the GNN-GMVO model will only use the weights of that node, and no message will be passed from the other nodes of the graph: h_u^k=ReLU(W^k h_u^k-1). In these cases, one may ignore a threshold-based rule like Eq. (<ref>) and add a connection from the node with the most similar initial embedding, h_0, to the new/cold item. This may alleviate the isolation problem of the new/cold items, connect them to the rest of the graph, and yield better final embeddings. The items with the highest probability are more likely to be similar to each other: S_i=argmax( e^X_i X_j/∑_j^'∈ V e^X_i X_j^') ∀ j^'∈ V. S_i shows the most similar item to cold item i. For graphs with millions of nodes, approximate nearest neighbour search algorithms such as <cit.> and <cit.> could be utilized to retrieve the most similar nodes. Figure <ref> summarizes the architecture of the GNN-GMVO model and the steps explained in this section. § RESULTS §.§ Results on Walmart dataset This section summarizes the empirical experiments conducted to validate the proposed modeling approach on real data instances. We compare the model with the recommendation architecture currently deployed in the similar item recommendation module of Walmart.com. We call this benchmark model, a sophisticated deep learning model, SIRB (Similar Item Recommendation Benchmark) in the rest of this paper. The similar item recommendation module of Walmart.com resides on the item pages of the platform and seeks to recommend alternative similar items to the main item of the page (see Figure <ref>). We use Normalized Discounted Cumulative Gain (NDCG) to measure the ranking relevance. We also define a metric to measure the expected GMV generated by the models' suggested rankings (EGMV@K). To calculate EGMV@K, after the model ranking, the first K items are determined for each anchor item. Then, we use the transaction data obtained for a month after model inferencing to approximate the ground truth. We calculate the fraction of times each candidate item is transacted and consider this as the probability of purchase for each item. More specifically, EGMV@K is calculated as follows: EGMV@K=∑_k=1^K∑_t=1^T|Tr_t,k|/∑_j∈ c_a|Tr_t,j|P_k, where |Tr_t,k| is the number of times candidate item k is purchased at time t, ∑_j∈ c_a|Tr_t,j| shows the total number of transactions for all candidate items under an anchor item a, and P_k shows the price of candidate item k. The data is proprietary and is selected from the Online Grocery category of items; it includes about 100,000 items and 4 million links (as the edges of the graph) among those items. The graph edges are constructed using Eq. (<ref>).
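For concreteness, the behavioral-signal edge rule of Eqs. (<ref>) and (<ref>) could look as follows; the co-view, view-bought, and co-purchase count dictionaries (keyed by ordered item pairs) and the threshold value are assumed inputs, not the production pipeline.

```python
# Illustrative sketch of the edge rule Sc(u, v) > theta; count dictionaries and
# the threshold value are assumed inputs.
from itertools import combinations

def similarity_score(u, v, cv, vb, cp):
    """Sc(u,v) = |cv(u,v)| + |vb(u,v)| + |vb(v,u)| - |cp(u,v)|."""
    return (cv.get((u, v), 0) + vb.get((u, v), 0)
            + vb.get((v, u), 0) - cp.get((u, v), 0))

def build_edges(items, cv, vb, cp, theta=5):
    """Keep an undirected edge only when the de-noised score exceeds theta."""
    edges = []
    for u, v in combinations(items, 2):      # in practice, only pairs with activity
        if similarity_score(u, v, cv, vb, cp) > theta:
            edges.append((u, v))
    return edges
```

Scanning all pairs is shown only for clarity; at the scale reported here, one would restrict the scan to item pairs that actually co-occur in user sessions.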
We conducted parameter tuning by training the model with different learning rates (0.001, 0.01, 0.1) and epoch sizes (10, 20, 50, and 100). We also tested the model performance using one, two, and three-hop message passing parameter. The optimal inference results are achieved using learning rate of 0.1 for the ADAM optimizer, the epoch size of 20 with two-hop message passing. PyTorch Geometric 2.2.0 is used for training and inference. Each epoch takes approximately 30 seconds to finish using a machine with 32GB memory and 10 CPU cores. We train our proposed model with different values of λ in Eq. (<ref>) to find λ that maximizes the objective function. When inferencing, node embeddings are generated by loading the trained model and passing the inference graph as a dataset to encode nodes and generate graph embeddings. The inference dataset includes about 50,000 nodes. The candidate set for each anchor item is ranked based on the weighted cosine similarity of the recommended items and the given anchor item (i.e., Eq.(<ref>)). The results are presented in Table 1. Most of the users at item pages of Walmart.com view at most two sets of four items when checking similar item recommendations. This is why we report @8 metrics. We conducted ablation study to show the importance of the proposed decoder function in optimizing the objective function. When λ=0, the model is equivalent to a traditional GCN model without a GMV optimizer. As can be seen in Table <ref>, using GCN model with custom edges defined in Eq. (<ref>) increases NDCG metric by 4.2% w.r.t. the benchmark SIRB (when λ=0). By increasing λ in GCN-GMVO, the NDCG metric decreases for λ≥1. This is expected since for larger values of λ the loss becomes more focused on optimizing the revenue. In addition, from Table <ref>, we see that increasing λ up to 0.5 increases the EGMV@8 metric, but larger λ has an inverse effect as it biases the loss too much so that items relevance (and ultimately EGMV@8) becomes small. Setting λ=0.1 achieves both high NDCG and EGMV scores, which translates to increasing both relevance and expected GMV in our test sets. This proves the efficiency of our modeling framework in optimizing both recommendation relevance and revenue in large-scale e-Commerce settings. Figure <ref> shows the set of recommended items for a grocery anchor item by SIRB and GCN-GMVO. The results show that GCN-GMVO model recommends different set of items compared to the SIRB algorithm. The recommended items could potentially decrease the total number of sales if they are too expensive. However, since GNN-GMVO algorithm controls the trade-off between relevance and revenue, by optimizing the loss function we can improve EGMV generated by the recommendation set. As can be seen from Figure <ref>, the price of the recommendations for the anchor item are slightly more than the recommendations of SIRB (Figure <ref>). However, the model shows some degree of price elasticity which results in EGMV boost without hurting NDCG metric. This is an example of how the proposed algorithm could positively impact the item recommendation task for large-Scale e-Commerce platforms. §.§ Results on Publicly Available Dataset We also evaluate the performance of our architecture on two separate categories (All Beauty and Video Games) of Amazon datasets<cit.>.§.§.§ All Beauty CategoryThe dataset contains metadata from Amazon for 32,992 products in the "All Beauty" Category. 
Product title, description, price, also_bought (list of items bought after viewing the product), also_viewed (list of items viewed after viewing the product), brand, and category features are used in this experiment. Some parts of the data set are missing. For instance, 65% of the products do not have price data. Because of this, we compute the price average and standard deviation to fill in missing values by sampling from the price distribution. The initial node representations are computed by inputting the item's textual information (including category, description, title, and brand) to the USE encoder <cit.>. Since the frequency for the items co-view, view-bought, and co-purchase are missing in the dataset, we connect the product pairs in the graph if they are co-viewed even once.In other words, in order to build the graph, θ in Eq. (<ref>) is considered to be 0 for this experiment.We train our proposed architecture with different values of λ in Eq. (<ref>). The candidate set for each anchor item is based on the items that are viewed after the anchor item. We rank recall set based on the weighted cosine similarity of the recommended items and the given anchor items using Eq. (<ref>). In order to evaluate the models, we measure and compare their performance on top-4 recommended items based on the total number of transactions for each item. Traditional GCN and Graph Attention Networks (GAT) are considered as the benchmark models for this experiment. The cross-entropy loss function converges after around 20 epochs when training these models. The total run time with a 32GB memory is approximately 5 minutes. We used the same training configuration of the Walmart.com dataset in this experiment. The results show improvement in GMV without impacting the NDCG scores of the recommendations for some cases of positive λs. One of the goals of the proposed architecture is to improve revenue, consequently GMV, in the large-scale e-Commerce systems without recommending irrelevant items compared to the anchor item. Both of these goals are measured by comparing NDCG and EGMV of the recommendation sets. As can be seen from Table <ref>-Panel A, when λ=0EGMV@4 and NDCG@4 for GCN are 1.95 and 0.641, respectively. They are 1.94 and 0.644 for GAT model. Setting with λ=0.8 generates EGMV@4=2.02 for GCN-GMVO and EGMV@4=1.98 for GAT-GMVO models which yields 3.6% and 1.6% improvements in EGMV@4 w.r.t. GCN and GAT benchmark models. Using GCN and GAT architectures along with GMV optimizer as the decoder function can improve GMV metric without hurting the relevance of the recommendations. However, the results show that the GCN is performing slightly better than GAT model in optimizing revenue in this experiment. §.§.§ Video Games Category This dataset contains 84,893 items from video game category. We used the same model configuration and same set of features as the previous experiment to evaluate the model performance on a bigger item graph. We used the same pre-trained model (i.e., USE) to generate item text embedding using item categories, description, title, and brand features. Similar to the previous experiment, the results show that our architecture can improve EGMV generated by the recommendation system. We observe that the GAT model with GMVO component outperforms GCN model in optimizing the revenue. The results (Table <ref>-Panel B) show that the GAT-GMVO model has the highest EGMV when λ=0.05. 
This shows that the optimal trade-off between relevance and revenue happens when we consider lower weights for the nodes' price. However, we see the optimal EGMV for the GCN-GMVO model when λ=2. As can be seen from the table, EGMV is improved by 1.3% for the GAT-GMVO. This improvement in the EGMV has no negative impact on the NDCG of the recommendations. We also see that EGMV is improved by 1.8% under the GAT-GMVO model.§ CONCLUSION In this paper, we propose the Graph Neural Network-Gross Merchandise Value Optimizer (GNN-GMVO) architecture to optimize GMV while considering complex item-item relations for the similar item recommendation task. We develop a new decoder that adjusts the generated loss function to become more sensitive to the price of the recommended items. We define a new edge construction framework in the item graph to identify similarity relations between items and remove noise caused by other types of relations in the item space. We propose a step-by-step framework to (i) query input feature data, (ii) construct the graph, (iii) train the model, and (iv) use it for inference and ranking tasks. We conduct extensive experiments to train the model and find the optimal weights to trade off GMV and relevance. The results show that the proposed model improves the expected GMV on three real datasets without hurting the NDCG scores of the recommendations. This may prove the usefulness of the proposed approach in further optimizing the GMV in some industrial recommendation settings. In future work, one may add more features, such as item features (price, product type, product category, etc.) and image embedding data, to the textual embedding data to enrich the initial embedding inputs of the model and test whether this makes the modeling framework more robust. Finally, feeding the node embeddings generated by the GNN-GMVO architecture to other expert models, such as wide & deep (W&D) <cit.>, prospect-net <cit.>, DeepFM <cit.>, and xDeepFM <cit.>, can be tested to examine whether this further optimizes the model.
http://arxiv.org/abs/2310.17732v1
{ "authors": [ "Ramin Giahi", "Reza Yousefi Maragheh", "Nima Farrokhsiar", "Jianpeng Xu", "Jason Cho", "Evren Korpeoglu", "Sushant Kumar", "Kannan Achan" ], "categories": [ "cs.IR", "cs.LG" ], "primary_category": "cs.IR", "published": "20231026184316", "title": "GNN-GMVO: Graph Neural Networks for Optimizing Gross Merchandise Value in Similar Item Recommendation" }
http://arxiv.org/abs/2310.18254v1
{ "authors": [ "Damiano F. G. Fiorillo", "Maria Petropoulou", "Luca Comisso", "Enrico Peretti", "Lorenzo Sironi" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20231027163958", "title": "TeV neutrinos and hard X-rays from relativistic reconnection in the corona of NGC 1068" }
first]G. Blatter [email protected],second]M. Sirena third]Yeonkyu Lee third]Jeehoon Kim first,second]N. Haberkorn [email protected][first]Instituto Balseiro, Universidad Nacional de Cuyo, and Comisión Nacional de Energía Atómica, Av. Bustillo 9500, 8400 San Carlos de Bariloche, Argentina. [second]Comision Nacional de Energia Atomica and Consejo Nacional de Investigaciones Cientificas y Tecnicas, Centro Atomico Bariloche, Av. Bustillo 9500, 8400 San Carlos de Bariloche, Argentina. [third]Department of Physics, Pohang University of Science and Technology, Pohang, 37673, South Korea.We report on the impact of the magnetic domain stripe configuration on the critical velocity of vortices in superconducting/ferromagnetic bilayers. Using a 23 nm thick Mo_2N film, covered by a 48 nm FePt layer with tunable nanosized striped domains, we demonstrate that flux instability at low magnetic fields depends on the orientation of the stripes. When the stripes are perpendicular to the applied current and act as vortex guides, the velocity values reach 5 km/s, duplicating those found when configured parallel to the current, creating winding vortex paths. Our results indicate that vortex critical velocities can be tuned by configuring different domain structures, providing a platform to understand vortex dynamics in superconducting microstrips. vortex velocitysuperconductivitystriped domain configuration § INTRODUCTIONThe ultra-rapid dynamics exhibited by superconducting vortices during dissipation involve complex physics that applies to systems not in equilibrium <cit.>. The maximum attainable vortex velocity in superconducting microstrips, as determined by current-voltage (I-V) curves, is constrained by Larkin-Ovchinnikov (LO) instability <cit.>. This phenomenon manifests as an abrupt transition to the normal state during dissipation, associating the vortex velocity at the instability (v_LO) with the time of recombination of normal electrons into Cooper pairs (τ). The LO instability, originally anticipated to be solely governed by intrinsic properties, is also observed to be influenced by local heating generated by irregularities and geometric defects. Reducing surface roughness and minimizing local heating effects, the maximum v_LO, typically observed at small magnetic fields, can increase from values near 1 km/s to exceed 10 km/s <cit.>. The quest for achieving high vortex velocities while mitigating unintended effects is motivated by two primary reasons. Firstly, accurately determining the τ through LO theory is crucial. This is particularly relevant because τ dictates the maximum resolution achievable when utilizing the material in superconducting nanowire single-photon detectors (SNSPD) <cit.>. Secondly, this pursuit is driven by exploring new phenomena associated with the Cherenkov-like generation of sound and spin waves induced by fast-moving vortices in superconducting/magnetic systems <cit.>. Hence, material engineering, encompassing both intrinsic properties and structural/geometrical characteristics, is crucial in enhancing vortex speeds in micro and nanosystems.Proximity effects significantly influence the properties of superconducting/ferromagnetic hybrids. Studies concerning vortex dynamics span from pinning mechanisms to the impact on vortex critical velocities at the LO instability <cit.>. The proximity to ferromagnetic materials typically enhances vortex velocities and correlates with a shorter τ (since v_LO^2∝ 1/τ) <cit.>. 
Furthermore, domain boundaries within ferromagnetic materials serve as effective guides for vortices, further influencing their behavior. Additionally, it is worth noting that magnetic domain lines play a crucial role in enhancing vortex velocities when compared to single films and superconducting bilayers <cit.>. An interesting observation is the increase in vortex velocities at the crossover between magnetization in-plane and striped domain structures in Nb/permalloy bilayers <cit.>. This observation suggests the possibility of tuning the LO instability in magnetic materials by modifying the domain structure in a same system.In this study, we investigate the control of vortex critical velocities in superconducting/ferromagnetic hybrids by manipulating the stripe domain configuration. This manipulation directly affects the motion of vortices, which is influenced by the Lorentz force. Our bilayer consists of a 23 nm thick Mo_2N film and a 48 nm thick FePt film as the ferromagnetic system. Mo_2N thin films exhibit a critical temperature (T_c) of approximately 8 K when grown at room temperature <cit.>, while FePt films exceeding 40 nm in thickness undergo a transition from in-plane magnetization to a striped domain structure <cit.>. Moreover, upon saturating and subsequently removing the magnetic field, the stripes align themselves parallel to the field direction <cit.>. We compare the results obtained when the stripes function as guides for vortices (perpendicular to the current) and as barriers to vortex motion (parallel to the current). While a previous study <cit.> demonstrated the effectiveness of stripe domains in increasing vortex velocities, our work takes a step further by not only confirming their effectiveness but also showcasing the impact of modifying their configuration to fine-tune critical vortex speeds during dissipation. Additionally, our findings contribute to a deeper understanding of the role of disorder and vortex path winding in the LO instability.§ METHODS A Mo_2N / FePt bilayer was grown through reactive sputtering at room temperature on a (100) Si substrate. The base pressure in the chamber was 4×10^-5 Pa. The Mo_2N layer grew in a 6% N_2 atmosphere at 0.66 Pa (N_2+Ar), positioned above the Mo target (100 W) at 0.15 m. The FePt layer was grown in pure argon at 0.4 Pa using 20 W at 0.1 m target distance. The bilayer consisted of a 23 nm thick Mo_2N and 48 nm of FePt. The thickness for Mo_2N was selected considering that T_c is weakly affected by dimensional effects <cit.>.XRD data (θ - 2θ configuration) were obtained using Panalytical Empyrean equipment at 40 kV and 30 mA with CuK_α radiation and an angular resolution of 0.013 ^o. Atomic force (AFM) and magnetic force microscope (MFM) measurements were conducted on a Dimension 3100 ©Brucker microscope in tapping mode. Electrical transport measurements were performed on an 80(L) × 5 μm (w) bridge using standard four-terminal transport. Magnetic measurements were done with a commercial SQUID. Bridges were fabricated using optical lithography and argon ion milling. Current-voltage (IV) curves were obtained with a Keithley Nanovoltmeter Model 2128A and a Keithley Current source Model 6221 AC/DC in synchronized mode with a 0.1 ms pulse duration.§ RESULTS AND DISCUSSIONFigure <ref> shows low-angle diffraction data for the studied sample and the corresponding fitting using the Parrat code <cit.>. The pattern displays well-defined maxima and minima, which are characteristic of samples with low roughness. 
The thicknesses obtained from the fit correspond to 23 nm for Mo_2N and 48 nm for FePt. The flatness of the sample's surface was confirmed by AFM (see inset in Figure <ref>a), which appeared to be free of defects and exhibited a root mean square (RMS) roughness of 0.2 nm. Figure <ref>b displays an XRD pattern for the bilayer, where the (111) peak corresponds to a textured, disordered face-centered cubic (FCC) structure <cit.>. The (200) reflection, which is usually observed at approximately 43^o for γ-Mo_2N, does not appear due to the nanocrystalline nature and thickness of the film <cit.>. Furthermore, the latter peak is masked by the stronger reflection from the FePt. Figures <ref>ab compare the magnetic hysteresis loops at 10 K, with the magnetic field applied parallel (H//S) and perpendicular to the surface (H⊥S). The saturation magnetization for the sample is approximately 1100 emu/cm^3, which aligns with the expected values for the material <cit.>. The hysteresis loop for H//S displays typical features of stripe-like domains, including significant coercivity (H_c ≈ 0.018 T) and a reduction in remanence (compared to saturation) due to the perpendicular component of the stripes. The hysteresis loops with H⊥S show remnant magnetization attributed to the out-of-plane component produced by the stripes, with H_c ∼ 0.1 T. Additionally, saturation dominated by shape anisotropy occurs at H_s ≈ 1.6 T. MFM images at room temperature (Figure <ref>c) confirm the presence of magnetic stripes, with a width of approximately 40 nm, a value relevant for comparison with the intervortex distance set by the magnetic field. Indeed, the distance between maxima and minima in the field modulation produced by the stripes is ≈ 80 nm, which, assuming it does not change with temperature, corresponds to the intervortex distance a = 1.073√(Φ_0/μ_0H) at μ_0H ≈ 0.37 T. In addition to the magnetic characterization, we determined the sample's T_c by measuring resistance as a function of temperature. The bilayer exhibits metallic behavior due to FePt, with a T_c of 7.4 K (Figure <ref>d, main panel, and inset), in accordance with expectations for single films of similar thickness <cit.>. We investigated the influence of the magnetic domain configuration on vortex instability. To make the comparison, we aligned the magnetic stripe domains either perpendicular or parallel to the applied current in the microbridge (as shown in Figure <ref>a). We achieved this by rotating the sample and the magnetic field orientation, either in-plane (to fix the domains) or out-of-plane for IV curves. Initially, we saturated the sample with H//S, removed the field, and then rotated the sample to apply H⊥S. We performed measurements for each configuration during different cooldowns, labeled as measurement I (parallel), II (perpendicular), and III (parallel). The analysis was conducted at 3 K, measuring IV curves as a function of the magnetic field (see typical curves in Figure <ref>b). The vortex velocity can be obtained as v = V/(μ_0HL), where V is the voltage at the instability and L is the distance between voltage contacts. Measurements I and III (Figure <ref>c) were performed to show that the process is reproducible in these extreme configurations. Our results indicate that the magnetic field configuration influences vortex velocities at low fields (μ_0H< 0.05 T).
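As a quick consistency check of the expressions used above, the following back-of-the-envelope script reproduces the ≈ 80 nm intervortex spacing at 0.37 T and the ≈ 5 km/s velocity scale; the instability voltage of 20 mV is an assumed illustrative value, while the 80 μm contact separation corresponds to the bridge length.

```python
# Back-of-the-envelope check of the expressions above; the instability voltage
# is an assumed illustrative value, all other numbers are taken from the text.
import numpy as np

phi0 = 2.07e-15               # flux quantum (Wb)
L = 80e-6                     # contact separation, taken as the 80 um bridge length (m)

# Triangular-lattice intervortex spacing a = 1.073 sqrt(phi0 / (mu0 H)):
mu0H = 0.37                   # field at which a matches the stripe period (T)
a = 1.073 * np.sqrt(phi0 / mu0H)
print(f"a = {a*1e9:.0f} nm")  # ~ 80 nm, matching the MFM stripe modulation

# Vortex velocity at the instability, v = V / (mu0 H L):
V, mu0H_low = 20e-3, 0.05     # assumed 20 mV instability voltage at 0.05 T
v = V / (mu0H_low * L)
print(f"v = {v/1e3:.0f} km/s")  # ~ 5 km/s, the low-field scale reported below
```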
Specifically, when the stripes act as vortex guides (measurements I and III), velocities reach ≈ 5 km/s at low field and it exhibits the typical decay with field commonly observed in v_LO (H) dependencies <cit.>. In contrast, velocities decrease significantly and deviate from the typical decay with the field for μ_0H < 0.05 T when stripes create winding vortex paths (measurement II). Both magnetic configurations exhibit similar behavior for magnetic fields exceeding 0.05 T, indicating that as the magnetic field increases, the magnetic paths tend to vanish. This value is much smaller than the μ_0H = 0.37 T estimated as the optimal inter-vortex distance determined by the stripe width. It is important to mention that despite differences at low magnetic field, the data show a v_LO∝ H^-0.5 dependence, as is expected for finite heat removal from the substrate <cit.>. Since the quasiparticle diffusion constant D may be influenced by proximity to a conducting metal <cit.>, we have chosen to refrain from extracting τ using the A. Bezuglyj and V. Shklovskij model <cit.>.While previous studies demonstrated the effectiveness of stripe domains and vortex guides in increasing vortex velocities <cit.>, our work not only reaffirms the importance of vortex configuration but also highlights the impact of modifying this configuration to fine-tune critical vortex speeds during dissipation, transitioning from vortex guides to winding paths. In addition to analyzing the influence of the striped domain configuration on vortex critical velocities, it is worthwhile to conduct a detailed examination of the impact of the proximity of the Mo_2N layer to a ferromagnetic material. When compared to a Mo_2N film of similar thickness, we observed that velocities at low magnetic fields increased from approximately 3 km/s to approximately 5 km/s <cit.>. Furthermore, the values at moderate magnetic fields increased significantly, reaching approximately 1.5 km/s at μ_0H = 0.4 T, compared to approximately 0.7 km/s for a Mo_2N single layer. This increase may be attributed to the ferromagnetic nature of the FePt layer as well as its thermal conductivity due to its metallic properties. As we recently reported, for moderate and high magnetic field conditions, superconducting/metal bilayers exhibit an increase in vortex velocities that scale with the thickness and thermal conductivity of the metal used <cit.>. In the case of the FePt layer, this effect may be associated with heat dissipation and the potential contribution of its proximity to the ferromagnetic material. On the other hand, higher vortex velocities can be analyzed within two different scenarios: 1) smaller τ <cit.>, or 2) small non-equilibrium effects related to Joule heating impacting the instability of vortex motion <cit.>. It is important to note that vortex velocities at low magnetic fields of approximately 5 km/s are similar to those reported in other superconducting/ferromagnetic systems <cit.>, which suggests a common mechanism leading to an increase in vortex velocities. Finally, concerning the winding paths of vortices, our results unequivocally show that irregularities within the superconducting material can lead to a decrease in vortex velocities, a phenomenon not solely attributable to the constraints imposed by intrinsic superconducting properties in the LO theory. 
Introducing tunable magnetic domains may offer a promising avenue to enhance vortex velocities and investigate the physics associated with the interaction between magnetic moments in superconducting/ferromagnetic hybrids. § CONCLUSIONS In summary, we examined the potential to adjust vortex velocities in superconducting/ferromagnetic bilayers by modifying the nanoscale magnetic domain configuration. Our findings contrast two extreme configurations of stripes, one providing magnetic guides for rapid motion and the other creating winding vortex paths for slower motion. Although the effect is primarily noticeable at low magnetic fields, where the magnetic modulation caused by stripe domains is expected to be minimally influenced by external fields, our results using FePt with nanosized stripe domains offer a straightforward platform for investigating vortex dynamics and critical vortex behavior in superconducting microsystems. § ACKNOWLEDGEMENTS This work was partially supported by the ANPCYT (PICT 2018- 01597), U. N. de Cuyo 06/C013T1, CONICET (PIP 11220210100263CO), BrainLink program funded by the Ministry of Science and ICT through the National Research Foundation of Korea (2022H1D3A3A01077468) and Brain Pool program funded by the Ministry of Science and ICT through the National Research Foundation of Korea (RS-2023-00222408). JK and JY were supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (Grant No. NRF-2019R1A2C2090356) and the Technology Development Program (Grant No. S3198743) funded by the Ministry of SMEs and Startups (MSS, Korea). MS and NH are members of the Instituto de Nanociencia y Nanotecnología INN (CNEA-CONICET). § AUTHOR CONTRIBUTION GB and NH were responsible for growing the samples and conducting XRD analysis, as well as performing electrical transport measurements. MS conducted AFM and MFM measurements and analysis. GB and NH wrote the manuscript, while all authors contributed to the data discussion.§ DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests.
http://arxiv.org/abs/2310.18524v2
{ "authors": [ "Gastón Blatter", "Martín Sirena", "Yeonkyu Lee", "Jeehoon Kim", "Nestor Haberkorn" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20231027225014", "title": "Tuning vortex critical velocity in Mo$_2$N thin films via striped magnetic domain configuration" }
A penalty-projection based efficient and accurate Stochastic Collocation Method for magnetohydrodynamic flows Muhammad MohebujjamanmitDepartment of Mathematics, University of Alabama at Birmingham, AL 35294, USA; This author's work was Partially supported by the National Science Foundation grant DMS-2213274, and Texas A&M International University.[Correspondence: [email protected]] Julian Miranda tamiuDepartment of Mathematics and Physics, Texas A&M International University, TX 78041, USA; This author's work was partially supported by the National Science Foundation grant DMS-2213274. Md. Abdullah Al MahbubcomillaDepartment of Mathematics, Comilla University, Cumilla 3506, Bangladesh; Mengying XiaoUWFDepartment of Mathematics and Statistics, University of West Florida, Pensacosa, FL 32514, USA. January 14, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= We propose, analyze, and test a penalty projection-based efficient and accurate algorithm for the Uncertainty Quantification (UQ) of the time-dependent Magnetohydrodynamic (MHD) flow problems in convection-dominated regimes. The algorithm uses the Elsässer variables formulation and discrete Hodge decomposition to decouple the stochastic MHD system into four sub-problems (at each time-step for each realization) which are much easier to solve than solving the coupled saddle point problems. Each of the sub-problems is designed in a sophisticated way so that at each time-step the system matrix remains the same for all the realizations but with different right-hand-side vectors which allows saving a huge amount of computer memory and computational time. Moreover, the scheme is equipped with ensemble eddy-viscosity and grad-div stabilization terms. The stability of the algorithm is proven rigorously. We prove that the proposed scheme converges to an equivalent non-projection-based coupled MHD scheme for large grad-div stabilization parameter values. We examine how Stochastic Collocation Methods (SCMs) can be combined with the proposed penalty projection UQ algorithm. Finally, a series of numerical experiments are given which verify the predicted convergence rates, show the algorithm's performance on benchmark channel flow over a rectangular step, and a regularized lid-driven cavity problem with high random Reynolds number and magnetic Reynolds number. Keywords. magnetohydrodynamics, uncertainty quantification, fast ensemble calculation, finite element method, stochastic collocation methods, penalty-projection method Mathematics Subject Classifications (2020): 65M12, 65M22, 65M60, 76W05 myheadings plain A penalty-projection efficient algorithm for stochastic MHD flows M. Mohebujjaman, J. Miranda, M. A. A. Mahbub, and M. Xiao § INTRODUCTION Numerical simulation of MHD flow has been explored by many scientists <cit.> for the last couple of decades. 
However, their promise to reduce the computational cost and high accuracy for complex and larger MHD problems still remains an open question. The situation becomes even worse for the more realistic MHD flows, where convective-dominated flows interact with magnetic fields and model parameters involve random noises which introduce aleatoric uncertainty into the system and play a key role in determining the characteristic of the final solutions. We will use rigorous mathematics to develop novel computational frameworks to reduce the immense computational complexity involved in commonly used algorithms for Stochastic MHD (SMHD) flow problems. Let ⊂ℝ^d (d=2,3) be a convex polygonal or polyhedral physical domain with boundary ∂. A complete probability space is denoted by (Ω,ℱ,P) with Ω the set of outcomes, ℱ⊂ 2^Ω the σ-algebra of events, and P:ℱ→ [0,1] represents a probability measure. We consider the time-dependent, dimensionless, viscoresistive, incompressibleSMHD flow problems for homogeneous Newtonian fluids which are governed by the following non-linear coupled stochastic PDEs <cit.>: _t+·∇-s·∇-∇·(ν(,ω)∇)+∇ p=(t,,ω), in (0,T]×, _t+·∇-·∇-∇·(ν_m(,ω)∇)+∇λ =∇×(t,,ω),in (0,T]×, ∇·=0, in (0,T]×,∇· =0, in (0,T]×, (,0,ω)= ^0(),in, (,0,ω)= ^0(),in, together with appropriate boundary conditions. Where T>0 represents the simulation end time andis the spatial variable. The viscosity ν(,ω) and magnetic diffusivity ν_m(,ω) are modeled as random fields with ω∈Ω. Here, the unknown quantities are the velocity fieldand the magnetic flux densitywhich map as,:Λ→ℝ^d, and the modified pressure p:Λ→ℝ, where Λ:=(0,T]××Ω, and d∈{2,3}. The artificial magnetic pressure λ:Λ→{0}, but λ≠ 0 in the discrete case. The external forces are represented by , and ∇× in the momentum equation (<ref>), and induction equation (<ref>), respectively. The coupling parameter is denoted by s> 0 which is the coefficient of the Lorentz force into the momentum equation (<ref>); If s=0, the fluid flow is not influenced by the magnetic field. The recent study shows <cit.> instead of solving (<ref>)-(<ref>) (in terms of the original variables) together with appropriate boundary conditions, a change of variable called the Elsässer variables formulation, allows to propose stable decoupled algorithms. This breakthrough idea was first presented by C. Trenchea in <cit.>. Defining :=+√(s), :=-√(s), _1:=+√(s)∇×, _2:=-√(s)∇×, q:=p+√(s)λ, and r:=p-√(s)λ produces the Elsässer variables formulation of the above stochastic MHD system: _t+·∇-∇·[ν(,ω)+ν_m(,ω)/2∇]-∇·[ν(,ω)-ν_m(,ω)/2∇]+∇ q =_1, _t+·∇-∇·[ν(,ω)+ν_m(,ω)/2∇]-∇·[ν(,ω)-ν_m(,ω)/2∇]+∇ r =_2, ∇·=∇· =0, together with the initial and boundary conditions. The L^2() inner product is denoted by (·,·). Defining the function spaces for velocity and magnetic flux density as :=_0^1(), for the pressure and the magnetic pressure as Q:=L_0^2(), and the stochastic space as :=_P^2(Ω), we get the weak formulation of (<ref>)-(<ref>) as 𝔼[(_t,)] +𝔼[(·∇,)]+𝔼[ν(,ω)+ν_m(,ω)/2(∇,∇)]+𝔼[ν(,ω)-ν_m(,ω)/2(∇,∇)]-𝔼( q,∇·)=𝔼[(_1,)],∀∈⊗, 𝔼[(_t,)] +𝔼[(·∇,)]+𝔼[ν(,ω)+ν_m(,ω)/2(∇,∇)]+𝔼[ν(,ω)-ν_m(,ω)/2(∇,∇)]-𝔼[( r,∇·)]=𝔼[(_2,)],∀∈⊗,𝔼[(∇·,ζ)]=𝔼[(∇·,η)]=0, ∀ζ,η∈ Q⊗. In UQ, it is common to assume that the randomness is approximated by a finite number of random variables <cit.>. To use SCMs, we consider =(y_1,⋯,y_N)∈⊂ℝ^N be a finite N∈ℕ dimensional vector with joint probability density function ρ() in some parameter space =∏_l=1^NΓ_l. 
Then, the random fields ν(,ω), and ν_m(,ω) can be approximated in terms of the random variable as ν(,), and ν_m(,), respectively. We define the space of square-integrable functions onsubject to the weight ρ() as :=_ρ^2(), and consider the following weak formulation: Find ,∈⊗ and q,r ∈ Q ⊗ which, for almost all t∈(0,T], satisfy ∫_Γ(_t,)ρ()d + ∫_Γ (·∇,)ρ()d + ∫_Γ[ν(,)+ν_m(,)/2(∇,∇)]ρ()d+∫_Γ[ν(,)-ν_m(,)/2(∇,∇)]ρ()d-∫_Γ(q ,∇·)ρ()d= ∫_Γ(_1,)ρ()d, ∀∈⊗, ∫_Γ(_t,)ρ()d +∫_Γ (·∇,)ρ()d+∫_Γ[ν(,)+ν_m(,)/2(∇,∇)]ρ()d+∫_Γ[ν(,)-ν_m(,)/2(∇,∇)]ρ()d-∫_Γ( r,∇·)ρ()d=∫_Γ(_2,)ρ()d,∀∈⊗, ∫_Γ(∇·,ζ) ρ()d =∫_Γ(∇·,η) ρ()d=0, ∀ζ,η∈ Q ⊗. To have an effective and efficient penalty-projection scheme for UQ, we assume affine dependence of the random variables for the viscosity and magnetic diffusivity as below: ν(,)=ν_0()+∑_l=1^Nν_l()y_l,  and  ν_m(,)=ν_m,0()+∑_l=1^Nν_m,l()y_l. In order to have a robust high fidelity solution of (<ref>)-(<ref>), which is often essential for many surrogate models <cit.>, one of the major hurdles is to realize the model over an ensemble of flow parameters with high spatial resolution. In this case, a popular approach is to use SCMs <cit.>, which requires fewer realizations compare to other sampling-based UQ methods, such as the Monte Carlo method which comes with high computational complexity <cit.>. SCMs use global polynomial approximation and are independent of the PDE solvers, thus compatible with combining with any legacy code. In this paper, we present, analyze, and test, a novel, efficient, and accurate SCMs based Stabilized Penalty Projection algorithm for the UQ of SMHD (SCM-SPP-SMHD) flow problems, which is the conjunction of the SCMs and an SPP - Finite Element Method (SPP-FEM). The SPP-FEM is based on Elsässer formulation which provides a stable decoupling of the SMHD system into two Oseen type sub-problems (velocity-pressure and magnetic field-magnetic pressure types saddle-point sub-problems). These two sub-problems can be solved simultaneously if the resources are available. For each of these sub-problems, we employ “Projection Methods” which utilize a discrete Hodge decomposition at each time-step (which was proposed by Chorin and Temam <cit.> in the early 1960s) together with recent stabilization techniques <cit.>. This allows us to solve the difficult 2× 2 block saddle-point sub-problems into two easier linear solves (a 1× 1, and a 2× 2 block systems), particularly in 3D problems with high Reynolds number and high magnetic Reynolds numbers. The use of a large grad-div stabilization parameter helps us to improve the accuracy of the penalty-projection algorithm which has examined on problems of Navier-Stokes (N-S) flow <cit.>, and in fluid-fluid interaction <cit.>. Also, each of the sub-problems contains an ensemble eddy viscosity term which is taken from the idea of turbulence modeling techniques to reduce the numerical instability particularly in 3D problems <cit.>. Moreover, for each of the linear solves, the scheme is designed in an elegant way so that at each time-step, each realization shares a common system matrix but a different right-hand-side vector, following the breakthrough idea by N. Jiang and W. Layton given in <cit.>. This allows us to use orders of magnitude shorter than the system matrix assembly time (which is assumed to be the most time-consuming step in the finite element assembly process). Moreover, for problems for which a direct solver is appropriate, the LU decomposition or its variants of the system matrix is needed to compute only once per time-step. 
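Schematically, the reuse of a single factorization across all realizations can be sketched as below; this is an illustration with SciPy sparse matrices and random placeholder data, not the solver used in the paper.

```python
# Schematic illustration of reusing one factorization for all realizations:
# A [x_1 | ... | x_Nsc] = [b_1 | ... | b_Nsc]; matrix and data are placeholders.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, N_sc = 1000, 9
A = sp.random(n, n, density=1e-3, format="csc") + 10.0 * sp.eye(n, format="csc")
B = np.random.rand(n, N_sc)               # one right-hand side per realization

lu = spla.splu(A)                         # factor the shared system matrix once
X = np.column_stack([lu.solve(B[:, j]) for j in range(N_sc)])
# For large problems, the same idea applies with a Krylov solver: one
# preconditioner is built per time step and shared by all right-hand sides.
```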
For large size and complex problems, Krylov subspace methods are appropriate <cit.> for which a single preconditioner is needed to be built for each sub-problem per time-step. Further, the advantage of the block linear solvers can be taken with a single system matrix for multiple right-hand-side vectors at each time-step. This elegant feature leads to saving a huge computational cost and memory for the UQ of complex dynamical systems and is successfully implemented in the surface data assimilation <cit.>, turbulence modeling <cit.>, porous media flow <cit.>, Boussinesq <cit.>, weather forecasting <cit.>, spectral methods <cit.>, sensitivity analyses <cit.>, MHD <cit.>, N-S simulations <cit.>, and hydrology <cit.>. The proposed SCM-SPP-SMHD algorithm consists of grad-div stabilization terms with coefficient parameter γ. Large γ values help to achieve optimal temporal accuracy, reducing penalty-projection splitting errors <cit.>. Finally, using straightforward transformations, we get back the solution in terms of the original variables. Thus, we consider a uniform time-step size Δ t and let t_n=nΔ t for n=0, 1, ⋯., (suppress the spatial discretization momentarily), then computing the N_sc (number of stochastic collocation points) solutions independently, takes the following form: For j=1,2,...,N_sc, Step 1: Compute _j^n+1: _j^n+1/Δ t+ <>^n·∇_j^n+1-∇·(ν+ν_m/2∇_j^n+1)-∇(γ∇·_j^n+1)-∇·(2ν_T(^',t^n)∇_j^n+1) = _1,j(t^n+1)+_j^n/Δ t-_j^'n·∇_j^n+∇·(ν_j-ν_m,j/2∇_j^n)+∇·(ν_j^'+ν_m,j^'/2∇_j^n). Step 2: Compute _j^n+1, and _j^n+1: _j^n+1/Δ t+∇_j^n+1 = _j^n+1/Δ t, ∇·_j^n+1= 0. Step 3: Compute _j^n+1: _j^n+1/Δ t+ <>^n·∇_j^n+1-∇·(ν+ν_m/2∇_j^n+1)-∇(γ∇·_j^n+1)-∇·(2ν_T(^',t^n)∇_j^n+1) = _2,j(t^n+1)+_j^n/Δ t-_j^'n·∇_j^n+∇·(ν_j-ν_m,j/2∇_j^n)+∇·(ν_j^'+ν_m,j^'/2∇_j^n). Step 4: Compute _j^n+1, and _j^n+1: _j^n+1/Δ t+∇_j^n+1 = _j^n+1/Δ t, ∇·_j^n+1= 0. The ensemble mean and fluctuation about the mean are defined as follows: <>^n: =1/N_sc∑_j=1^N_sc_j^n, _j^'n:=_j^n-<>^n, ν:=1/N_sc∑_j=1^N_scν_j,ν_j^': =ν_j-ν,ν_m:=1/N_sc∑_j=1^N_scν_m,j,ν_m,j^':=ν_m,j-ν_m. The eddy viscosity term, which is of O(Δ t) accurate, is defined using mixing length phenomenology following <cit.>, and is given by ν_T(^',t^n):=μΔ t(l_^n)^2, and (l_^n)^2:=∑_j=1^N_sc|_j^'n|^2, where |·| denotes length of a vector. The eddy viscosity term helps the scheme to provide stability for flows that are not resolved on particular meshes. The solutions , andin Step 1, and Step 3, respectively, do not satisfy the divergence-free conditions, whereas the solution , andin Step 2, and Step 3, respectively, do not satisfy the boundary conditions. Step 1 has only unknown _j^n+1, and (the finite element variational formation) provides a 1× 1 block linear system which is not dependent on the index j, thus for all N_sc realizations, the system matrix remains the same but the right-hand-side vector varies. Therefore, at each time-step, we need to solve a linear system of equations of the form A[_1|_2|⋯|_N_sc]=[_1|_2|⋯|_N_sc], where A is a sparse coefficient matrix, _j, and _j are the solution, and right-hand-side vector for the j-th realization, respectively. This feature prevails in all other three steps which makes the algorithm efficient in saving a huge time in global system matrix assembly and in saving a massive computer memory. The size of the system matrix A is much smaller than the size of the corresponding system matrix (which is the 2× 2 block linear system) that arises in the saddle-point sub-problems. 
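The ensemble quantities entering the scheme can be sketched as follows; the nodal array of velocity-like variables is a random placeholder, and representing the finite element functions by their nodal values is a simplification made only for illustration.

```python
# Minimal NumPy sketch of the ensemble quantities defined above; representing the
# finite element functions by nodal values is a simplification for illustration.
import numpy as np

N_sc, n_nodes, dim = 9, 500, 2
dt, mu = 1e-2, 1.0
v = np.random.rand(N_sc, n_nodes, dim)    # v_j^n at the mesh nodes, j = 1, ..., N_sc

v_mean = v.mean(axis=0)                   # <v>^n  = (1/N_sc) sum_j v_j^n
v_prime = v - v_mean                      # v_j'^n = v_j^n - <v>^n
l_sq = (v_prime ** 2).sum(axis=(0, 2))    # (l_v^n)^2 = sum_j |v_j'^n|^2, nodewise
nu_T = mu * dt * l_sq                     # ensemble eddy viscosity nu_T(v', t^n)
```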
Step 2 requires a linear solve for its 2× 2 block system, which is a symmetric positive definite matrix (since there is no non-linear term present), thus the advantage of using block conjugate-gradient method <cit.> can be taken. Therefore, by employing the penalty-projection splitting, we replace the difficult linear solve for each of the saddle point sub-systems (corresponding to the Oseen-type sub-problems that arise after decoupling the system given in (<ref>)-(<ref>), see <cit.>) into two easier linear solves at each time-step. In Step 1, and Step 3, the system matrices are nonsymmetric (due to the presence of non-linear terms), and we can take advantage of the block GMRES <cit.> solver for the nonsymmetric system with multiple right-hand-side vectors. On the other hand, for problems in which a direct solver is more appropriate, its decomposition needs to be done only once per time-step and can be reused for all the realizations. Therefore, at each time-step, the feature of having a common system matrix in each of Steps 1-4, and the penalty-projection splitting make the algorithm efficient in saving a huge computational time and computer memory. Moreover, having a grad-div stabilization term in Step 1, and in Step 3, helps the scheme in achieving temporal accuracy similar to the non-splitting (velocity-pressure coupled type) algorithms for the coefficient γ→∞. Using finite element spatial discretization, in this paper, we investigate the novel SCM-SPP-SMHD ensemble scheme in a fully discrete setting. The efficient SCM-SPP-SMHD scheme is proved to be stable and convergent without any time-step restriction but takes care of uncertainties in all model data. To the best of our knowledge, SCM-SPP-SMHD is new for the UQ of MHD flow problems. The rest of the paper is organized as follows: We provide necessary notations and mathematical preliminaries in Section <ref> to follow a smooth analysis. As a benchmark algorithm for (<ref>)-(<ref>), we consider a first-order backward-Euler time-stepping fully discrete Coupled (where the velocity and magnetic fields like variables are decoupled but the velocity with the pressure, and magnetic field with the magnetic pressure are not) algorithm given in <cit.> for Stochastic MHD (Coupled-SMHD) in Section <ref>. We also discuss the stability, convergence, regularity assumptions, and small data assumptions of the Coupled-SMHD scheme for a fair comparison with the SCM-SPP-SMHD scheme. In Section <ref>, we present the SCM-SPP-SMHD scheme and describe the additional functional space we need for further analysis. We also state and prove the stability and convergence theorem of the SCM-SPP-SMHD scheme in Section <ref>. A brief description of SCMs is given in Section <ref>. To support the theoretical analysis, we compute the convergence rates varying γ, time-step size, and mesh width, and finally implement the scheme in benchmark channel flow past a rectangular step and a regularized lid-driven cavity problems with space-dependent variable high random Reynolds number and variable high random magnetic Reynolds number in Section <ref>. Finally, conclusions and future research avenues are given in Section <ref>. § NOTATION AND PRELIMINARIES The usual L^2() norm and inner product are denoted by . and (.,.), respectively. Similarly, the L^p() norms and the Sobolev W_p^k() norms are ._L^p and ._W_p^k, respectively for k∈ℕ,1≤ p≤∞. Sobolev space W_2^k() is represented by H^k() with norm ._k. The vector-valued spaces are^p()=(L^p())^d, and^k()=(H^k())^d. 
Forbeing a normed function space in , L^p(0,T;) is the space of all functions defined on (0,T]× for which the following norm _L^p(0,T;)=∫_0^T_^pdt^1/p,p∈[1,∞) is finite. For p=∞, the usual modification is used in the definition of this space. The natural function spaces for our problem are : =_0^1()={∈^p() :∇∈ L^2()^d× d, =0 ∂}, : ={∈^1():·=0on ∂}, Q: =L_0^2()={ q∈ L^2(): ∫_ qd=0}, wheredenotes the outward unit normal vector normal to the boundary ∂. Recall the Poincaré inequality holds in X: There exists C depending only onsatisfying for all ∈ X, ≤ C ∇. The divergence free velocity space is given by :={∈:(∇·, q)=0, ∀ q∈ Q}. We define the trilinear form b:××→ℝ by b(,,):=(·∇,), and recall from <cit.> that b(,,)=0 if ∈, and |b(,,)|≤ C()∇∇∇,,,∈. The conforming finite element spaces are denoted by _h⊂, andQ_h⊂ Q, and we assume a regular triangulation τ_h(), where h is the maximum triangle diameter. We assume that (_h,Q_h) satisfies the usual discrete inf-sup condition inf_q_h∈ Q_hsup__h∈_h(q_h,·_h)/q_h_h≥β>0, where β is independent of h. We assume that there exists a finite element space_h⊂. The space of discretely divergence free functions is defined as _h:={_h∈_h:(∇·_h,q_h)=0,∀ q_h∈ Q_h}. For simplicity of our analysis, we will use Scott-Vogelius (SV) finite element pair (_h, Q_h)=((P_k)^d, P_k-1^disc),which satisfies the inf-sup condition when the mesh is created as a barycenter refinement of a regular mesh, and the polynomial degree k≥ d <cit.>. Our analysis can be extended without difficulty to any inf-sup stable element choice,however, there will be additional terms that appear in the convergence analysis if non-divergence-free elements are chosen. We have the following approximation properties in (_h,Q_h): <cit.> inf__h∈_h-_h ≤ Ch^k+1||_k+1,∈^k+1(), inf__h∈_h (-_h) ≤ Ch^k||_k+1,∈^k+1(), inf_q_h∈ Q_hp-q_h ≤ Ch^k|p|_k,p∈ H^k(), where |·|_r denotes the H^r or ^r seminorm. We will assume the mesh is sufficiently regular for the inverse inequality to hold.The following lemma for the discrete Grönwall inequality was given in <cit.>. Let Δ t, ℰ, a_n, b_n, c_n, d_n be non-negative numbers for n=1,⋯, M such that a_M+Δ t ∑_n=1^Mb_n≤Δ t∑_n=1^M-1d_na_n+Δ t∑_n=1^Mc_n+ℰM∈ℕ, then for all Δ t> 0, a_M+Δ t∑_n=1^Mb_n≤(Δ t∑_n=1^M-1d_n)Δ t∑_n=1^Mc_n+ℰM∈ℕ. § COUPLED BACKWARD-EULER FULLY DISCRETE TIME-STEPPING SCHEME In this section, we consider a first-order accurate, fully discrete, backward-Euler time-stepping “Coupled-SMHD” algorithm as the benchmark UQ scheme for SMHD flow problems which was proposed and analyzed in <cit.>. We state its stability and convergence theorems and give a rigorous proof of the adopted small data assumption which will be used in the analysis of the proposed SCM-SPP-SMHD scheme in Section <ref>. The Coupled-SMHD scheme is based on Elsässer variables formulation, although, its momentum and induction-like equations are decoupled, the velocity and pressure, magnetic field, and magnetic pressure-like variables are still coupled. The unconditionally stable first-order temporally accurate Coupled-SMHD scheme is proven to be optimally accurate in 2D, and sub-optimally accurate in 3D, which is because of the ensemble eddy-viscosity term present in the scheme that leads to the use of the discrete inverse inequality. The Coupled-SMHD scheme was successfully implemented into a benchmark regularized lid-driven cavity and flow over rectangular step problems with high random Reynolds number with random low magnetic diffusivity parameter. 
The Coupled-SMHD algorithm proposed in <cit.> is efficient in terms of saving a huge computational time and computer memory as it allows to use of block linear solver at each time-step and is able to provide solutions of all realizations using a single block linear solve. However, for the Coupled-SMHD, we are required to solve the saddle-point system at each time since the velocity- and pressure-like variables are coupled together. To decouple the velocity- and pressure-like variables, and to make the Coupled-SMHD scheme more efficient, we combine the stochastic collocation method with a recently proposed grad-div stabilization penalty projection method for N-S flow problems in <cit.>. The Coupled-SMHD is presented in Algorithm <ref>. For simplicity of our analysis, we define ν_min:=min_∈ν(),ν_m,min:=min_∈ν_m(), and α_min:=min_1≤ j≤ Jα_j,where α_j:=ν_min+ν_m,min-ν_j-ν_m,j_∞-ν_j^'+ν_m,j^'_∞, for j=1,2,⋯, N_sc, and state the stability and convergence theorems of the Algorithm <ref>. Suppose f_1,j,f_2,j∈ L^2(0,T;^-1()), and _j,h^0, _j,h^0 ∈^1(), for j=1,2,⋯, N_sc then the solutions to the Algorithm <ref> are stable: For any Δ t>0, if α_j>0, and μ>1/2 _j,h^M^2+_j,h^M^2+α_minΔ t/2∑_n=1^M(∇_j,h^n^2+∇_j,h^n^2)≤ C(data). See Theorem 4.1 in<cit.>. Assume (_j, _j, q_j, r_j) satisfying (<ref>)-(<ref>) with regularity assumptions _j, _j∈ L^∞(0,T;^k+1()), _j,t, _j,t∈ L^∞(0,T;^2()), _j,tt, _j,tt∈ L^∞(0,T;^2()) for k≥ 2 andj=1,2,⋯,N_sc, then the solution (_j,h,_j,h) to the Algorithm <ref> converges to the true solution: For any Δ t>0, if α_j>0, and μ>1/2, then one has _j(T) -_j,h^M^2+_j(T)-_j,h^M^2+α_minΔ t/2∑_n=1^M(∇(_j(t^n)-_j,h^n)^2+∇(_j(t^n)-_j,h^n)^2)≤C(Δ t^2+h^2k+h^2kΔ t^2+h^2-dΔ t^2+ h^2k-1Δ t). See the Theorem 5.1, equation (5.18) in <cit.>. Assume the true solution _j,_j∈ L^∞(0,T;^2()). Then, there exists a constant C_* which is independent of h, Δ t, and γ such that for sufficiently small h and Δ t, the solutions of the Algorithm <ref> satisfies max_1≤ n≤ M(∇_j,h^n_L^3+∇_j,h^n_L^3+_j,h^n_∞+_j,h^n_∞) ≤ C_*,∀ j=1,2,⋯,N_sc. Using triangle inequality, we write ∇_j,h^n_L^3+_j,h^n_∞ ≤∇(_j,h^n-_j(t^n))_L^3+∇_j(t^n)_L^3+_j,h^n-_j(t^n)_∞+_j(t^n)_∞. Apply Sobolev embedding theorem on the first two terms, and Agmon’s <cit.> inequality on the last two terms in the right-hand-side of (<ref>), to provide ∇_j,h^n_L^3+_j,h^n_∞≤ C∇(_j,h^n-_j(t^n))^1/2∇^2(_j,h^n-_j(t^n))^1/2+C_j(t^n)_H^1^1/2_j(t^n)_H^2^1/2. Apply the regularity assumption of the true solution and discrete inverse inequality, to obtain ∇_j,h^n_L^3+_j,h^n_∞ ≤ Ch^-1/2∇(_j,h^n-_j(t^n))+C. Consider the (P_k,P_k-1) element for the pair (_j,h,q_j,h), and use the error bounds in (<ref>), gives ∇_j,h^n_L^3+_j,h^n_∞ ≤ Ch^-1/2(Δ t^1/2+h^k/Δ t^1/2+h^kΔ t^1/2+h^1-d/2Δ t^1/2+h^k-1/2)+C. Choose Δ t so that Δ t^1/2/h^1/2≤1/C, h^k-1/2/Δ t^1/2≤1/C, h^k-1/2Δ t^1/2≤1/C,h^1-d/2Δ t^1/2≤1/C and h^k-1≤1/C, which gives ∇_j,h^n_L^3+_j,h^n_∞≤ 4+C,with the time-step restrictionsO(h^2k-1)≤Δ t≤ O(h^d-1). Similarly, we can show∇_j,h^n_L^3+_j,h^n_∞≤ 4+C. Therefore, C_*:=8+C completes the proof. § EFFICIENT SCM-SPP-SMHD SCHEME FOR UQ OF SMHD FLOW PROBLEMS In this section, we present the proposed efficient, fully discrete, and stable penalty-projection-based decoupled time-stepping algorithm SCM-SPP-SMHD,which combines the SCM for the UQ of SMHD flow problems. We state and prove the unconditional stability theorem and provide an error analysis which shows that asγ→∞, the outcomes of the SCM-SPP-SMHD scheme converge to the outcomes of the Coupled-SMHD in Algorithm <ref>. 
The SCM-SPP-SMHD computes the solution in four steps. The scheme is designed in a technical way so that at each of these steps, the system matrix remains the same for all realizations, which makes it efficient in saving a huge computational time and computer memory. We describe thescheme below: Since _h⊂_h, we can choose _h=_h in (<ref>), _h=_h in (<ref>) and combine them with equations (<ref>) and (<ref>), respectively, to get ( _j,h^n+1-_j,h^n/Δ t, _h)+b(<_h>^n, _j,h^n+1,_h)+(ν+ν_m/2∇_j,h^n+1,∇_h)+γ(∇·_j,h^n+1,∇·_h)+(2ν_T(^'_h,t^n)∇_j,h^n+1,∇_h)-(_j,h^n,∇·_h)= (_1,j(t^n+1),_h)-b(_j,h^'n, _j,h^n,_h)-(ν_j-ν_m,j/2∇_j,h^n,∇_h)-(ν_j^'+ν_m,j^'/2∇_j,h^n,∇_h), and ( _j,h^n+1-_j,h^n/Δ t,_h)+b(<_h>^n, _j,h^n+1,_h)+(ν+ν_m/2∇_j,h^n+1,∇_h)+γ(∇·_j,h^n+1,∇·_h)+(2ν_T(^'_h,t^n)∇_j,h^n+1,∇_h)-(_j,h^n,∇·_h)= (_2,j(t^n+1),_h)-b(_j,h^'n, _j,h^n,_h)-(ν_j-ν_m,j/2∇_j,h^n,∇_h)-(ν_j^'+ν_m,j^'/2∇_j,h^n,∇_h). §.§ Stability Analysis We now prove stability and well-posedness for the Algorithm <ref>. (Unconditional Stability) Let (_j,h^n+1,_j,h^n+1,_j,h^n+1,_j,h^n+1) be the solution of Algorithm <ref> and f_1,j,f_2,j∈ L^2(0,T;^-1()), and _j,h^0, _j,h^0 ∈^1() for j=1,2,⋯, N_sc. Then for all Δ t>0, if α_j>0, and μ>C/2Δ tα_j, we have the following stability bound: _j,h^M^2 +_j,h^M^2+ν_min+ν_m,min/2Δ t(∇_j,h^M^2+∇_j,h^M^2)+2γΔ t∑_n=0^M-1(∇·_j,h^n+1^2+∇·_j,h^n+1^2)≤_j,h^0^2+_j,h^0^2+ν_min+ν_m,min/2Δ t(∇_j,h^0^2+∇_j,h^0^2)+2Δ t/α_j∑_n=0^M-1(_1,j(t^n+1)_-1^2+_2,j(t^n+1)_-1^2). Taking _h=_j,h^n+1 in (<ref>) and _h=_j,h^n+1 in (<ref>), to obtain (_j,h^n+1-_j,h^n/Δ t, _j,h^n+1)+1/2(ν+ν_m)^1/2∇_j,h^n+1^2+γ∇·_j,h^n+1^2 +(2ν_T(^'_h,t^n)∇_j,h^n+1,∇_j,h^n+1)=(_1,j(t^n+1),_j,h^n+1) -(_j,h^'n·∇_j,h^n,_j,h^n+1)-(ν_j-ν_m,j/2∇_j,h^n,∇_j,h^n+1)-(ν_j^'+ν_m,j^'/2∇_j,h^n,∇_j,h^n+1), and (_j,h^n+1-_j,h^n/Δ t,_j,h^n+1)+1/2(ν+ν_m)^1/2∇_j,h^n+1^2+γ∇·_j,h^n+1^2 +(2ν_T(^'_h,t^n)∇_j,h^n+1,∇_j,h^n+1)=(_2,j(t^n+1),_j,h^n+1) -(_j,h^'n·∇_j,h^n,_j,h^n+1)-(ν_j-ν_m,j/2∇_j,h^n,∇_j,h^n+1)-(ν_j^'+ν_m,j^'/2∇_j,h^n,∇_j,h^n+1). Using polarization identity, (<ref>) and (2ν_T(^'_h,t^n)∇_j,h^n+1,∇_j,h^n+1)=2μΔ tl^n_,h∇_j,h^n+1^2, we get 1/2Δ t(_j,h^n+1^2-_j,h^n^2+_j,h^n+1-_j,h^n^2)+1/2(ν+ν_m)^1/2∇_j,h^n+1^2+γ∇·_j,h^n+1^2 +2μΔ tl^n_,h∇_j,h^n+1^2=(_1,j(t^n+1),_j,h^n+1)-b(_j,h^'n, _j,h^n,_j,h^n+1) -(ν_j-ν_m,j/2∇_j,h^n,∇_j,h^n+1)-(ν_j^'+ν_m,j^'/2∇_j,h^n,∇_j,h^n+1), and 1/2Δ t(_j,h^n+1^2-_j,h^n^2+_j,h^n+1-_j,h^n^2)+1/2(ν+ν_m)^1/2∇_j,h^n+1^2+γ∇·_j,h^n+1^2 +2μΔ tl^n_,h∇_j,h^n+1^2=(_2,j(t^n+1),_j,h^n+1)-b(_j,h^'n, _j,h^n,_j,h^n+1) -(ν_j-ν_m,j/2∇_j,h^n,∇_j,h^n+1)-(ν_j^'+ν_m,j^'/2∇_j,h^n,∇_j,h^n+1). Adding (<ref>) and (<ref>), using ·∇≤√(2)||∇, Cauchy-Schwarz's, Poincaré inequality, (<ref>), and Young's inequality in (_j,h^'n·∇_j,h^n,_j,h^n+1) =-(_j,h^'n·∇_j,h^n+1, _j,h^n)≤_j,h^'n·∇_j,h^n+1_j,h^n≤√(2)|_j,h^'n|∇_j,h^n+1_j,h^n≤ Cl^n_,h∇_j,h^n+1∇_j,h^n≤C/α_jl^n_,h∇_j,h^n+1^2+α_j/4∇_j,h^n^2, and then applying the Cauchy-Schwarz inequality to the forcing term and Hölder's inequalityto the last two terms in the right-hand-side, reduces to 1/2Δ t(_j,h^n+1^2-_j,h^n^2+_j,h^n+1-_j,h^n^2+_j,h^n+1^2-_j,h^n^2+_j,h^n+1-_j,h^n^2) +ν_min+ν_m,min/2(∇_j,h^n+1^2+∇_j,h^n+1^2)+γ(∇·_j,h^n+1^2+∇·_j,h^n+1^2) +(2μΔ t-C/α_j)(l^n_,h∇_j,h^n+1^2+l^n_,h∇_j,h^n+1^2) ≤α_j/4∇_j,h^n^2+α_j/4∇_j,h^n^2+_1,j(t^n+1)_-1∇_j,h^n+1+_2,j(t^n+1)_-1∇_j,h^n+1 +ν_j-ν_m,j_∞/2(∇_j,h^n∇_j,h^n+1+∇_j,h^n∇_j,h^n+1) +ν_j^'+ν_m,j^'_∞/2(∇_j,h^n∇_j,h^n+1+∇_j,h^n∇_j,h^n+1). 
Using Young's inequality and reducing, we have 1/2Δ t(_j,h^n+1^2-_j,h^n^2+_j,h^n+1-_j,h^n^2+_j,h^n+1^2-_j,h^n^2+_j,h^n+1-_j,h^n^2) +ν_min+ν_m,min/4(∇_j,h^n+1^2+∇_j,h^n+1^2)+γ(∇·_j,h^n+1^2+∇·_j,h^n+1^2) +(2μΔ t-C/α_j)(l^n_,h∇_j,h^n+1^2+l^n_,h∇_j,h^n+1^2) ≤1/α_j(_1,j(t^n+1)_-1^2+_2,j(t^n+1)_-1^2)+ν_min+ν_m,min/4(∇_j,h^n^2+∇_j,h^n^2). Assuming μ>C/2Δ tα_j, and dropping non-negative terms from the left-hand-side, this reduces to 1/2Δ t(_j,h^n+1^2-_j,h^n^2+_j,h^n+1^2-_j,h^n^2) +ν_min+ν_m,min/4(∇_j,h^n+1^2-∇_j,h^n^2+∇_j,h^n+1^2-∇_j,h^n^2) +γ(∇·_j,h^n+1^2+∇·_j,h^n+1^2)≤1/α_j(_1,j(t^n+1)_-1^2+_2,j(t^n+1)_-1^2). Now choose _h=_j,h^n+1 in (<ref>), ζ_h=_j,h^n+1 in (<ref>) and_h=_j,h^n+1 in (<ref>), η_h=_j,h^n+1 in (<ref>). Then apply Cauchy-Schwarz and Young’s inequalities, to obtain _j,h^n+1^2≤_j,h^n+1^2, and  _j,h^n+1^2≤_j,h^n+1^2, for all n=0,1,2,⋯,M-1. Plugging these estimates into (<ref>), results in 1/2Δ t(_j,h^n+1^2-_j,h^n^2+_j,h^n+1^2-_j,h^n^2) +ν_min+ν_m,min/4(∇_j,h^n+1^2-∇_j,h^n^2+∇_j,h^n+1^2-∇_j,h^n^2) +γ(∇·_j,h^n+1^2+∇·_j,h^n+1^2)≤1/α_j(_1,j(t^n+1)_-1^2+_2,j(t^n+1)_-1^2). Multiplying both sides by 2Δ t and summing over the time steps, completes the proof. We now prove the Algorithm <ref> converges to Algorithm <ref> as γ→∞. Thus, we need to define the space _h:=_h^⊥⊂_h to be the orthogonal complement of _h with respect to the ^1() norm. Let the finite element pair (_h,Q_h)⊂(,Q) satisfy the inf-sup condition (<ref>) and the divergence-free property, i.e., ∇·_h⊂ Q_h. Then there exists a constant C_R independent of h such that∇_h≤ C_R∇·_h,∀_h∈_h. See <cit.> We assume there exists a constant C_* which is independent of h, and Δ t, such that for sufficiently small h for a fixed mesh and fixed Δ t as γ→∞, the solution of the Algorithm <ref> satisfies max_1≤ n≤ M{_j,h^n_∞,_j,h^n_∞} ≤ C_*,∀ j=1,2,⋯,N_sc. The Assumption <ref> is proved later in Lemma <ref>. The idea of utilizing the Assumption <ref> in the following convergence anaysis is taken from the finite element analysis of reaction-diffusion equation in <cit.>. Let (_j,h^n+1, _j,h^n+1,q_j,h^n+1), and (_j,h^n+1, _j,h^n+1,_j,h^n+1) for j=1,2,⋯,N_sc, are the solutions to the Algorithm <ref>, and Algorithm <ref>, respectively, for n=0,1,⋯,M-1. We then have Δ t∑_n=1^M(∇<_h>^n-∇<_h>^n^2+∇<_h>^n-∇<_h>^n^2) ≤CC_R^2/γ^2(1/α_min^3Δ t+1/α_minΔ t+Δ t/α_min+1) exp (CC_*^2/α_min+CΔ t/h^3α_min) ×Δ t∑_n=0^M-1∑_j=1^J(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2). The above theorem states the first order convergence of the penalty-projection algorithm to the Algorithm <ref> as γ→∞ for a fixed mesh and time-step size. Denote _j^n+1:=_j,h^n+1-_j,h^n+1, and _j^n+1:=_j,h^n+1-_j,h^n+1 and use the following H^1-orthogonal decomposition of the errors: _j^n+1:=_j,0^n+1+_j,^n+1,and _j^n+1:=_j,0^n+1+_j,^n+1, with _j,0^n+1,_j,0^n+1∈_h, and _j,^n+1,_j,^n+1∈_h, for n=0,1,⋯,M-1. Step 1: Estimate of _j,^n+1,and _j,^n+1: Subtracting the equation (<ref>) from (<ref>) and (<ref>) from (<ref>) produces 1/Δ t(_j^n+1 -_j^n,_h)+(ν+ν_m/2∇_j^n+1,∇_h)+γ(∇·_j,^n+1,∇·_h)+b(_h^n,^n+1_j,_h)+b(^n,_j,h^n+1,_h)-(q_j,h^n+1-_j,h^n,∇·_h)+2μΔ t((l_,h^n)^2∇_j^n+1,∇_h)+2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_h)=-b(^'n_j,h,^n_j,_h)-b(^'n_j,_j,h^n,_h)-(ν_j-ν_m,j/2∇_j^n,∇_h)-(ν_j^'+ν_m,j^'/2∇_j^n,∇_h), and 1/Δ t(_j^n+1 -_j^n,_h)+(ν+ν_m/2∇_j^n+1,∇_h)+γ(∇·_j,^n+1,∇·_h)+b(_h^n,^n+1_j,_h)+b(^n,_j,h^n+1,_h)-(r_j,h^n+1-_j,h^n,∇·_h)+2μΔ t((l_,h^n)^2∇_j^n+1,∇_h)+2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_h)=-b(^'n_j,h,^n_j,_h)-b(^'n_j,_j,h^n,_h)-(ν_j-ν_m,j/2∇_j^n,∇_h)-(ν_j^'+ν_m,j^'/2∇_j^n,∇_h). 
Take _h=_j^n+1 in (<ref>), and _h=_j^n+1 in (<ref>), which yield b(_h^n,^n+1_j,_h)=0,and b(_h^n,^n+1_j,_h)=0, and use polarization identity to get 1/2Δ t(_j^n+1^2-_j^n^2+_j^n+1-_j^n^2)+1/2(ν+ν_m)^1/2∇_j^n+1^2+γ∇·_j,^n+1^2 +b(^n,_j,h^n+1,_j^n+1)-(q_j,h^n+1-_j,h^n,∇·_j,^n+1)+2μΔ tl_,h^n∇_j^n+1^2 +2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j^n+1)=-b(^'n_j,h,^n_j,_j^n+1) -b(^'n_j,_j,h^n,_j^n+1)-(ν_j-ν_m,j/2∇_j^n,∇_j^n+1)-(ν_j^'+ν_m,j^'/2∇_j^n,∇_j^n+1), and 1/2Δ t(_j^n+1^2-_j^n^2+_j^n+1-_j^n^2)+1/2(ν+ν_m)^1/2∇_j^n+1^2+γ∇·_j,^n+1^2 +b(^n,_j,h^n+1,_j^n+1)-(r_j,h^n+1-_j,h^n,∇·_j,^n+1)+2μΔ tl_,h^n∇_j^n+1^2 +2μΔ t({(l_,h^n)^2-(l_, h^n)^2}∇_j,h^n+1,∇_j^n+1)=-b(^'n_j,h,^n_j,_j^n+1) -b(^'n_j,_j,h^n,_j^n+1)-(ν_j-ν_m,j/2∇_j^n,∇_j^n+1)-(ν_j^'+ν_m,j^'/2∇_j^n,∇_j^n+1). Now, we find the bound of the terms in (<ref>) first.Rearranging and applying Cauchy-Schwarz inequality, (<ref>), and Young’s inequality in the following nonlinear term yields -b(^'n_j,h,^n_j,_j^n+1) = -b(^'n_j,h,_j^n+1,_j^n+1-^n_j)≤^'n_j,h·∇_j^n+1_j^n+1-^n_j≤√(2)l_,h^n∇_j^n+1_j^n+1-^n_j≤ 2Δ tl_,h^n∇_j^n+1^2+1/4Δ t_j^n+1-^n_j^2. Applying Hölder's and Young’s inequalities, we have |(ν_j-ν_m,j/2∇_j^n,∇_j^n+1)| ≤ν_j-ν_m,j_∞/4(∇_j^n^2+∇_j^n+1^2), |(ν_j^'+ν_m,j^'/2∇_j^n,∇_j^n+1)| ≤ν_j^'+ν_m,j^'_∞/4(∇_j^n^2+∇_j^n+1^2). Applying Cauchy-Schwarz and Young’s inequalities, we have |(q_j,h^n+1-_j,h^n,∇·_j,^n+1)| ≤1/2γq_j,h^n+1-_j,h^n^2+γ/2∇·_j,^n+1^2. Using Hölder's inequality, estimate in Lemma <ref>,Sobolev embedding theorem, Poincaré, and Young's inequalities provides |b(^n,_j,h^n+1,_j^n+1)| ≤^n∇_j,h^n+1_L^3_j^n+1_L^6≤ CC_*^n∇_j^n+1≤α_j/12∇_j^n+1^2+CC_*^2/α_j^n^2, |b(^'n_j,_j,h^n,_j^n+1)| ≤^'n_j∇_j,h^n_L^3_j^n+1_L^6≤ CC_* ^'n_j∇_j^n+1≤α_j/12∇_j^n+1^2+CC_*^2/α_j^'n_j^2. For the third non-linear term, we apply Hölder’s and triangle inequalities, the stability estimate of Algorithm <ref>, uniform boundedness in Lemma <ref> and in Assumption <ref>, Agmon’s <cit.>, discrete inverse, and Young's inequalities,to get 2μΔ t({(l_,h^n)^2- (l_,h^n)^2}∇_j,h^n+1,∇_j^n+1)≤ 2μΔ t(l_,h^n)^2-(l_,h^n)^2_∞∇_j,h^n+1∇_j^n+1=2μΔ t ∑_i=1^J(|_i,h^'n|^2-|_i,h^'n|^2)_∞∇_j,h^n+1∇_j^n+1≤ 2μΔ t ∑_i=1^J(_i,h^'n-_i,h^'n)(_i,h^'n+_i,h^'n)_∞∇_j,h^n+1∇_j^n+1≤ 2μΔ t ∑_i=1^J_i,h^'n-_i,h^'n_∞_i,h^'n+_i,h^'n_∞∇_j,h^n+1∇_j^n+1≤ 2μΔ t ∑_i=1^J(_i,h^'n_∞+_i,h^'n_∞)^2∇_j,h^n+1∇_j^n+1≤CΔ t^1/2∇_j^n+1≤α_j/12∇_j^n+1^2+CΔ t/α_j. 2μΔ t({(l_,h^n)^2- (l_,h^n)^2}∇_j,h^n+1,∇_j^n+1)≤ 2μΔ t(l_,h^n)^2-(l_,h^n)^2_∞∇_j,h^n+1∇_j^n+1=2μΔ t ∑_i=1^N_sc(|_i,h^'n|^2-|_i,h^'n|^2)_∞∇_j,h^n+1∇_j^n+1≤ 2μΔ t ∑_i=1^N_sc(_i,h^'n-_i,h^'n)·(_i,h^'n+_i,h^'n)_∞∇_j,h^n+1∇_j^n+1≤ 2μΔ t ∑_i=1^N_sc_i,h^'n-_i,h^'n_∞_i,h^'n+_i,h^'n_∞∇_j,h^n+1∇_j^n+1≤ CΔ t^1/2∑_i=1^N_sc_i^'n_∞(_i,h^'n_∞+_i,h^'n_∞)∇_j^n+1≤ CΔ t^1/2∑_i=1^N_sc_i^n_∞∇_j^n+1≤ CΔ t^1/2h^-3/2∑_i=1^N_sc_i^n∇_j^n+1≤α_j/12∇_j^n+1^2+CΔ t/h^3α_j∑_i=1^N_sc_i^n^2. Using the above estimates in (<ref>), choosing μ>max{ 1,1/2Δ t}, dropping non-negative terms and reducing produces 1/2Δ t(_j^n+1^2-_j^n^2)+ν_min+ν_m,min/4∇_j^n+1^2+γ/2∇·_j,^n+1^2≤ν_j-ν_m,j_∞/4∇_j^n^2 +ν_j^'+ν_m,j^'_∞/4∇_j^n^2+1/2γq_j,h^n+1-_j,h^n^2+CC_*^2/α_j(^n^2+^'n_j^2)+CΔ t/h^3α_j∑_i=1^N_sc_i^n^2. Now, apply similar estimates to the right-hand-side terms of (<ref>), to produce 1/2Δ t(_j^n+1^2-_j^n^2)+ν_min+ν_m,min/4∇_j^n+1^2+γ/2∇·_j,^n+1^2≤ν_j-ν_m,j_∞/4∇_j^n^2 +ν_j^'+ν_m,j^'_∞/4∇_j^n^2+1/2γr_j,h^n+1-_j,h^n^2+CC_*^2/α_j(^n^2+^'n_j^2)+CΔ t/h^3α_j∑_i=1^N_sc_i^n^2. 
Add (<ref>) and (<ref>), and rearrange 1/2Δ t(_j^n+1^2-_j^n^2+_j^n+1^2-_j^n^2)+ν_min+ν_m,min/4(∇_j^n+1^2-∇_j^n^2+∇_j^n+1^2-∇_j^n^2)+α_j/4(∇_j^n^2+∇_j^n^2)+γ/2(∇·_j,^n+1^2+∇·_j,^n+1^2)≤1/2γ(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2)+CC_*^2/α_j(^n^2+^'n_j^2+^n^2+^'n_j^2)+CΔ t/h^3α_j∑_i=1^N_sc(_i^n^2+_i^n^2). Now, multiply both sides by 2Δ t, and sum over the time steps n=0,1,⋯,M-1, to get _j^M^2+_j^M^2+ν_min+ν_m,min/2Δ t(∇_j^M^2+∇_j^M^2)+α_j/2Δ t∑_n=0^M-1(∇_j^n^2+∇_j^n^2) +Δ t∑_n=0^M-1γ(∇·_j,^n+1^2+∇·_j,^n+1^2)≤Δ t/γ∑_n=0^M-1(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2) +CC_*^2/α_jΔ t∑_n=0^M-1(^n^2+^'n_j^2+^n^2+^'n_j^2)+CΔ t^2/h^3α_j∑_n=1^M-1∑_i=1^N_sc(_i^n^2+_i^n^2). Using triangle, Cauchy-Schwarz, and Young's inequalities, to get _j^M^2+_j^M^2+α_jΔ t/2∑_n=1^M(∇_j^n^2+∇_j^n^2)+Δ t∑_n=1^Mγ(∇·_j,^n^2+∇·_j,^n^2) ≤Δ t/γ∑_n=0^M-1(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2)+(CC_*^2/α_jΔ t+CΔ t^2/h^3α_j)∑_n=1^M-1∑_j=1^N_sc(_j^n^2+_j^n^2). Summing over j=1,2,⋯, N_sc, we have ∑_j=1^N_sc_j^M^2+∑_j=1^N_sc_j^M^2+α_minΔ t/2∑_n=1^M∑_j=1^N_sc(∇_j^n^2+∇_j^n^2) +γΔ t∑_n=1^M∑_j=1^N_sc(∇·_j,^n^2+∇·_j,^n^2)≤Δ t/γ∑_n=0^M-1∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2) +Δ t∑_n=1^M-1(CC_*^2/α_min+CΔ t/h^3α_min)∑_j=1^N_sc(_j^n^2+_j^n^2). Apply discrete Grönwall inequality given in Lemma <ref>, to get ∑_j=1^N_sc _j^M^2+∑_j=1^N_sc_j^M^2+α_minΔ t/2∑_n=1^M∑_j=1^N_sc(∇_j^n^2+∇_j^n^2)+γΔ t∑_n=1^M∑_j=1^N_sc(∇·_j,^n^2+∇·_j,^n^2)≤1/γ exp CT(C_*^2/α_min+Δ t/h^3α_min)Δ t∑_n=0^M-1∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2). Using Lemma <ref> with (<ref>) yields the following bound Δ t∑_n=1^M∑_j=1^N_sc(∇_j,^n^2+∇_j,^n^2)≤ C_R^2Δ t∑_n=1^M∑_j=1^N_sc(∇·_j,^n^2+∇·_j,^n^2) ≤C_R^2/γ^2 expCC_*^2/α_min+CΔ t/h^3α_minΔ t∑_n=0^M-1∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2). Step 2: Estimate of _j,0^n, and _j,0^n: To find a bound on Δ t∑_n=1^M∑_j=1^N_sc(∇_j,0^n^2+∇_j,0^n^2), take _h=_j,0^n+1 in (<ref>), and _h=_j,0^n+1 in (<ref>), which yield 1/Δ t(_j^n+1-_j^n,_j,0^n+1)+1/2(ν+ν_m)^1/2∇_j,0^n+1^2=-b(_h^n,^n+1_j,,_j,0^n+1)-b(^n,_j,h^n+1,_j,0^n+1)-2μΔ t((l_,h^n)^2∇_j^n+1,∇_j,0^n+1)-2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j,0^n+1)-b(^'n_j,h,^n_j,_j,0^n+1)-b(^'n_j,_j,h^n,_j,0^n+1)-(ν_j-ν_m,j/2∇_j,0^n,∇_j,0^n+1)-(ν_j^'+ν_m,j^'/2∇_j,0^n,∇_j,0^n+1), and 1/Δ t (_j^n+1-_j^n,_j,0^n+1)+1/2(ν+ν_m)^1/2∇_j,0^n+1^2=-b(_h^n,^n+1_j,,_j,0^n+1)-b(^n,_j,h^n+1,_j,0^n+1)-2μΔ t((l_,h^n)^2∇_j^n+1,∇_j,0^n+1)-2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j,0^n+1)-b(^'n_j,h,^n_j,_j,0^n+1)-b(^'n_j,_j,h^n,_j,0^n+1)-(ν_j-ν_m,j/2∇_j,0^n,∇_j,0^n+1)-(ν_j^'+ν_m,j^'/2∇_j,0^n,∇_j,0^n+1). Apply the non-linear bound given in (<ref>), and Hölder's inequality for the first, and second non-linear terms of (<ref>), respectively, to obtain 1/Δ t(_j^n+1-_j^n,_j,0^n+1)+ν_min+ν_m,min/2∇_j,0^n+1^2+2μΔ tl_,h^n∇_j,0^n+1^2≤ C∇_h^n∇^n+1_j,∇_j,0^n+1+C^n∇_j,h^n+1_L^3∇_j,0^n+1-2μΔ t((l_,h^n)^2∇_j,^n+1,∇_j,0^n+1)-2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j,0^n+1)-b(^'n_j,h,^n_j,_j,0^n+1)-b(^'n_j,_j,h^n,_j,0^n+1)-(ν_j-ν_m,j/2∇_j,0^n,∇_j,0^n+1)-(ν_j^'+ν_m,j^'/2∇_j,0^n,∇_j,0^n+1). b(^'n_j,h,^n_j,_j,0^n+1)=b(^'n_j,h,^n_j,_j^n+1)-b(^'n_j,h,^n_j,_j,^n+1)=-b(^'n_j,h,_j,0^n+1,^n_j) Using Cauchy-Schwarz inequality, (<ref>), and Young's inequalities yields -b(^'n_j,h,^n_j,_j,0^n+1) =b(^'n_j,h,_j,0^n+1,^n_j)≤^'n_j,h·∇_j,0^n+1^n_j≤√(2)|^'n_j,h|∇_j,0^n+1^n_j≤√(2)l_,h^n∇_j,0^n+1^n_j≤l_,h^n∇_j,0^n+1^2+1/2^n_j^2. 
Using the above bound, triangle inequality, stability estimate, Assumption <ref>, and finally rearranging, we have 1/Δ t(_j^n+1-_j^n,_j,0^n+1)+ν_min+ν_m,min/2∇_j,0^n+1^2+(2μΔ t-1)l_,h^n∇_j,0^n+1^2 ≤C/(ν_min+ν_m,min)^1/2Δ t^1/2∇^n+1_j,∇_j,0^n+1+CC_*^n∇_j,0^n+1 +2μΔ t|((l_,h^n)^2∇_j,^n+1,∇_j,0^n+1)|+2μΔ t|({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j,0^n+1)| +1/2^n_j^2+|b(^'n_j,_j,h^n,_j,0^n+1)|+|(ν_j-ν_m,j/2∇_j,0^n,∇_j,0^n+1)|+|(ν_j^'+ν_m,j^'/2∇_j,0^n,∇_j,0^n+1)|. To evaluate the discrete time-derivative term, we use polarization identity, Cauchy-Schwarz, Young's, and Poincaré's inequalities 1/Δ t(_j^n+1-_j^n,_j,0^n+1) =1/Δ t(_j^n+1-_j^n,_j^n+1-_j,^n+1)=1/2Δ t(_j^n+1-_j^n^2+_j^n+1^2-_j^n^2)-1/Δ t(_j^n+1-_j^n,_j,^n+1)≥1/2Δ t(_j^n+1^2-_j^n^2)-C/Δ t∇_j,^n+1^2. Plugging the above estimate into (<ref>) and using Hölder's, and Young's inequalities, yields 1/2Δ t(_j^n+1^2-_j^n^2)+ν_min+ν_m,min/2∇_j,0^n+1^2+(2μΔ t-1)l_,h^n∇_j,0^n+1^2 ≤C/Δ t(1/α_j^2+1)∇^n+1_j,^2+CC_*^2/α_j^n^2+2μΔ t|((l_,h^n)^2∇_j,^n+1,∇_j,0^n+1)| +2μΔ t|({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j,0^n+1)|+α_j/12∇_j,0^n+1^2+1/2^n_j^2+|b(^'n_j,_j,h^n,_j,0^n+1)| +ν_j-ν_m,j_∞/4(∇_j,0^n^2+∇_j,0^n+1^2)+ν_j^'+ν_m,j^'_∞/4(∇_j,0^n^2+∇_j,0^n+1^2). We now find the bounds for the non-linear terms. For the first non-linear term, we use Cauchy-Schwarz, andYoung's inequalities, uniform boundedness in Assumption <ref>, and the stability estimate, to obtain 2μΔ t|((l_,h^n)^2∇_j,^n+1,∇_j,0^n+1)| ≤ 2μΔ tl_,h^n∇_j,^n+1l_,h^n∇_j,0^n+1≤μΔ tl_,h^n∇_j,^n+1^2+μΔ tl_,h^n∇_j,0^n+1^2≤μΔ tl_,h^n_∞^2∇_j,^n+1^2+μΔ tl_,h^n∇_j,0^n+1^2≤ CΔ t∇_j,^n+1^2+μΔ tl_,h^n∇_j,0^n+1^2. For the second non-linear term, we follow the same treatment as in (<ref>), and get 2μΔ t({(l_,h^n)^2-(l_,h^n)^2}∇_j,h^n+1,∇_j,0^n+1)≤α_j/12∇_j,0^n+1^2+CΔ t/h^3α_j∑_i=1^N_sc_i^n^2. For the last non-linear term,we apply Hölder's inequality, estimate in Lemma <ref>, Sobolev embedding theorem, Poincaréand Young's inequalities to get |b(^'n_j,_j,h^n,_j,0^n+1)| ≤^'n_j∇_j,h^n_L^3_j,0^n+1_L^6≤ CC_*^'n_j∇_j,0^n+1≤α_j/12∇_j,0^n+1^2+CC_*^2/α_j^'n_j^2. Use the above estimates, assume μ>max{1,1/2Δ t} to drop non-negative terms from left-hand-side, and reducing, the equation (<ref>) becomes 1/2Δ t(_j^n+1^2-_j^n^2)+ν_min+ν_m,min/4∇_j,0^n+1^2 ≤C/Δ t(1/α_j^2+1+Δ t^2)∇^n+1_j,^2+CC_*^2/α_j(^n^2+^'n_j^2) +CΔ t/h^3α_j∑_i=1^N_sc_i^n^2+1/2^n_j^2+ν_j-ν_m,j_∞/4∇_j,0^n^2+ν_j^'+ν_m,j^'_∞/4∇_j,0^n^2. Apply similar techniques to (<ref>), yields 1/2Δ t(_j^n+1^2-_j^n^2)+ν_min+ν_m,min/4∇_j,0^n+1^2 ≤C/Δ t(1/α_j^2+1+Δ t^2)∇^n+1_j,^2+CC_*^2/α_j(^n^2+^'n_j^2) +CΔ t/h^3α_j∑_i=1^N_sc_i^n^2+1/2^n_j^2+ν_j-ν_m,j_∞/4∇_j,0^n^2+ν_j^'+ν_m,j^'_∞/4∇_j,0^n^2. Add equations (<ref>), and (<ref>), and use triangle inequality, to get 1/2Δ t(_j^n+1^2-_j^n^2+_j^n+1^2-_j^n^2)+ν_min+ν_m,min/4(∇_j,0^n+1^2+∇_j,0^n+1^2) ≤C/Δ t(1/α_j^2+1+Δ t^2)∇^n+1_j,^2+∇^n+1_j,^2+(CC_*^2/α_j+CΔ t/h^3α_j+1/2)∑_j=1^N_sc(_j^n^2+_j^n^2) +ν_j-ν_m,j_∞+ν_j^'+ν_m,j^'_∞/4(∇_j,0^n^2+∇_j,0^n^2). Rearranging 1/2Δ t(_j^n+1^2-_j^n^2+_j^n+1^2-_j^n^2) +ν_min+ν_m,min/4(∇_j,0^n+1^2-∇_j,0^n^2+∇_j,0^n+1^2-∇_j,0^n^2) +α_j/4(∇_j,0^n^2+∇_j,0^n^2)≤C/Δ t(1/α_j^2+1+Δ t^2)∇^n+1_j,^2+∇^n+1_j,^2 +(CC_*^2/α_j+CΔ t/h^3α_j+1/2)∑_j=1^N_sc(_j^n^2+_j^n^2). Multiply both sides by 2Δ t, and summing over the time-step n=0,1,⋯,M-1, results in _j^M^2+_j^M^2 +ν_min+ν_m,min/2Δ t(∇_j,0^M^2+∇_j,0^M^2)+α_j/2Δ t∑_n=1^M-1(∇_j,0^n^2+∇_j,0^n^2)≤ C(1/α_j^2+1+Δ t^2)∑_n=1^M(∇^n_j,^2+∇^n_j,^2)+Δ t(CC_*^2/α_j+CΔ t/h^3α_j+1)∑_n=1^M-1∑_j=1^N_sc(_j^n^2+_j^n^2). 
Now, simplifying, and summing over j=1,2,⋯,N_sc, we have ∑_j=1^N_sc(_j^M^2+_j^M^2)+Δ t∑_n=1^Mα_min/2∑_j=1^N_sc(∇_j,0^n^2+∇_j,0^n^2) ≤∑_n=1^MC(1/α_min^2+1+Δ t^2)∑_j=1^N_sc(∇^n_j,^2+∇^n_j,^2) +Δ t∑_n=1^M-1(CC_*^2/α_min+CΔ t/h^3α_min+N_sc)∑_j=1^N_sc(_j^n^2+_j^n^2). Apply the version of the discrete Grönwall inequality given in Lemma <ref> ∑_j=1^N_sc(_j^M^2+_j^M^2)+α_min/2Δ t∑_n=1^M∑_j=1^N_sc(∇_j,0^n^2+∇_j,0^n^2) ≤ exp (CC_*^2/α_min+CΔ t/h^3α_min+N_sc*T)[ C(1/α_min^2+1+Δ t^2)∑_n=1^M∑_j=1^N_sc(∇^n_j,^2+∇^n_j,^2)], and use the estimate(<ref>) in (<ref>), to get Δ t∑_n=1^M∑_j=1^J(∇_j,0^n^2+∇_j,0^n^2) ≤C/α_min exp (CC_*^2/α_min+CΔ t/h^3α_min)[ (1/α_min^2+1+Δ t^2)∑_n=1^M∑_j=1^J(∇^n_j,^2+∇^n_j,^2)], Δ t∑_n=1^M∑_j=1^J(∇_j,^n^2+∇_j,^n^2)≤ C_R^2Δ t∑_n=1^M∑_j=1^J(∇·_j,^n^2+∇·_j,^n^2) ≤C_R^2/γ^2 expCC_*^2/α_min+CΔ t/h^3α_minΔ t∑_n=0^M-1∑_j=1^J(q_j,h^n+1-_j,h^n^2+λ_j,h^n+1-_j,h^n^2). Δ t∑_n=1^M∑_j=1^N_sc(∇_j,0^n^2+∇_j,0^n^2)≤CC_R^2/γ^2α_min exp (CC_*^2/α_min+CΔ t/h^3α_min) ×[ (1/α_min^2+1+Δ t^2)∑_n=0^M-1∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2)]. Using triangle and Young's inequalities Δ t∑_n=1^M(∇_0^n^2+∇_0^n^2)≤2Δ t/N^2_sc∑_n=1^M∑_j=1^N_sc(∇_j,0^n^2+∇_j,0^n^2) ≤CC_R^2/γ^2α_min exp (CC_*^2/α_min+CΔ t/h^3α_min) ×[ (1/α_min^2+1+Δ t^2)∑_n=0^M-1∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2)], and Δ t∑_n=1^M(∇_^n^2+∇_^n^2)≤2Δ t/N_sc^2∑_n=1^M∑_j=1^N_sc(∇_j,^n^2+∇_j,^n^2) ≤CC_R^2/γ^2 expCC_*^2/α_min+CΔ t/h^3α_minΔ t∑_n=0^M-1∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+r_j,h^n+1-_j,h^n^2). Finally, apply triangle and Young's inequalities on ∇_h^n-∇_h^n^2+∇_h^n-∇_h^n^2 to obtain the desire result. We assume ^n_j,h_∞, ^n_j,h_∞≤ K^*, 1≤ j≤ J for some positive constant K^*>0. Then, l_,h^n_∞=√(∑_j=1^J|_j,h^'n)|^2_∞ =_j^*,h^'n_∞ (for somej^*, 1≤ j^*≤ J)≤_j^*,h^n_∞+<_h>^n_∞≤ 2K^*. Hence, (l_,h^n)^2_∞=l_,h^n^2_∞. Similarly, (l_,h^n)^2_∞=l_,h^n^2_∞. We prove the following Lemma by strong mathematical induction. If γ→∞ then there exists a constant C_* which is independent of h, and Δ t, such that for sufficiently small h for a fixed mesh and fixed Δ t, the solution of the Algorithm <ref> satisfies max_0≤ n≤ M{_j,h^n_∞,_j,h^n_∞} ≤ C_*,∀ j=1,2,⋯,N_sc. Basic step: _j,h^0=I_h(_j^true(0,)), where I_h is an appropriate interpolation operator. Because of the regularity assumption of _j^true(0,), we have _j,h^0_∞≤ C_*, for some constant C_*>0. Inductive step: Assume for some L∈ℕ and L<M, _j,h^n_∞≤ C_* holds true for n=0,1,⋯,L. Then, using triangle inequality and Lemma <ref>, we have _j,h^L+1_∞≤_j,h^L+1-_j,h^L+1_∞+C_*. Using Agmon’s inequality <cit.>, and discrete inverse inequality, yields _j,h^L+1_∞≤ Ch^-3/2_j,h^L+1-_j,h^L+1+C_*. Next, using equation (<ref>) _j,h^L+1_∞≤ C_*+C/h^3/2γ^1/2 exp CT(C_*^2/α_min+Δ t/h^3α_min)Δ t∑_n=0^L∑_j=1^N_sc(q_j,h^n+1-_j,h^n^2+λ_j,h^n+1-_j,h^n^2)^1/2. For a fixed mesh, and timestep size, as γ→∞, yields _j,h^L+1_∞≤ C_*. Hence, by the principle of strong mathematical induction, _j,h^n_∞≤ C_* holds true for 0≤ n≤ M. Similarly, we can prove the uniform boundedness of ^n_j,h. § SCMS As SCMs, in this work, we consider sparse grid algorithm <cit.>, where for a given time t_n and a set of sample points {^j}_j=0^N_sc⊂, we approximate the exact solution of (<ref>)-(<ref>) by solving a discrete scheme. Then, for a basis {ϕ_l}_l=1^N_p of dimension N_p for the space L_ρ^2(), a discrete approximation is constructed with coefficients c_l(t_n,) of the form_h^sc(t_n,,)=∑_l=1^N_pc_l(t_n,)ϕ_l(), which is essentially an interpolant. 
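To make the interpolation viewpoint concrete, the sketch below is our own one-dimensional illustration, not tied to the flow solver: it builds the polynomial interpolant of a scalar model response ψ(y) at Clenshaw–Curtis (Chebyshev-extreme) points and uses it to approximate E[ψ] for a uniformly distributed parameter. In practice the same construction is carried out in several parameter dimensions through sparse grids, and the expectation reduces to the weighted sum over collocation points given in the next section.

```python
import numpy as np

# Scalar model response psi(y), standing in for a quantity of interest.
def psi(y):
    return np.exp(y) * np.sin(2.0 * y)

# One-dimensional Clenshaw-Curtis (Chebyshev-extreme) collocation points on [-1, 1].
N = 9
y_nodes = np.cos(np.pi * np.arange(N) / (N - 1))

# Interpolant of psi at the nodes, expanded in Chebyshev polynomials (degree N-1).
coef = np.polynomial.chebyshev.chebfit(y_nodes, psi(y_nodes), N - 1)

# E[psi] for y ~ U(-1, 1): average the interpolant over the parameter interval.
y_fine = np.linspace(-1.0, 1.0, 20001)
E_interp = np.polynomial.chebyshev.chebval(y_fine, coef).mean()
E_ref = psi(y_fine).mean()
print(f"collocation estimate {E_interp:.6f} vs. reference {E_ref:.6f}")
```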
In the sparse grid algorithm, we consider Leja and Clenshaw–Curtis points as the interpolation points that come with the associated weights {w^j}_j=1^N_sc. SCMs were developed for the UQ of the Quantity of Interest (QoI), ψ, which can be lift, drag, and energy. SCMs provide statistical information about QoI, that is,𝔼[ψ()]=∫_Γψ(,)ρ()dy≈∑_j=1^N_scw^jψ(,^j). § NUMERICAL EXPERIMENTS To test the proposed Algorithm <ref> (SCM-SPP-SMHD method) and the associated theory, in this section, we present the results of numerical experiments. For MHD simulations, it is crucial to enforce the discrete solenoidal constraint ∇·_j,h=0 strongly, otherwise, it can produce large errors in the solution <cit.>. Moreover, to have the divergence-free condition of the magnetic field at all times, the initial magnetic field must need to be zero. This is because the curl of the electric field is equal to and opposite of the time derivative of the magnetic flux density. Thus, it is popular to use pointwise divergence-free elements such as Scott-Vogelius (SV) elements on barycenter refined regular triangular meshes to enforce the divergence constraints <cit.>. However, using SV elements require higher degrees of freedom (dof) which is quite demanding. Throughout this numerical section, we will use (P_2,P_1^disc) SV element in the Coupled-SMHD method for the velocity-pressure and magnetic flux density-magnetic pressure variables and their outcomes will be considered as the benchmark solutions. Also, in the SCM-SPP-SMHD method, we will use (P_2, P_1) Taylor-hood (TH) element (which is weakly divergence-free and requires less dof) with large a γ. Both methods will be employed on a barycenter refined triangular mesh. In the first experiment, we verify the predicted convergence rates given in Theorem <ref> as γ varies and compute the spatial and temporal convergence rates with manufactured solutions. We implement the scheme on a channel flow over a step problem and a regularized lid-driven cavity problem in the second and third experiments, respectively. Finally, we examine the sparse grid algorithm as SCM in the lid-driven cavity problem. §.§ Convergence rate verification We will begin this experiment with =(x_1,x_2) and the following manufactured analytical functions, =([ cos x_2+(1+e^t)sin x_2; sin x_1+(1+e^t)cos x_1 ]), =([ cos x_2-(1+e^t)sin x_2; sin x_1-(1+e^t)cos x_1 ]),q =sin(x_1+x_2)(1+e^t),r=0. Clearly, ∇·=∇·=0. Next, introducing a perturbation parameter ϵ we introduce noise in the above analytical functions as below to create manufactured solutions _j(t,):=(1+k_jϵ),_j(t,):=(1+k_jϵ), q_j:=(1+k_jϵ)q,andr_j:=0, where k_j:=(-1)^j+14⌈ j/2⌉/N_sc, and j=1,2,⋯, N_sc, where N_sc=20. We consider the kinematic viscosity ν and magnetic diffusivity ν_m arecontinuous random variables with uniform distribution. In this experiment, we consider ν∼𝒰(0.0009, 0.0011) with E[ν]=0.001,ν∼𝒰(0.009, 0.011) with E[ν]=0.01, and ν_m∼𝒰(0.0009, 0.0011) with E[ν_m]=0.001. For each of the cases, we collect a i.i.d sample size of 20 which leads us to have two two-dimensional random samples. For a fixed j together with pair (ν_j,ν_m,j), and the analytical solution in (<ref>), we compute the forcing functions as below: _1,j=_j,t+_j·∇_j-ν_j+ν_m,j/2Δ_j-ν_j-ν_m,j/2Δ_j+∇ q_j, _2,j=_j,t+_j·∇_j-ν_j+ν_m,j/2Δ_j-ν_j-ν_m,j/2Δ_j+∇ r_j. We consider a domain =(0,1)^2, and _j,h^0=_j(0,) and _j,h^0=_j(0,) as the initial conditions for both algorithms. 
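The perturbed data used in this convergence test can be generated directly from the definition of k_j. The following sketch is our own illustration (with ϵ=0.01 and an arbitrary evaluation point); it reproduces the alternating pattern k_1=0.2, k_2=-0.2, k_3=0.4, ... and forms the perturbed initial velocities u_j(0,x)=(1+k_jϵ)u(0,x) from the manufactured solution above.

```python
import numpy as np

N_sc, eps = 20, 0.01
j = np.arange(1, N_sc + 1)
# k_j = (-1)^(j+1) * 4*ceil(j/2) / N_sc, as defined above
k = (-1.0) ** (j + 1) * 4.0 * np.ceil(j / 2.0) / N_sc

# Unperturbed manufactured velocity at t = 0 (so the factor 1 + e^t equals 2).
def u0(x1, x2):
    return np.array([np.cos(x2) + 2.0 * np.sin(x2),
                     np.sin(x1) + 2.0 * np.cos(x1)])

# Perturbed initial condition u_j(0, x) = (1 + k_j * eps) * u(0, x) at, e.g., x = (0.3, 0.7):
u0_j = (1.0 + k * eps)[:, None] * u0(0.3, 0.7)[None, :]
print(k[:4])          # 0.2, -0.2, 0.4, -0.4
print(u0_j.shape)     # (20, 2): one perturbed value per realization
```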
The boundary conditions for the Algorithm <ref> are considered as _j,h|_∂=_j in Step 1, and _j,h|_∂=_j in Step 2, whereas the boundary conditions for the Algorithm <ref> are considered as _j,h|_∂=_j in Step 1, ·|_∂=0 in Step 2, _j,h|_∂=_j in Step 3, and ·|_∂=0 in Step 4. We compute the ensemble average solutions (<_h>^n,<_h>^n), and (<_h,γ>^n,<_h,γ>^n) at t=t^n using the Algorithm <ref>, and the penalty-projection based Algorithm <ref>, respectively, and compare them by computing the difference between the two algorithms. §.§.§ Convergence with γ variesTo observe the convergence of the SCM-SPP-SMHD to the Coupled-SMHD scheme, for = or , we define <_h,γ^>:=<_h-_h,γ> and compute <_h,γ^>_2,1:=<_h-_h,γ>_L^2(0,T;H^1()^2). In Table <ref>, we represent the above errors and convergence rates as γ increases with fixed ϵ=0.01, T=1.0, Δ t=T/10, h=1/32, and two 2D samples {(ν_j,ν_m,j)∈[0.009,0.011]×[0.0009,0.0011]}, and {(ν_j,ν_m,j)∈[0.009,0.011]×[0.09,0.11]}. We observe a first-order convergence as the γ increases, which is in excellent agreement with the Theorem <ref>. To observe the spatial and temporal errors and their convergence rates, we define <_>:=<-_h> for = or , which are the difference between the outcomes of the SCM-SPP-SMHD scheme and the true analytical solutions stated above. To receive the temporal convergence, we use a fixed mesh width of h=1/64, end time T=1, vary timestep size as Δ t=T/4, T/8, T/16, T/32, andT/64, on the other hand, to get the spatial convergence, we use a small end time T=0.001, a fixed timestep size Δ t=T/8, vary mesh size as h=1/4,1/8,1/16,1/32, and 1/64. For both cases, we run the simulations using the proposed Algorithm <ref> varying the perturbation parameter ϵ (which introduces noise in the initial, and boundary conditions and forcing functions), and the two 2D random samples {(ν_j,ν_m,j)∈[0.0009,0.0011]×[0.0009,0.0011]} and {(ν_j,ν_m,j)∈[0.009,0.011]×[0.0009,0.0011]}. Then, we record the errors, compute the convergence rates, and present them in Tables <ref>-<ref>. In Table <ref>, we observe the first-order temporal convergence which is the optimal convergence rate of a first-order time-stepping algorithm. In Tables <ref>, we observe a second-order spatial convergence which is also consistent with the theory as we have used (P_2,P_1) element. §.§ SMHD channel flow past a unit step: A comparison between SCM-SPP-SMHD and Coupled-SMHD schemes We now implement the SCM-SPP-SMHD and Coupled-SMHD schemes in a 2D channel of electrically conducting fluid flow past a unit step under the influence of a magnetic field and compare their outcomes. The domain of the flow is a 30× 10 rectangular channel over a 1 × 1 step on the lower wall which is five units away from the inflow. At the inflow, we set _j=(1+k_jϵ)[ x_2(10-x_2)/25;0 ] and B_j=[ 0; 1 ], and the outflow condition uses a channel extension of 10 units, and at the end of the extension, we set outflow velocity and magnetic field equal to their counterpart in the inflow. We consider the initial conditions as_j^0=(1+k_jϵ)[ x_2(10-x_2)/25;0 ],and_j^0=[ 0; 0 ]. For the Coupled-SMHD scheme, on the walls, we implement no-slip boundary conditions _j|_Γ_1=[ 0; 0 ] and _j|_Γ_1=(1+k_jϵ)[ 0; 1 ]. For the SCM-SPP-SMHD scheme, since the velocity and pressure-like variables appear in different steps, in Step 1, and Step 3, we consider _j|_Γ_1=[ 0; 0 ] and _j|_Γ_1=(1+k_jϵ)[ 0; 1 ] on the walls, and in Step 3, and Step 4, we define the following space for the Elsässer variables , and : _h:={_h∈𝒫_k(τ_h)^d∩^1 ()^d:_h·|_Γ_1=0}. 
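Returning briefly to the convergence study above, the rates reported in the tables follow from errors at successive resolutions in the usual way, rate = ln(e_coarse/e_fine)/ln(h_coarse/h_fine), and analogously for refinement in Δ t or γ. A minimal sketch with purely illustrative error values (not data from the tables):

```python
import numpy as np

def rates(h, err):
    """Observed order between successive refinements: log(e_i/e_{i+1}) / log(h_i/h_{i+1})."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Hypothetical error values, for illustration only:
h = [1 / 4, 1 / 8, 1 / 16, 1 / 32]
err = [2.1e-2, 5.4e-3, 1.35e-3, 3.4e-4]
print(rates(h, err))   # approximately [1.96, 2.00, 1.99], i.e. second order
```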
The timestep size Δ t=0.05, N_sc=20, γ=10^5, ==0 (no-external source), and a constant coupling parameter s=0.001 are considered. We consider the mean kinematic viscosity E[ν]=0.001, and mean magnetic diffusivity E[ν_m]=0.01 for random samples with distribution ν∼𝒰(0.0009,0.0011), and ν_m∼𝒰(0.009,0.011), respectively. A triangular unstructured mesh of the domain that provides a total of 419058 dof is considered, where velocity dof =186134, magnetic field dof =186134, pressure dof =23395, and magnetic pressure dof =23395. We run the simulations until T=40 and plot the speed contour and magnetic field strength in Fig. <ref> for both SCM-SPP-SMHD and Coupled-SMHD algorithms. We observe a very good agreement between the solutions of the two algorithms which supports our claim in the theory. §.§ Variable 5D Random Viscosities with regularized lid-driven cavity problem We now consider a 2D benchmark regularized lid-driven cavity problem <cit.> with a domain Ω=(-1,1)^2. No-slip boundary conditions are applied to all sides except on the top wall (lid) of the cavity where we impose the following boundary condition: _j|_lid=(1+k_jϵ)(1-x_1^2)^20. On all sides of the cavity, we enforce the following the magnetic field boundary condition: _j=(1+k_jϵ)01. The maximum speed of the lid is 1 and the characteristic length is 2. In this experiment, we consider γ=10000, ==0, and Clenshaw–Curtis sparse grid as the SCM, generated via the software package TASMANIAN <cit.> with N_sc=11. The generated computational barycentered refined mesh of the domain provides a total of 729840 degrees of freedom (dof), where velocity dof =324266, magnetic field dof =324266, pressure dof =40654, and magnetic pressure dof =40654. The flow initiates from the state of rest in absence of the magnetic flux density. In this section, we consider the equations (<ref>)-(<ref>) with a random viscosity ν(,), and magnetic diffusivity ν_m(,), where =(y_1,y_2,⋯,y_d)∈Γ⊂ℝ^d is a higher-dimensional random variable, 𝔼[ν]()=2c/15000, and 𝔼[ν_m]()=0.01c for a suitable c>0, ℂov[ν](x,x^')=4/15000^2exp(-(x-x^')^2/l^2), and l is the correlation length. This random field can be represented by the Karhunen-Loéve expansion: ν(, )=2/15000ψ(,),andν_m(,)=1/100ψ(,), where ψ(,)=(c+(√(π)l/2)^1/2y_1(ω)+∑_j=1^q√(ξ_j) (sin(jπ x_1/2)sin(jπ x_2/2)y_2j(ω)+cos(jπ x_1/2)cos(jπ x_2/2)y_2j+1(ω)), in which the infinite series is truncated up to the first q terms. The uncorrelated random variables y_j have zero mean and unit variance, and the eigenvalues are equal to √(ξ_j)=(√(π)l)^1/2exp(-(jπ l)^2/8). For our test problem, we consider the random variables y_j(ω)∈[-√(3),√(3)], the correlation length l=0.01, d=5, c=1, and q=2. We run the simulation with time-step size Δ t=5 until the simulation end time T=600 for various values of the coupling parameter s together with the perturbation parameter ϵ=0.01 in the initial and boundary conditions. The Fig. <ref>-<ref> illustrate the velocity solution as the speed contour, and the magnetic field strength for s=0.001, 0.01, 0.1, and 1 and are the outcomes of the SCM-SPP-SMHD scheme given in Algorithm <ref>. As s grows, the impact of the Lorentz force gets stronger in the flow field, which in turn slow down the evolve over time process. This can be observed as the speed and the size of the vorticities get reduced in Fig. <ref> while the magnetic field strength realizes a type of reflection symmetry in Fig. <ref>. For this experiment, in Fig. <ref>, we plot the system energy vs. 
time for various values of s using both the SCM-SPP-SMHD and Coupled-SMHD methods. To compare the two models, we compute the weighted mean energy at time t=t_n, which is defined as the weighted average of 1/2(t_n,,^j) for all sample points. We found excellent agreements between the energy plots from the solution of the Coupled-SMHD scheme and the solution of penalty projection based SCM-SPP-SMHD method with γ=10000, which support the theory. § CONCLUSION AND FUTURE WORKS In this paper, we propose, analyze, and test an efficient and accurate grad-div stabilized penalty-projection SCM-SPP-SMHD scheme in conjunction with SCM for solving stochastic MHD flow problems. The intriguing algorithm has several features that make it efficient and accurate: (1) The use of Elsässer variables formulation allows for a stable decoupling of the coupled PDEs, (2) A discrete Hodge decomposition is used for decoupling further which allows to use two much easier linear solves instead of using a difficult solve of the saddle point problems for each realization at each time-step, (3) The four sub-problems are designed in an elegant way that at each time-step, the system matrix remains common to all realizations but with different right-hand-side vectors, which saves a huge computer memory and assembly time of assembling several global different system matrices; Furthermore, this allows to take the advantage of using block linear solvers, (4) The use of ensemble eddy-viscosity terms provide stability of flows that are not resolved on particular meshes. (5) The large (but optimal) coefficient of the grad-div stabilization parameter provides accuracy of the splitting algorithm equivalent to a coupled scheme, and (6) The sparse grid SCM wrapper helps to use fewer realization. The SCM-SPP-SMHD algorithm is rigorously proven to be stable and converges to the equivalent coupled Coupled-SMHD ensemble scheme for large grad-div stabilization parameters. The numerical test verifies the first-order convergence of the SCM-SPP-SMHD scheme to the Coupled-SMHD scheme.  The optimal spatial and first-order temporal convergence rates of the scheme are verified with synthetic data for analytical test problems with random noise in the parameters values. We implement the scheme on benchmark channel flow over a step problem and a regularized lid-driven cavity problem with space-dependent 5D random high Reynolds and high magnetic Reynolds numbers. We found the efficient SCM-SPP-SMHD scheme performs well with high grad-div stabilization parameters. This penalty-projection-based efficient algorithm will be an enabling tool for large-scale simulation of complex 3D MHD problems. In the future, we will implement this scheme on the 3D Taylor-Green vortex problem, and examine its performance together with various linear solvers and appropriate preconditioners. As a future work, we will propose, analyze, and test first- and second-order accurate time-stepping penalty-projection schemes for the UQ of N-S flow problems following the work in <cit.>. An evolve-filter-relax stabilized Reduced Order (ROM) SCM for the time-dependent MHD flow will be proposed following the recent work in <cit.>. § ACKNOWLEDGEMENTThe National Science Foundation (NSF) is acknowledged for supporting this research through the grant DMS-221327. We also acknowledge the Texas A&M International University for providing logistic support. The authors also thank Dr. Leo G. Rebholz for sharing his thoughts which greatly improved the manuscript. plain
http://arxiv.org/abs/2310.17779v1
{ "authors": [ "Muhammad Mohebujjaman", "Julian Miranda", "Md. Abdullah Al Mahbub", "Mengying Xiao" ], "categories": [ "math.NA", "cs.NA", "65M12, 65M22, 65M60, 76W05" ], "primary_category": "math.NA", "published": "20231026210452", "title": "A Penalty-projection based Efficient and Accurate Stochastic Collocation Method for Magnetohydrodynamic Flows" }
Earth and Planets Laboratory, Carnegie Institution for Science, Washington, DC 20015 When diamond anvil cell (DAC) sample chambers are outfitted with both thermal insulation and electrodes, two cutting-edge experimental methods are enabled: Joule heating with spectroradiometric temperature measurement, and electrical resistance measurements of samples heated to thousands of kelvin. The accuracy of temperature and resistance measurements, however, often suffers from poor control of the shape and location of the sample, electrodes, and thermal insulation. Here, we present a recipe for the reproducible and precise fabrication of DAC sample, electrodes, and thermal insulation using a three-layer microassembly. The microassembly contains two potassium chloride thermal insulation layers, four electrical leads, a sample, and a buttressing layer made of polycrystalline alumina. The sample, innermost electrodes, and buttress layer are fabricated by focused-ion-beam milling. Three iron samples are presented as proof of concept. Each is successfully compressed and pulsed Joule heated while maintaining a four-point probe configuration. The highest pressure-temperature condition achieved is ∼ 150 GPa and ∼ 4000 K.A diamond anvil microassembly for Joule heating and electrical measurements up to 150 GPa and 4000 K Michael J. Walter January 14, 2024 ====================================================================================================§ INTRODUCTIONMany breakthrough experiments have been enabled by innovative diamond anvil cell (DAC) loading techniques that combine two to four electrodes and one to two layers of thermal insulation. Two electrodes have been used for Joule heating (also known as “internal resistive heating”) in order to study the melting curve of iron up to 25 GPa in one of the earliest DAC melting studies,<cit.> and up to 270 GPa in one of the highest-pressure DAC melting studies.<cit.> Joule heated DACs have also been used to precisely map the hcp-fcc phase boundary of iron,<cit.> and to study the high pressure melting curves and/or equations of state of gold, tin, rhenium, and platinum.<cit.> Meanwhile, four electrodes have been used in electrical resistance measurements of iron and iron alloys compressed and laser-heated to 100s of GPa and 1000s of K in DACs in order to study the conductivity of Earth's core, with major implications for the history of Earth's geodynamo.<cit.> Four-electrode loadings with thermal insulation have also been used to synthesize and characterize the electrical properties of new superconducting materials using laser-heating at 10s to 100s of GPa.<cit.> Despite the decades-long publication record of Joule heated diamond cells and of electrical resistance measurements of thermally insulated diamond cell samples, the methods remain relatively uncommon and inaccessible because of the extreme difficulty of sample preparation.The main preparation challenge is to position the sample, electrical leads, and thermal insulation at the appropriate locations and with the appropriate orientations above the culet. First, two or four electrodes must connect to the sample of interest. During compression, the electrodes cannot translate too far from the edge of the sample, or cause too much deformation of the sample. For example, if an electrode slips inward, the shape of the sample of interest (e.g. the Joule-heated hotspot) can easily become too small or too irregular for accurate characterization. 
Second, the sample must be insulated from the diamond anvils with relatively uniform layers of a transparent, non-reactive insulator in order to limit temperature gradients as much as possible during Joule-heating or laser-heating. Otherwise, it is difficult to interpret spectroradiometric temperature measurements. For the case of four-point probe measurements, this typically means that six pieces of insulation - four electrical and two thermal - must be selected and placed in appropriate positions so that when pressure is applied, all insulation and electrodes remain well-positioned.A set of publications from one to two decades ago presents an engineered solution for many applications requiring thermal insulation and electrodes. <cit.> The solution is to synthesize “designer diamond anvils” using sputtered thin film electrodes, chemical vapor deposition (CVD) of an electrically insulating diamond layer, and polishing and etching to reveal the electrodes and create a pit for thermal insulation.<cit.> Unfortunately, designer diamond anvils are not readily available (e.g. for purchase commercially) despite their invention more than ten years ago. Moreover, the electrical lead thickness reported for designer diamond anvils is less than1 μm, limiting the amount of current that can be delivered.<cit.>Rather than using designer anvils, the thermally-insulated electrical experiments in Refs. Liu1975, Boehler1986, Zha2008,Komabayashi2009, Sinmyo2019, Geballe2021, Ohta2016, Ohta2010, Inoue2020, Zhang2020, Zhang2021, Zhang2022, Zhu2023 used standard diamond anvils and metal gaskets with electrically insulating inserts (or electrically insulating coatings). The typical description of the loading method in these studies lacks detail. For example, Ref. Sinmyo2019 simply reports: “The foil was loaded between the Al_2O_3 thermal insulation layers and connected to an electrode several millimeters away from the culet.” Our previous publication using Joule-heated platinum samples provides slightly more detail, but it still lacks any prescription of how to appropriately position sample, electrodes, and insulation:<cit.> “At least five pieces of platinum and several pieces of KCl are stacked so that when the diamond cell is closed, a central piece of platinum of 5 to 30 μm-width is separated from both anvils by KCl layers and electrically connected to the four outer electrodes.”We suspect that the reason for the brief descriptions in many publications is that the actual methods rely on intuition, subtle choices of where to position materials, and lots of trial-and-error. Recently, Ref. Ohta2023 presented an engineering solution to one technical problem by encapsulating a sample that is milled with a focused ion beam (FIB) in an insulator that is also milled with a FIB. Here, we also use FIB encapsulation of iron samples in four-point probe geometries. Moreover, we extend the encapsulation method in four ways: by using a three layer assembly method, presenting a standard recipe for fabrication of outer electrodes, employing Joule heating, and extending the pressure by using smaller culets. In addition, we document the reproducibility of our three-layer assembly method by reporting photos during and after heating of four samples – three of which maintain a four-point probe geometry during compression and heating to > 2000 K. 
No photo is saturated – not even photos of hotspots that reach ∼ 3000 K peak temperature. § METHODS The method of sample preparation is divided into two sections: the fabrication of non-standard parts (<ref>), and the assembly of all parts (<ref>). The two sections are presented in chronological order (fabrication, then assembly), but they can be read in either order. The fabrication section describes recipes for four non-standard parts: an electrode holder and three thin slabs – a bottom slab for thermal insulation, a middle slab that contains the sample, and a top slab for thermal insulation and electrical connection. The assembly section describes the procedure used to (a) make a gasket with an insulating insert, (b) integrate the gasket with outer electrodes, (c) stack the three thin slabs, (d) connect the top slab's electrodes to the outer electrodes, and (e) dehydrate and close the cell. The slabs and gasket use a homemade mixture of cubic boron nitride (cBN) and epoxy – see Appendix <ref>. Hereafter, we refer to the mixture as “cBN”. All the ingredients (i.e. consumable materials) needed for the sample preparation method are listed in Table <ref>. Finally, subsection <ref> briefly describes compression and heating of the sample. §.§ Fabrication of non-standard parts §.§.§ Fabrication of the bottom, middle, and top slabs The fabrication and use of three thin slabs are the main innovation in this paper. The three slabs contain the sample, thermal insulation, electrical insulation, the innermost parts of the gasket, and the innermost electrodes. Slab fabrication is accomplished using an auxiliary diamond cell with 200 μm culets and three spare steel gaskets. Our auxiliary diamond cell is a smooth-sliding “symmetric cell”. The key requirements for the auxiliary cell are that it (a) opens and closes easily, and (b) reproducibly compresses samples to at least 30 GPa without any need for realignment of anvils. The first step in fabricating each slab is to press a cBN disc inside a steel gasket. First, a steel gasket is compressed to 20 GPa, and the entire culet region (200 μm-diameter) is removed by laser milling. Next, a chunk of cBN is pressed into the steel gasket. The cBN is thinned by an iterative process of (a) laser milling a hole (e.g. 120 μm-diameter) and (b) compressing to 10-30 GPa. The target thickness is slightly less than 1/3 the thickness of the indentation in the real gasket that will be used in the high pressure experiment. For example, three 10 μm slabs are ideal for a 32 μm-thick indentation used for experiments on 200 μm culets. Alternatively, some slabs can be slightly thicker and others slightly thinner, as long as the sum of the three slab thicknesses is slightly less than the thickness of the gasket for the actual experiment. Table <ref> lists dimensions for gaskets and slabs. To measure thickness accurately, we either (a) recover a laser-milled disc of cBN, turn it on its side, and measure with a microscope, or (b) gently press the gasket between anvils and measure with interferometry using white light and a spectrometer. To speed up the indenting process, we use a torque-measuring screwdriver to apply consistent torque (e.g. 4 inch-pounds) on each screw. The bottom slab is used for thermal insulation. It contains a cBN outer region and a KCl-filled hole. Photos of the bottom slab for sample #1, the sample that is highlighted here, are shown in Fig. <ref>.
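As an aside on the white-light interferometry mentioned above (a worked relation of our own, not spelled out in the text): for a gap of thickness t filled with a medium of refractive index n, the interference order is 2nt/λ, so counting m fringes between two wavelengths λ_a > λ_b in the reflected spectrum gives the thickness directly, 2nt/λ_b − 2nt/λ_a = m, hence t = m λ_a λ_b / (2n(λ_a − λ_b)). For example, m = 10 fringes between λ_a = 700 nm and λ_b = 500 nm across an empty gap (n ≈ 1) correspond to t = 10 × 700 × 500 / (2 × 200) nm ≈ 8.8 μm, roughly the target slab thickness; if the gap is filled with the cBN mixture, its refractive index must be used for n instead.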
After preparing a 10 μm-thick, 200 μ-diameter cBN slab at the center of a steel gasket, we use a laser mill to drill a 50 μm diameter hole, place a small piece of KCl in the hole, close the DAC, and press down by hand (i.e. without using screws for compression). More KCl can be added if the original piece fills less than ∼ 80% of the hole. Overfilling the hole can lead to a gasket blowout during the compression experiment. Note that our laser mill uses a sub-nanosecond pulsed near-IR laser (PowerChip PNP-M08010) and follows the design of Ref. Hrubiak2015.The top slab is used for thermal insulation and electrodes. The process is similar to the process for the bottom slab, but with extra steps for electrode placement (Fig. <ref>). First, four squares are laser milled at the Cartesian coordinates (± 40 μm, ± 40 μm) with respect to the slab center. Each square has a 30 μm side-length. Next, ∼ 10 μm-thick platinum is cut into ∼ 30 × 30 μm^2 squares and placed in the holes. (All pieces of platinum in this paper are cut by hand - we use straight razors for all pieces <100 μm diameter and scissors for 127 μm diameter wire.)The middle slab contains the sample, innermost electrodes, and the sample's lateral buttress. In this study we use iron as the sample, and fabricate it along with the innermost electrodes from a single slab or iron. The buttress material is alumina for all examples in the main text; in appendix <ref>, one example uses KCl. Fig. <ref> shows the middle slab fabrication process for sample #1. The first steps are similar to fabrication of the bottom slab: a 10 μm-thick cBN slab is created, a 70 μm diameter hole is milled in the cBN, and alumina powder (Johnson Matthey 22 μm alpha, 99.5% metals basis) is pressed into the hole. Unlike in the case of KCl insulation, alumina poses little risk of gasket blowout because of its high yield strength. Still, the relatively brittle alumina region should be small enough to allow easy maneuvering of the final 140 × 140 μm^2 slab without crumbling. Next, sample material is prepared for FIB milling. The iron starting material is pressed to 10 μm thickness using a second auxiliary DAC outfitted with 1 mm culets. The 10 μ-thick piece of iron is then glued over a hole in a 400 μm-thick brass slab using a ∼ 200 μm blob of silver epoxy (or any other sticky material that is somewhat electrically conductive). Next, the brass slab is clamped atop a ∼ 5 × 5 mm^2 piece of iron foil on an aluminum SEM pin stub. Likewise, the steel gasket with cBN/alumina slab is clamped atop a piece of sapphire on an aluminum SEM pin stub. The use of the 5 × 5 mm^2 iron foil and sapphire piece at the bottom of the sample are crucial for preventing contamination of the iron and sapphire during FIB milling.Next, a FIB (Helios G4 PFIB UXe DualBeam FIB/SEM) is used to shape the iron slab into a sample and innermost electrodes in a four-point probe configuration, and to mill a matching hole in the cBN/alumina slab. Using a 4 nA ion beam, 8 μm-wide trenches are cut through the iron to make the shape in Figs. <ref>d-e. The shape contains a central sample that is 8 × 22 μm in surface area, along with four electrodes. The resulting shape of iron is recorded in an SEM image collected using the SEM column of the dual beam instrument. Note that a connection is maintained between the iron piece and the main iron foil during this step of milling (Fig. <ref>d). A slightly wider shape (e.g. 
0.5 μm wider in each dimension) is used as the pattern for milling the hole in the cBN/alumina slab (yellow pattern in Fig. <ref>e). The shape is milled with 4 nA or 15 nA ion beams, depending on sample dimensions and time constraints. Fig <ref>f,i,j shows the results for sample #1. Fig. <ref>b show the result for sample #2.Finally, the iron is transferred into the alumina hole using one of two methods, shown in Fig. <ref> and Fig. <ref>, respectively. In the case of samples #1, 3, and 4,the transfer is performed outside the FIB (e.g. Fig. <ref>i-m). First, we seat the steel gasket with cBN/alumina onto the auxiliary diamond cell. During this step, we do not close the auxiliary cell; doing so would deform the hole in the cBN and alumina. Next, we use the laser mill to cut the connection with the main platelet and free the Fe sample . We then transfer the iron to the cBN/alumina slab by hand (Fig. <ref>i), and use a micromanipulator (Microsupport AxisPro APSS) to push the sample on top of the hole (Fig. <ref>j). For some samples, including sample #1, alignment of the sample and the matching hole is a challenge because of magnetic interaction between the iron sample on the one hand, and the tungsten carbide seat and/or the diamond cell body on the other hand. In such cases, we remove the DAC from the micromanipulator, place a neodymium magnet below the anvil, and orient the magnetic field along the long direction of the hole in the cBN/alumina slab. As long as the sample does not fly away during this process, the result is a well-oriented sample (Fig. <ref>). Next, we return the DAC to the micromanipulator (Fig. <ref>l), move the sample so that it is resting atop of the hole, and press gently downwards on the large areas of the iron so that the sample is stuck partway into the hole (Fig. <ref>m). Now that the sample is well positioned, we close the cell. As it closes, a relatively uniform force is applied across the iron sample, pressing the iron it into the hole in the cBN/alumina. As we press with more force, the cBN, alumina, and/or iron deform slightly to eliminate air-gaps. Finally, the middle slab is cut free from the auxiliary steel gasket using the laser mill (Fig. <ref>n-o). In the case of samples #2, 5, 6, and 7, the FIB's in-situ micromanipulator is used to place the iron in the hole (<ref>) in a procedure we call a “lift-in”, in analogy with the well-known “lift-out” FIB procedure for extracting thin sections. For sample #2, we used circular slabs (i.e. discs) rather than square slabs. The “lift-in” recipe is: (1) The FIB's micromanipulator is outfitted with a sharp tungsten needle. The needle's tip is positioned just above the electrode that is still attached to the bulk of the iron slab. (2) The needle's tip is attached to the iron electrode using a small (∼ 1 x 5 μm^2) pad of tungsten deposition (WCO). (3) The iron is cut free using 15 nA beam current, and lifted vertically away from the bulk of the iron slab (Fig. <ref>d). (4) The needle is retracted. (5) The stage is moved to the cBN/alumina position. (6) The needle with iron sample is re-inserted and lowered into the hole in the cBN/alumina slab(Fig. <ref>e-g). (7) Tungsten deposition is used to attach the iron to the cBN. (8) The needle is milled free, and retracted. An SEM image shows that the tungsten deposition extends in a ∼ 50 × 80 μm oval, far beyond the intended weld rectangle ( Fig. <ref>g). To minimize contamination, we typically raster an ion beam across the entire cBN/alumina slab area except for the weld point. 
The final product is shown in Figs. <ref>h-k. After removing the sample from the FIB and from the aluminum stub holder, we confirm that the iron and alumina pieces that were underneath the iron foil and the alumina plus cBN layer were indeed milled by the ion beam (e.g. Fig. <ref>l). Otherwise, a contamination layer of some other stub material (e.g. aluminum) is created on the underside of the iron sample or alumina buttress.The two methods of transferring iron into the alumina hole have similar success rates in our experience. The methods require different skills – skills using a FIB and its micromanipulator in one case; skills using needles by hand and a stand-alone micromanipulator in the other case. §.§.§ Fabrication of the electrode holder An electrode holder that is independent of the gasket (i.e. never glued to the gasket) is a helpful tool for creating a robust electrical pathway from the edge of the cell to the edge of the culet. Our early versions of electrode holders are described briefly in Refs. Somayazulu2019, Geballe2021. Here, we fabricate the electrode holder using the following procedure (Fig. <ref>): (1) A 1.5 mm-thick stainless steel disc is made using standard machine shop techniques. The outer diameter is ∼ 27 mm; each disc is machined to match the inner diameter of the piston of the diamond cell. The disc's inner diameter is 13 mm, allowing plenty of clearance around the tungsten carbide seat. (2) Four sides of the disc's outer diameter are further machined using 240 grit sandpaper in order to add clearance when sliding the disc into the cell. The four flattened sides are designed to align with the portholes of the diamond cell. (3) Four rectangles of copper-clad board (7 × 5 mm, 1/16 inch-thick) are glued to the disc using epoxy (Stycast 2750FT). The positions for the copper boards are chosen to avoid obscuring the view through the diamond cell's portholes. In addition, epoxy is added as electrical insulation along the inner diameter of the stainless steel ring. (4) Four copper wires (0.2 mm diameter, 2 cm long) are soldered to the copper board. (5) Four pieces of copper foil (9 × 2 × 0.2 mm) are soldered to the copper board, pointing inwards. The result is shown in Fig. <ref>a. (6) Four platinum wires are soldered to the copper foil using a very small amount of solder. Before soldering, it is best practice to shape each end of the platinum wire. The outer end is pressed slightly in order to ease the wetting of the platinum by solder. The inner end is pressed to ∼ 40 μm-thickness and razor-cut to an arrow shape that is ∼ 400 μm-long and tapered from ∼ 200 to less than 50 μm width at its tip. The process and result are shown in Fig. <ref>b-f. Note that the order of steps (5) and (6) can be reversed. (7) The electrode holder is placed in the DAC, and secured with sticky tack, if necessary. (8) Tweezers are used to bend the copper foil and platinum wires until the tips of the arrowheads are within ∼ 120 μm of the tip of the culet. The result is shown in Fig. <ref>g,h. Note that the platinum wire is purposely chosen to be almost double the length needed to reach the culet's edge. The extra length allows the platinum to be shaped into an S-curve which can be bent in order to adjust the position of the platinum arrow. The extra length also allows re-use of the outer electrodes– see Appendix <ref> for details.§.§ Assembly The assembly recipe assumes successful fabrication of the bottom, middle, and top slabs (Section <ref>; Figs. 
<ref>d, <ref>o, and <ref>h), as well as the electrode holder (Fig. <ref>h). Photographs of the assembly process for sample #1 are in Figs. <ref>, <ref>, and <ref>. Photographs of the fabricated slabs and the assembly process for sample #4, which was loaded on 100 μm culets, are shown in Fig. <ref>.We divide the assembly description into three parts: (a) assembly outside the culet region, (2) assembly on top of the culet, and (3) completing the circuit and closing the DAC.§.§.§ Assembly outside the culet regionWe use Zha-type DACs.<cit.> Other types of cells can also be used – see Appendix <ref>. We use standard anvils (2 mm tall; standard cut; 200 μm flat culet or 100 μm flat with a single 8 degree bevels to 300 μm diameter), standard seats (tungsten carbide; 1 mm opening; 60 degree full opening), and a standard gasket material (250 μm-thick tungsten or rhenium). We use standard procedures to glue and align the anvils and to indent the tungsten gasket (20 and 30 GPa indentation pressure for 200 and 100 μm culets, respectively). The gasket is held with sticky tack onto the cylinder side of the DAC (Fig. <ref>).An electrically insulating insert made of cBN is fabricated in the tungsten gasket by the following procedure. First, we laser mill a hole that covers the entire culet and bevel region (e.g. a 300 μm diameter hole for a 300 μm diameter bevelled area). Next, we add a large, ∼ 400 μm diameter chunk of cBN, and compress to 30-35 GPa. Pressure is measured using ruby fluorescence or the diamond anvil Raman edge.<cit.> Additional cBN is added and the gasket is re-indented until three conditions are met: (a) cBN completely insulates the tungsten indentation from the piston-side anvil, (b) the thickness of cBN insulation at the top of the indentation is at least 40 μm, and (c) the cBN sticks to the tungsten gasket when the cell is opened (rather than sticking to the piston anvil). Typically, three to ten iterations of “add cBN” and “indent to 30-35 GPa” are required to achieve all conditions. After achieving all conditions, the cBN thickness can typically be reduced by further indentation without ruining any of the conditions (a)-(c). The indentation thickness is 28-34 μm for 200 μm culets and 20-24 μm for 100 μm culets. After fabricating the cBN insert, we add a ring of glue (Loctite Gel Control Super Glue) or epoxy (Stycast 2750FT) around the outermost cBN. For example, Fig. <ref>d shows a the ring of transparent glue encircling the cBN. The ring improves the mechanical stability of any pieces of cBN that are sticking up above the tungsten surface, as well as electrical insulation to separate platinum electrodes from the tungsten surface. An electrode holder with outer electrodes is created by the recipe described in section <ref>, and placed into the inner diameter of the DAC's piston (Fig. <ref>b). The steel disc rests flatly against the platform that houses the set screws for the tungsten carbide seat. To secure the disc, we press centimeter-sized pieces of sticky tack against the edges of disc and against the piston. The center of the electrode holder contains four platinum wires with tips that have been shaped into arrowheads and positioning withing ∼ 120 μm of the edge of the 200 μm diameter culet, as shown in Fig. <ref>c. Next, we integrate the outer electrodes and the gasket, using an iterative “press and bend” process. 
We press the platinum into the cBN, rearrange the platinum arrowheads by using tweezers to bend the platinum and the copper foil that is soldered to the platinum, and iterate many times. Once the four platinum arrowheads are well positioned (within ∼ 120 μm of the culet edge, and well separated from each other) and do not move substantially when the DAC is closed, the outer electrode “press and bend” operation is complete (Fig. <ref> d-e). Next, we cut a square-shaped hole at the center of the cBN insert. The side length of the square is 140 μm for the 200 μm culets and 90 μm for the 100 μm culets. We use a square hole instead of a circle to simplify rotational alignment of the pieces that are placed inside the hole (see section <ref>) – an especially useful trick for the 100 μm culets. At some time before closing the DAC for the final time, the four copper wires that are attached to the edge of the electrode holder are soldered to four pieces of copper-clad board (5 × 5 mm, 1/32 inch-thick) that are glued with epoxy to the cylinder's portholes. It is important to solder while the DAC is closed so that vapors from soldering flux do not precipitate onto the anvil's culet. At this point, all preparations are complete for the region that is outside the culet region and inside the DAC piston and cylinder. The result is a gasket, cBN gasket insert, and four isolated paths of metal (copper, solder, and platinum) connecting four points on the edge of the Zha-cell piston to four points near the edge of the culet.

§.§.§ Assembly on top of the culet

One by one, we transfer each of the three thin slabs onto the cBN gasket insert and, using a micromanipulator, push each delicate slab into the hole of the gasket (Fig. <ref>). After stacking the top slab, we close the cell and press firmly by hand to make sure the stack of slabs is seated inside the gasket hole.

§.§.§ Completing the circuit and closing the DAC

Next, we complete the four-point probe circuit, and close the DAC. To complete the circuit, the four small squares of platinum shown in Fig. <ref>k must be connected to the outer electrodes shown in Fig. <ref>. We make the connection by a “place and press” iterative procedure. We place pieces of platinum, then press them into the cBN gasket and slabs by closing the DAC, and repeat many times (Fig. <ref>). We use a combination of 25 μm diameter platinum wire (e.g. Fig. <ref>c) and 5 to 10 μm-thick slabs of platinum (e.g. Fig. <ref>a), which are themselves made by pressing the 25 μm wire to the desired thickness between 1 mm anvils in the second auxiliary DAC. The three keys to success in this step are: (1) ensure that pieces of platinum are never pressed against the KCl at the center of the culet, (2) continue placing and pressing small pieces of platinum until the four-point probe is complete and until the pieces do not shift out-of-position when gently pressed between the anvils, and (3) make sure that each electrical path is narrow enough to avoid short circuits during the compression experiment. Finally, the DAC is closed most of the way, leaving a ∼ 5 to 20 μm gap for gas flow (Fig. <ref>g-h). The DAC is inserted into a vacuum oven at 115^∘ C for 1 hour to dehydrate the KCl. After 1 hour, the oven is purged with argon gas, and within ∼ 5 seconds, the cell is removed and each of two screws is tightened to 1.5 in-lbs of torque. After cooling to room temperature, the measured pressure is typically in the range 2-5 GPa (Fig. <ref>i).
In some cases, platinum connectors slip during cell closure, creating open circuits or short circuits. In these cases, the cell can typically be opened, platinum can be replaced or rearranged, and the dehydration procedure can be repeated. §.§ Compression and heatingPressure is increased by tightening screws. Pressure is measured using ruby fluorescence up to ∼ 50 GPa, and the diamond anvil's Raman edge above ∼ 20 GPa.To make final electrical connections, we typically fasten a copper-clad board to the body of the diamond cell for strain relief, and solder copper wires to and from four sections of the board (Fig. <ref>k-l). Our boards also have SMA connectors and barrel connectors for integration with the electrical pulser and voltage probes described in Ref. <cit.>. The samples are compressed while monitoring visually for short circuits. They are also monitored for short circuits and open circuits using a Keithley Sourcemeter 2400 to measure four-point probe resistance. Pulsed Joule heating and simultaneous measurements of four-point probe resistance and spectroradiometric temperature are performed using the methods in Refs. <cit.>. The detailed results of temperature and resistance measurements will be published elsewhere. Here, we focus on the evolution of the shape of samples and hotspots upon compression and heating. § RESULTSThe main result is simple: three samples (#1, 2, and 3) were successfully compressed and Joule-heated to the pressure and temperature range 50-150 GPa and 2000-4000 K while measuring resistance in a four-point probe configuration (Fig. <ref>). A fourth sample (#4) was successfully compressed and Joule-heated to 100 GPa and 3000 K, but the four-point probe geometry was destroyed by a short-circuit upon compression. Samples #1 and #2 were loaded between 200 μm culets, while samples #3 and #4 were loaded between 100 μm culets. More detailed results reveal certain strengths and weaknesses of the loading procedure. Preparation of sample #2 used two different strategies as compared to sample #1. The loading for sample #2 was successful, showing flexibility of the loading procedure, but the strategies were not ideal. First, the slabs for sample #2 were cut into circles rather than squares, requiring more careful rotational alignment of the middle and top slabs. Second, the bottom slab was laser-milled into a relatively small circle, giving extra clearance compared to the gasket hole – see Appendix <ref>. Our motivation was to ensure easy placement of the slab into the hole. Unfortunately, the extra clearance allowed the bottom slab to slide off to one edge of the hole during the loading process, resulting in a near-overlap of cBN and the part of the iron sample that becomes hot during Joule heating (Figs. <ref>g, l, q).Sample #3 was loaded between 100 μm culets with a 300 μm bevel region. All procedures followed the recipe for sample #1, but with different dimensions (Table <ref>). Upon the initial compression, one of the electrodes was not connected to the circuit. Luckily, at 60 GPa, all connections were finally complete, allowing four-point probe measurements. Sample #4 was also loaded between 100 μm culets (Fig. <ref>). In an attempt to avoid open circuits, we used larger pieces of platinum for sample #4 compared to sample #3. 
Unfortunately, during compression from 30 to 50 GPa, one of the platinum pieces from the top slab appeared to short circuit with one of the iron electrodes in the middle slab, eliminating the chance to perform four-point probe resistance measurements above 30 GPa for this sample. In summary, one experiment with 100 μm culets resulted in an open circuit below 60 GPa while another resulted in a short circuit at 40 ± 10 GPa, suggesting a small margin for error in our 100 μm culet recipe. A major reason for the small margin of error is that our machining tolerances seem to be marginal – both when laser-milling and when hand-cutting with a razor blade (Appendix <ref>). In addition, a different shape of FIBed iron might improve the success rate of making connection to the platinum electrodes. For example, the two narrow iron electrodes could flare out from their 2 μm inner connection to a much wider area (e.g. 20 μm) in the outer region.Fig. <ref> shows the result of an earlier design that uses an alternative sample shape. A four-point probe configuration with relatively narrow leads and a long and narrow sample region were maintained up to 50 GPa. Unfortunately, the pulsed Joule heating hotspots at 50 GPa were located outside of the central region(white arrows in Fig. <ref>d). These regions were not suitable for clean experiments because they are compressed between layers of cBN (which is mixed with epoxy). Moreover, they could not be monitored with a four-point probe.Finally, appendix <ref> shows two examples of messy, relatively unsuccessful loadings. One loading used alumina to surround the sample from all sides (i.e. no KCl). The other loading used KCl to surround the sample from all sides (i.e. no alumina). Each loading ended up with short circuits at high pressure, eliminating the chance to make four-point resistance measurements. Nonetheless, each sample was successfully pulsed Joule heated to 3000 K. In each case, the cause of the short circuit is unrelated to the choice of pressure medium. § DISCUSSIONThe three layer recipe presented here involves many steps, each of which is simple enough to be explained by photograph. By contrast, the traditional loading method that we used in Ref. Geballe2021 involves fewer steps, fewer engineering controls, and requires more subtle manipulations that are difficult to describe by photograph. For example, a key step in our previous loading method was arranging four pieces without securing any of them – a platinum sample, a KCl insulation layer, and two platinum electrodes. Next, the four pieces were pressed while monitoring for unwanted slippage of any piece. Many iterations were necessary, involving micromanipulation of pieces that had slipped and addition of new pieces to fill gaps. It would be difficult to present a simple sequence of photographs that captured the variety of micromanipulations necessary in practice. In contrast, the new three layer recipe essentially solves this slippage problem by the fabrication of slabs. The main disadvantage of the three layer recipe is that the large number of steps requires a large amount of time. The results of the three layer recipe suggest unprecedented control of electrical fields and Joule-heated temperature fields in DACs. First, the wide, well-centered hotspots documented for samples #1-4 rival the wide Joule-heated hotspot documented in Fig. 2 of Ref. Zha2008. Second, the narrow electrical leads surrounding samples #1-3 are even narrower than the electrical leads in the breakthrough studies of Refs. 
Ohta2016, Inoue2020, Ohta2023. Third, Joule heating and four-point probe resistance measurements can be performed simultaneously for samples #1-3, a capability that has not been demonstrated at pressures > 50 GPa using any technology including designer diamond anvils, to the best of our knowledge. Fourth, the reproducibility of the three layer loading method is documented through photographs that show the shape of samples and of Joule-heating hotspots (Fig. <ref>). Moreover, the hotspots are not saturated. For comparison, all photos of hotspots in Refs. Ohta2023, Komabayashi2009, Ohta2016, Inoue2020, Weir2009, Weir2012, Zha2003, Zhang2020 appear to be saturated, and Refs. Zhang2022, Zhang2021, Sinmyo2019, Boehler1986, Liu1975 show no photos of hotspots whatsoever. The lack of documentation in previous publications makes it very difficult to know how hotspot size compared to the region being probed – i.e., the region between electrodes for resistance measurements, and the region probed with spectroradiometry for temperature measurements. The three layer assembly method is the crucial innovation in this study. When combined with FIB-embedding and electrode holders, it allows for the reproducibility shown in Fig. <ref>. In addition, the three layers allow for independent choices of the two thermal insulation layers and the electrical insulation that buttresses the innermost electrodes. In our implementation, the independent choice allowed us to limit sample deformation with the alumina middle layer, while simultaneously limiting heat losses with the KCl top and bottom layers. Like the designer diamond anvils, the three layer assembly method allows for control of geometry (e.g. samples #1-3, and to some extent, samples #4-5). Compared to designer diamond fabrication, the three layer method uses equipment that is much more common and somewhat less expensive – a focused ion beam and a laser mill for the three layer method, versus lithography, sputtering, chemical vapor deposition, and etching equipment for designer diamond anvils. The three layer method allows for electrodes that are relatively thick (∼ 5-10 μm starting thickness), and which are made from high-purity wires, as opposed to sputtered films. The thickness and purity are helpful in allowing the 10s of A currents required for microsecond-timescale pulsed Joule heating of our samples. The main downside of the three layer method compared to designer anvils is the large amount of human time and FIB time required to make the three slabs. The amount of time depends on a person's experience. Even the most experienced person can spend ten hours making a middle slab, plus twenty additional hours waiting for the FIB to mill parts. Bottom slabs are the simplest. They can take as little as one hour of human time per slab and do not require a FIB. Because of the substantial time investment in slab fabrication, it is crucial to have a high success rate when transferring each slab to the real gasket and stacking them atop one another. There are many possible applications of layered microassemblies for electrical measurements beyond four-point probe measurements. For example, they could simplify existing procedures or enable new innovations for NMR, ODMR, magnetic susceptibility, and Seebeck coefficient measurements in diamond cells <cit.>. Finally, it is possible that layered microassemblies for DACs could employ more than three layers.
For example, a five layer assembly made from KCl-Ir-sample-Ir-KCl could allow for uniform heating of silicates and oxides with Joule heating, a method analogous to the assemblies used for petrology in multi-anvil experiments.

§ CONCLUSIONS

A reproducible recipe for a three layer microassembly has been demonstrated for the preparation of samples for Joule heating and four-point probe electrical measurements in diamond anvil cells. The method uses a bottom layer with cBN and KCl, a middle layer with a FIB-milled sample embedded in alumina and surrounded by cBN, and a top layer with four squares of platinum and a disc of KCl embedded in cBN. The layers are assembled in a stack, connected electrically to the edge of the DAC, compressed, and pulsed-Joule heated up to 150 GPa and 4000 K. Successful fabrication and compression lead to many new opportunities for experiments with Joule heating and/or electrical measurements. This material is based upon work supported by the National Science Foundation under Grant No. 2125954. We thank Amol Karandikar and Maddury Somayazulu for fruitful discussions, Matthew Diamond for helpful comments on the manuscript, and Seth Wagner, Vic Lugo, and Tyler Bartholomew for machining parts.

§ CBN

The mixture of cubic boron nitride and epoxy is made using a variation on the method of Refs. Funamori2008, Wang2011. Epotek 353ND Parts A and B are mixed in the 10:1 ratio specified by the manufacturer. A ∼ 2 mm drop (∼ 50 mg) is placed on a glass slide. A hard plastic stick with a blunt end (∼ 2 mm diameter) is used to mix in cBN powder (0.25 μm from Advanced Abrasives Corporation). The powder is added little by little, mixing thoroughly after each addition. The final ratio is approximately 1:10 by weight (e.g. 50 mg epoxy, 500 mg cBN), but the actual determination that enough cBN has been added is qualitative: the mixture does not seem like it can accommodate any further cBN, and the hand of the person mixing is very tired (e.g. after 20 mins of stirring). For comparison, Refs. Funamori2008, Wang2011 simply report using a 1:10 ratio of epoxy to cBN. Next, the mixture is left to dry in air for at least 24 hours. The mixture is stored in air at room temperature. One batch can be used for several years.

§ LOADINGS WITH A MEDIUM OF PURE ALUMINA OR PURE KCL

Sample #6 used alumina for all three layers and a slightly different shape of the iron sample (Fig. <ref>a). The sample Joule heated to 1000s of K at ∼ 70 GPa, but short circuits eliminated the chance to make four-point probe resistance measurements. The short circuits were likely caused by the non-standard sample shape. Sample #7 used a pure KCl medium. In this case, we attempted to use two layers rather than three. The idea was to integrate the bottom and middle layers - a simple idea that led to a cascade of problems, including the bent electrode in Fig. <ref>b. The end result was a sample that heated to 3000 K, but which contained one short circuit and one very broad inner electrode.

§ RE-USING OUTER ELECTRODES

Outer electrodes can be re-used many times for many high pressure runs. After a high pressure run, we typically observe a thin and relatively brittle platinum arrowhead. We simply break off the thin region using a needle, or remove it by slicing with a curved scalpel. Next, we refurbish each arrowhead using one of several strategies. The quickest option is to use a curved scalpel to reshape the arrowhead. This option only works if a flat region of suitable thickness remains on the arrowhead.
A more time-consuming and more robust option is to remove the electrode holder and reshape the arrowhead with a DAC outfitted with 1 mm anvils, plus a scalpel. The most robust option is to remove the electrode holder, desolder the copper foil that holds the platinum arrowhead, and reshape the arrowhead with 1 mm anvils and a razor blade.

§ OUTER ELECTRODE RECIPE FOR OTHER TYPES OF DIAMOND CELLS

Our electrode holder is simple to fabricate and use because of the relatively large inner cavity of the piston of the Zha cell.<cit.> BX-90 cell pistons have similarly large inner cavities. Indeed, we have successfully used a cross-shaped electrode holder that slides into our BX-90 piston. Mao-Bell cells have small but easily accessible inner cavities, which allowed us to use electrode holders in Ref. Somayazulu2019. In contrast, symmetric cell pistons have small inner cavities. The small size allows little room for adhesives (e.g. sticky tack) to fix a loose electrode holder, and little surface area upon which a tight-fitting electrode holder can slide in and out. In addition, the portholes of a symmetric cell are small (5 mm diameter) and far from the cell's outer edge (14 mm), which makes it a major challenge to create electrical connections through the portholes after closing the cell. For comparison, the Zha cell's portholes have a 12 mm diameter and are located a mere 2 mm from the outer edge of the cell, making it very easy to solder electrical connections after closing the cell.

§ LASER- AND RAZOR-MACHINING PRECISION

The precision of laser milling is crucial for the three-layer assemblies. If there is too little clearance for the bottom and middle slabs in the gasket hole, then force must be applied (e.g. by micromanipulator needles) to try to push the slabs into the hole. The slabs are fragile due to their aspect ratio (∼ 10 × 140 × 140 μm), so they can easily break if pressed. If there is too much clearance, on the other hand, slabs can slide with respect to one another. For example, the bottom slab for sample #2 appears to have ∼ 10 μm of extra clearance on each side (Fig. <ref>c). The improved design used for sample #1 reduced the clearance to approximately 3 μm on each side (Fig. <ref>a). The 3 μm of clearance per side suggests that a slab can translate approximately 6 μm from one side to the other. Indeed, a pair of images of the top slab of sample #4 shows that it can translate 6 μm when pushed gently with a micromanipulator (Fig. <ref>e-f). This is apparently more than enough precision for the recipe for 200 micron culets. In contrast, the open circuit below 60 GPa for sample #3 and the short circuit above ∼ 40 GPa for sample #4 suggest that the precision is marginal for 100 μm culets. To begin to quantitatively understand the margin for error, we first estimate the placement accuracy needed for 200 micron culets and 100 micron culets. At a minimum, the required placement accuracy is the typical distance between platinum in the top slab and the wide, rhombus-shaped iron electrodes, after compression to typical pressures, say 80 GPa and 100 GPa, respectively. The typical distances are ∼ 25 μm and ∼ 12 μm, respectively. Two types of machine tolerances - from laser milling and razor cutting - can each approximately explain the 12 μm of slop needed to justify why the 100 μm recipe works marginally. The most sensitive applications of laser machining are cutting out the top and middle slabs, and cutting the square hole in the gasket.
Empirically, our recipe generates ∼ 6 μm of slop between each slab and the gasket (Fig. <ref>e-f), perhaps because this is the tightest fit that avoids slabs breaking during assembly, or perhaps because our recipe uses a bit of unnecessary clearance. Together, the middle slab-to-gasket slop and top slab-to-gasket slop could generate up to 12 μm of slab-to-slab slippage. Similarly, small imprecision in razor-cut platinum sizes can cause us to use platinum pieces that underfill the laser-cut holes (e.g. Fig. <ref>g), or to overfill their holes (e.g. Fig. <ref>h). In the end, the platinum pieces in a typical top slab are misplaced or overfilling their holes by up to ∼ 10 μm (Fig. <ref>a,d). In this way, we can approximately explain ∼ 12 μm slop in spite of the fact that each type of machining is capable of a couple μm precision on each side of a piece.
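This bookkeeping is simple enough to tabulate. The short Python sketch below is purely illustrative: it uses the representative numbers quoted in this appendix (3 μm laser-milling clearance per side, ∼10 μm of razor-cut platinum misfit, and required placement accuracies of ∼25 μm and ∼12 μm), and the "comfortable"/"marginal" wording is our own shorthand rather than a measured limit.

# Rough tolerance budget for the three-layer assembly (illustrative values from the text).

def laser_slip_um(clearance_per_side_um=3.0):
    # Each slab can translate by twice its per-side clearance inside the gasket hole;
    # the middle and top slabs can slip in opposite directions.
    per_slab = 2.0 * clearance_per_side_um
    return 2 * per_slab            # ~12 um of slab-to-slab slippage

def razor_slip_um(platinum_misfit_um=10.0):
    # Razor-cut platinum pieces may underfill or overfill their laser-cut holes.
    return platinum_misfit_um      # ~10 um of misplacement

required_accuracy_um = {
    "200 um culets (~80 GPa)": 25.0,   # typical platinum-to-iron distance after compression
    "100 um culets (~100 GPa)": 12.0,
}

worst_slop = max(laser_slip_um(), razor_slip_um())
for recipe, allowed in required_accuracy_um.items():
    verdict = "comfortable" if worst_slop < allowed else "marginal"
    print(f"{recipe}: ~{worst_slop:.0f} um slop vs ~{allowed:.0f} um allowed -> {verdict}")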
http://arxiv.org/abs/2310.18176v1
{ "authors": [ "Zachary M. Geballe", "Suzy M. Vitale", "Jing Yang", "Francesca Miozzi", "Vasilije V. Dobrosavljevic", "Michael J. Walter" ], "categories": [ "physics.app-ph", "cond-mat.mtrl-sci" ], "primary_category": "physics.app-ph", "published": "20231027144324", "title": "A diamond anvil microassembly for Joule heating and electrical measurements up to 150 GPa and 4000 K" }
http://arxiv.org/abs/2310.18404v1
{ "authors": [ "Yuri V. Kovchegov", "Brandon Manley" ], "categories": [ "hep-ph", "hep-ex", "nucl-ex", "nucl-th" ], "primary_category": "hep-ph", "published": "20231027180011", "title": "Orbital Angular Momentum at Small $x$ Revisited" }
High-Pressure Reentrant Ferroelectricity in PbTiO_3 Revisited
Russell J. Hemley
January 14, 2024
=============================================================

§ INTRODUCTION

Formal analysis of multi-agent systems is becoming increasingly important as the procedures, protocols, and technology that surround us get more and more complex. Alternating-time temporal logic (ATL) <cit.> is probably the most popular logic to describe interaction in MAS. Formulas of ATL allow one to express statements about what agents (or groups of agents) can achieve. For example, ⟨⟨taxi⟩⟩G ¬fatality says that the autonomous cab can drive in such a way that nobody is ever killed, and ⟨⟨taxi, passg⟩⟩F destination expresses that the cab and the passenger have a joint strategy to arrive at the destination, no matter what any other agents do. Algorithms and tools for verification of such properties have been in development for over 20 years <cit.>. Unfortunately, model checking of agents with imperfect information is Δ_2^P- to PSPACE-complete for memoryless strategies <cit.> and EXPTIME-complete to undecidable for agents with perfect recall <cit.>; also, the problem does not admit simple incremental solutions <cit.>. This has been confirmed in experiments <cit.> and case studies <cit.>. Much of the complexity is due to the size of the model, and in particular to state space explosion <cit.>. To address the problem, we have extended our experimental tool STV (STrategic Verifier) <cit.> with support for model reductions. Two methods are used: (i) checking for equivalence of models according to a handcrafted relation of A-bisimulation <cit.>, and (ii) fully automated partial order reduction (POR) <cit.>. We also add a simple model specification language that allows the user to define their own inputs for verification, which was not available in the previous version <cit.>. The purpose of the extension is twofold. First, it should facilitate practical verification of MAS, as the theoretical and experimental results for POR and bisimulation-based reduction suggest <cit.>. No less importantly, it serves a pedagogical objective, as we put emphasis on visualisation of the reductions, so that the tool can also be used in the classroom to show how the reduction works. Finally, checking strategic bisimulation by hand is difficult and prone to errors; here, the user can both see the idea of the bisimulation and automatically check if it is indeed correct.

§ APPLICATION DOMAIN

STV is aimed at verification of agents' abilities – in particular, synthesis of memoryless imperfect information strategies that guarantee a given temporal goal. This includes both model checking of functionality requirements (understood as the ability of legitimate users to achieve their goals) and security properties defined by the inability of an intruder to compromise the system. A good example of a specific domain is formal verification of voting procedures and elections, with a number of classical requirements, such as election integrity, ballot secrecy, receipt-freeness, and voter-verifiability <cit.>. Some recent case studies <cit.> have shown that practical verification of such scenarios is still out of reach. Some tools do not support intuitive specification and validation of models; some others have limited property specification languages. In all cases, the state-space explosion is a major obstacle that prevents verification of anything but toy models.

§ SCENARIOS

The new version of STV provides a flexible specification language for asynchronous models.
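To make concrete what such a specification has to provide, the snippet below is a schematic Python rendering of one agent's local automaton and a memoryless strategy. It only illustrates the ingredients involved (local states, a protocol of available actions, local transitions); it is not STV's actual input syntax, and the agent, state, and action names are invented for the example.

# Schematic sketch (not STV's input format) of one agent in an asynchronous MAS.
train = {
    "states": ["away", "waiting", "tunnel"],
    "initial": "away",
    # protocol: actions available in each local state
    "protocol": {"away": ["approach"], "waiting": ["enter"], "tunnel": ["leave"]},
    # (state, action) -> next local state; shared actions would synchronize with other agents
    "transitions": {("away", "approach"): "waiting",
                    ("waiting", "enter"): "tunnel",
                    ("tunnel", "leave"): "away"},
}

# A memoryless (imperfect-information) strategy: one action per local state,
# chosen from the protocol.
strategy = {"away": "approach", "waiting": "enter", "tunnel": "leave"}
assert all(strategy[s] in train["protocol"][s] for s in train["states"])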
The following examples are included: Train-Gate-Controller (TGC) <cit.>, Two-Stage Voting <cit.>, and Asynchronous Simple Voting <cit.>. Some built-in synchronous models are also included, such as TianJi <cit.>, Castles <cit.>, Bridge Endplay <cit.>, and Drones <cit.>.

§ FORMAL BACKGROUND

Models. The main part of the input is given by an asynchronous multi-agent system (AMAS) <cit.>, i.e., a network of local automata (one automaton per agent). From the AMAS, the global model is generated, where nodes are tuples of local states. The knowledge/uncertainty of an agent is defined by the agent's local state. An example AMAS is shown in Figure <ref>(left). The global model generated from the AMAS is shown in Figure <ref>(right).

Strategies. A strategy is a conditional plan that specifies what the agent(s) are going to do in every possible situation. Here, we consider the case of imperfect information memoryless strategies, represented by functions from the agent's local states (formally, abstraction classes of its indistinguishability relations) to its available actions. The outcome of a strategy from state q consists of all the infinite paths starting from q and consistent with the strategy.

Formulas. Given a model M and a state q in the model, the formula ⟨⟨A⟩⟩φ holds in (M,q) iff there exists a strategy for A that makes φ true on all the outcome paths starting from any state indistinguishable from q. For more details, we refer the reader to <cit.>.

Model reduction and bisimulation. State space explosion is a major factor that prevents practical model checking <cit.>. A possible way out is model reduction, i.e., using a smaller equivalent model for verification instead of the original one. A suitable notion of A-bisimulation has been proposed in <cit.>. Unfortunately, synthesizing a reduced A-bisimilar model is at least as hard as the verification itself <cit.>. However, checking if a handcrafted relation is an A-bisimulation can be done in polynomial time, which offers valuable help especially for larger models.

Partial-order reduction. A fully automated model reduction is possible if the state space explosion is due to asynchronous interleaving of agents' actions. The method is called partial order reduction, and has recently been extended to verification of strategic abilities under imperfect information <cit.>. The reduced model for the TGC scenario is highlighted in blue color in Figure <ref>(right).

§ TECHNOLOGY

STV does explicit-state model checking. That is, the global states and transitions of the model are represented explicitly in the memory of the verification process. The tool includes the following new functionalities.

User-defined input. The user can load and parse the input specification from a text file that defines: the local automata in the AMAS, the formula to be verified, the propositional variables, persistent propositions, agent names relevant for POR, and/or the mapping for bisimulation checking. Based on that, the global model is generated and displayed in the GUI and can be verified by means of fixpoint approximation <cit.> or dominance-based strategy search <cit.>. When using partial-order reduction, the reduced model is also displayed and highlighted in the full model.

Partial-order reduction. The fully automated reduction method is based on POR <cit.> and implemented according to the algorithms proposed in <cit.>.
The reduced model is generated based on the AMAS specification, together with two additional parameters: the coalition and the set of propositional variables.

Bisimulation checking. The tool allows the user to check whether two models are A-bisimilar for a given coalition A <cit.>. Apart from the specification of the two models, the bisimulation relation between the corresponding states must also be provided, along with the selected coalition.

§ USAGE

The current version of STV is available for download at https://github.com/blackbat13/stv/releases/tag/v0.2-alpha, and allows the user to:

* Select and display a model specification from a text file,
* Generate and display the explicit state-transition graph,
* Generate and display the reduced model using POR,
* Select specifications of two models and a relation from text files, and check if the models are A-bisimilar wrt the relation,
* Verify the selected full or reduced model by means of fixpoint approximation or dominance-based verification (DominoDFS),
* Alternatively, run the verification for a predefined parameterized model and formula,
* Display the verification result, including the relevant truth values and the winning strategy.

§ CONCLUSIONS

Model checking strategic abilities under imperfect information is notoriously hard. STV addresses the state explosion problem with an implementation of partial-order reduction and bisimulation checking. This should not only facilitate verification, but also make the techniques easier to use and understand.

Acknowledgements. The authors acknowledge the support of the Luxembourg National Research Fund (FNR) and the National Centre for Research and Development (NCBiR), Poland, under the CORE/PolLux project STV (POLLUX-VII/1/2019).

[Alur, de Alfaro, Grossu, Henzinger, Kang, Kirsch, Majumdar, Mang, and WangAlur et al2001] Alur01jmocha authorpersonR. Alur, personL. de Alfaro, personR. Grossu, personT.A. Henzinger, personM. Kang, personC.M. Kirsch, personR. Majumdar, personF.Y.C. Mang, and personB.-Y. Wang. year2001. jMocha: A Model-Checking Tool that Exploits Design Structure. In booktitleProceedings of International Conference on Software Engineering (ICSE). publisherIEEE Computer Society Press, pages835–836. [Alur, Henzinger, Mang, Qadeer, Rajamani, and TasiranAlur et al1998] Alur98mocha-cav authorpersonR. Alur, personT. Henzinger, personF. Mang, personS. Qadeer, personS. Rajamani, and personS. Tasiran. year1998. MOCHA: Modularity in Model Checking. In booktitleProceedings of Computer Aided Verification (CAV) (seriesLecture Notes in Computer Science, Vol. volume1427). publisherSpringer, pages521–525. [Alur, Henzinger, and KupfermanAlur et al1997] Alur97ATL authorpersonR. Alur, personT. A. Henzinger, and personO. Kupferman. year1997. Alternating-Time Temporal Logic. In booktitleProceedings of the 38th Annual Symposium on Foundations of Computer Science (FOCS). publisherIEEE Computer Society Press, pages100–109. [Alur, Henzinger, and KupfermanAlur et al2002] Alur02ATL authorpersonR. Alur, personT. A. Henzinger, and personO. Kupferman. year2002. Alternating-Time Temporal Logic. journalJ. ACMvolume49 (year2002), pages672–713. <https://doi.org/10.1145/585265.585270> [Baier and KatoenBaier and Katoen2008] Baier08mcheck authorpersonC. Baier and personJ.-P. Katoen. year2008. booktitlePrinciples of Model Checking. publisherMIT Press.
978-0-262-02649-9 [Belardinelli, Condurache, Dima, Jamroga, and KnapikBelardinelli et al2021] Jamroga21Bisimulations authorpersonFrancesco Belardinelli, personRodica Condurache, personCatalin Dima, personWojciech Jamroga, and personMichal Knapik. year2021. Bisimulations for verifying strategic abilities with an application to the ThreeBallot voting protocol. journalInformation and Computation volume276 (year2021), pages104552. <https://doi.org/10.1016/j.ic.2020.104552> [Belardinelli, Lomuscio, Murano, and RubinBelardinelli et al2017a] Belardinelli17broadcasting authorpersonF. Belardinelli, personA. Lomuscio, personA. Murano, and personS. Rubin. year2017a. Verification of Broadcasting Multi-Agent Systems against an Epistemic Strategy Logic. In booktitleProceedings of IJCAI. pages91–97. [Belardinelli, Lomuscio, Murano, and RubinBelardinelli et al2017b] Belardinelli17publicActions authorpersonF. Belardinelli, personA. Lomuscio, personA. Murano, and personS. Rubin. year2017b. Verification of Multi-agent Systems with Imperfect Information and Public Actions. In booktitleProceedings of AAMAS. pages1268–1276. [Bulling, Dix, and JamrogaBulling et al2010] Bulling10verification authorpersonN. Bulling, personJ. Dix, and personW. Jamroga. year2010. Model Checking Logics of Strategic Ability: Complexity. In booktitleSpecification and Verification of Multi-Agent Systems, editorpersonM. Dastani, personK. Hindriks, and personJ.-J. Meyer (Eds.). publisherSpringer, pages125–159. [Bulling and JamrogaBulling and Jamroga2011] Bulling11mu-ijcai authorpersonN. Bulling and personW. Jamroga. year2011. Alternating Epistemic Mu-Calculus. In booktitleProceedings of IJCAI-11. pages109–114. [Busard, Pecheur, Qu, and RaimondiBusard et al2014] Busard14improving authorpersonS. Busard, personC. Pecheur, personH. Qu, and personF. Raimondi. year2014. Improving the Model Checking of Strategies under Partial Observability and Fairness Constraints. In booktitleFormal Methods and Software Engineering. seriesLecture Notes in Computer Science, Vol. volume8829. publisherSpringer, pages27–42. 978-3-319-11736-2 <https://doi.org/10.1007/978-3-319-11737-9_3> [Busard, Pecheur, Qu, and RaimondiBusard et al2015] Busard15reasoning authorpersonS. Busard, personC. Pecheur, personH. Qu, and personF. Raimondi. year2015. Reasoning about memoryless strategies under partial observability and unconditional fairness constraints. journalInformation and Computation volume242 (year2015), pages128–156. <https://doi.org/10.1016/j.ic.2015.03.014> [Cermak, Lomuscio, Mogavero, and MuranoCermak et al2014] Cermak14mcheckSL authorpersonP. Cermak, personA. Lomuscio, personF. Mogavero, and personA. Murano. year2014. MCMAS-SLK: A Model Checker for the Verification of Strategy Logic Specifications. In booktitleProc. of Computer Aided Verification (CAV) (seriesLecture Notes in Computer Science, Vol. volume8559). publisherSpringer, pages525–532. [Cermák, Lomuscio, and MuranoCermák et al2015] Cermak15mcmas-sl-one-goal authorpersonPetr Cermák, personAlessio Lomuscio, and personAniello Murano. year2015. Verifying and Synthesising Multi-Agent Systems against One-Goal Strategy Logic Specifications. In booktitleProceedings of AAAI. pages2038–2044. [Chen, Forejt, Kwiatkowska, Parker, and SimaitisChen et al2013] Chen13prismgames authorpersonT. Chen, personV. Forejt, personM. Kwiatkowska, personD. Parker, and personA. Simaitis. year2013. PRISM-games: A Model Checker for Stochastic Multi-Player Games. 
In booktitleProceedings of Tools and Algorithms for Construction and Analysis of Systems (TACAS) (seriesLecture Notes in Computer Science, Vol. volume7795). publisherSpringer, pages185–191. [Dima, Maubert, and PinchinatDima et al2014] Dima14mucalc authorpersonC. Dima, personB. Maubert, and personS. Pinchinat. year2014. The Expressive Power of Epistemic μ-Calculus. journalCoRRvolumeabs/1407.5166 (year2014). [Dima, Maubert, and PinchinatDima et al2015] Dima15fallmu authorpersonC. Dima, personB. Maubert, and personS. Pinchinat. year2015. Relating Paths in Transition Systems: The Fall of the Modal Mu-Calculus. In booktitleProceedings of Mathematical Foundations of Computer Science (MFCS) (seriesLecture Notes in Computer Science, Vol. volume9234). publisherSpringer, pages179–191. <https://doi.org/10.1007/978-3-662-48057-1_14> [Dima and TipleaDima and Tiplea2011] Dima11undecidable authorpersonC. Dima and personF.L. Tiplea. year2011. Model-checking ATL under Imperfect Information and Perfect Recall Semantics is Undecidable. journalCoRRvolumeabs/1102.4225 (year2011). [Fagin, Halpern, Moses, and VardiFagin et al1995] Fagin95knowledge authorpersonR. Fagin, personJ. Y. Halpern, personY. Moses, and personM. Y. Vardi. year1995. booktitleReasoning about Knowledge. publisherMIT Press. [Guelev, Dima, and EneaGuelev et al2011] Guelev11atl-distrknowldge authorpersonD.P. Guelev, personC. Dima, and personC. Enea. year2011. An alternating-time temporal logic with knowledge, perfect recall and past: axiomatisation and model-checking. journalJournal of Applied Non-Classical Logics volume21, number1 (year2011), pages93–131. [Huang and van der MeydenHuang and van der Meyden2014] Huang14symbolic-epist authorpersonX. Huang and personR. van der Meyden. year2014. Symbolic Model Checking Epistemic Strategy Logic. In booktitleProceedings of AAAI Conference on Artificial Intelligence. pages1426–1432. [Jamroga and DixJamroga and Dix2006] Jamroga06atlir-eumas authorpersonW. Jamroga and personJ. Dix. year2006. Model Checking ATL_ir is Indeed Δ_2^P-complete. In booktitleProceedings of EUMAS (seriesCEUR Workshop Proceedings, Vol. volume223). [Jamroga, Kim, Kurpiewski, and RyanJamroga et al2020a] Jamroga20Pret-Uppaal authorpersonWojciech Jamroga, personYan Kim, personDamian Kurpiewski, and personPeter Y. A. Ryan. year2020a. Towards Model Checking of Voting Protocols in Uppaal. In booktitleProceedings of E-Vote-ID (seriesLecture Notes in Computer Science, Vol. volume12455). publisherSpringer, pages129–146. <https://doi.org/10.1007/978-3-030-60347-2_9> [Jamroga, Knapik, and KurpiewskiJamroga et al2017] Jamroga17fixpApprox authorpersonW. Jamroga, personM. Knapik, and personD. Kurpiewski. year2017. Fixpoint Approximation of Strategic Abilities under Imperfect Information. In booktitleProceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). publisherIFAAMAS, pages1241–1249. [Jamroga, Knapik, and KurpiewskiJamroga et al2018a] Jamroga18Selene authorpersonW. Jamroga, personM. Knapik, and personD. Kurpiewski. year2018a. Model Checking the SELENE E-Voting Protocol in Multi-Agent Logics. In booktitleProceedings of the 3rd International Joint Conference on Electronic Voting (E-VOTE-ID) (seriesLecture Notes in Computer Science, Vol. volume11143). publisherSpringer, pages100–116. [Jamroga, Knapik, Kurpiewski, and MikulskiJamroga et al2019] Jamroga19fixpApprox-aij authorpersonW. Jamroga, personM. Knapik, personD. Kurpiewski, and personŁ. Mikulski. year2019. 
Approximate Verification of Strategic Abilities under Imperfect Information. journalArtificial Intelligence volume277 (year2019). [Jamroga, Konikowska, Penczek, and KurpiewskiJamroga et al2020b] Jamroga20mvATL authorpersonWojciech Jamroga, personBeata Konikowska, personWojciech Penczek, and personDamian Kurpiewski. year2020b. Multi-valued Verification of Strategic Ability. journalFundamenta Informaticae volume175, number1-4 (year2020), pages207–251. <https://doi.org/10.3233/FI-2020-1955> [Jamroga, Kurpiewski, and MalvoneJamroga et al2020c] Jamroga20natvoting authorpersonWojciech Jamroga, personDamian Kurpiewski, and personVadim Malvone. year2020c. Natural Strategic Abilities in Voting Protocols. journalCoRRvolumeabs/2007.12424 (year2020). [arxiv]2007.12424 <https://arxiv.org/abs/2007.12424> [Jamroga, Penczek, Dembiński, and MazurkiewiczJamroga et al2018b] Jamroga18por authorpersonW. Jamroga, personW. Penczek, personP. Dembiński, and personA. Mazurkiewicz. year2018b. Towards Partial Order Reductions for Strategic Ability. In booktitleProceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS). publisherIFAAMAS, pages156–165. [Jamroga, Penczek, and SidorukJamroga et al2020d] Jamroga20paradoxes-tr authorpersonW. Jamroga, personW. Penczek, and personT. Sidoruk. year2020d. Strategic Abilities of Asynchronous Agents: Semantic Paradoxes and How to Tame Them. journalCoRRvolumeabs/2003.03867 (year2020). [arxiv]2003.03867 [cs.LO] <https://arxiv.org/abs/2003.03867> [Jamroga, Penczek, Sidoruk, Dembiński, and MazurkiewiczJamroga et al2020e] Jamroga20POR-JAIR authorpersonW. Jamroga, personW. Penczek, personT. Sidoruk, personP. Dembiński, and personA. Mazurkiewicz. year2020e. Towards Partial Order Reductions for Strategic Ability. journalJournal of Artificial Intelligence Research volume68 (year2020), pages817–850. <https://doi.org/10.1613/jair.1.11936> [Kacprzak and PenczekKacprzak and Penczek2004] Kacprzak04umc-atl authorpersonM. Kacprzak and personW. Penczek. year2004. Unbounded Model Checking for Alternating-Time Temporal Logic. In booktitleProceedings of International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS). publisherIEEE Computer Society, pages646–653. <https://doi.org/10.1109/AAMAS.2004.10089> [Kurpiewski, Jamroga, and KnapikKurpiewski et al2019a] Kurpiewski19stv-demo authorpersonD. Kurpiewski, personW. Jamroga, and personM. Knapik. year2019a. STV: Model Checking for Strategies under Imperfect Information. In booktitleProceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems AAMAS 2019. publisherIFAAMAS, pages2372–2374. [Kurpiewski, Knapik, and JamrogaKurpiewski et al2019b] Kurpiewski19domination authorpersonDamian Kurpiewski, personMichał Knapik, and personWojciech Jamroga. year2019b. On Domination and Control in Strategic Ability. In booktitleProceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems AAMAS 2019. publisherIFAAMAS, pages197–205. [Lomuscio, Penczek, and QuLomuscio et al2010a] lomuscio10partialOrder authorpersonA. Lomuscio, personW. Penczek, and personH. Qu. year2010a. Partial Order Reductions for Model Checking Temporal-epistemic Logics over Interleaved Multi-agent Systems. journalFundamenta Informaticae volume101, number1-2 (year2010), pages71–90. <https://doi.org/10.3233/FI-2010-276> [Lomuscio, Penczek, and QuLomuscio et al2010b] LomuscioPQ10 authorpersonAlessio Lomuscio, personWojciech Penczek, and personHongyang Qu. year2010b. 
Partial order reductions for model checking temporal epistemic logics over interleaved multi-agent systems. In booktitle9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), Toronto, Canada, May 10-14, 2010, Volume 1-3. publisherIFAAMAS, pages659–666. [Lomuscio, Qu, and RaimondiLomuscio et al2017] Lomuscio17mcmas authorpersonA. Lomuscio, personH. Qu, and personF. Raimondi. year2017. MCMAS: An Open-Source Model Checker for the Verification of Multi-Agent Systems. journalInternational Journal on Software Tools for Technology Transfer volume19, number1 (year2017), pages9–30. <https://doi.org/10.1007/s10009-015-0378-x> [Lomuscio and RaimondiLomuscio and Raimondi2006] Lomuscio06mcmas authorpersonA. Lomuscio and personF. Raimondi. year2006. MCMAS : A Model Checker for Multi-Agent Systems. In booktitleProceedings of Tools and Algorithms for Construction and Analysis of Systems (TACAS) (seriesLecture Notes in Computer Science, Vol. volume4314). publisherSpringer, pages450–454. [PeledPeled1993] Peled93representatives authorpersonDoron A. Peled. year1993. All from One, One for All: on Model Checking Using Representatives. In booktitleProceedings of CAV (seriesLecture Notes in Computer Science, Vol. volume697), editorpersonCostas Courcoubetis (Ed.). publisherSpringer, pages409–423. <https://doi.org/10.1007/3-540-56922-7_34> [Pilecki, Bednarczyk, and JamrogaPilecki et al2014] Pilecki14synthesis authorpersonJ. Pilecki, personM.A. Bednarczyk, and personW. Jamroga. year2014. Synthesis and Verification of Uniform Strategies for Multi-Agent Systems. In booktitleProceedings of CLIMA XV (seriesLecture Notes in Computer Science, Vol. volume8624). publisherSpringer, pages166–182. [Pilecki, Bednarczyk, and JamrogaPilecki et al2017] Pilecki17smc authorpersonJ. Pilecki, personM.A. Bednarczyk, and personW. Jamroga. year2017. SMC: Synthesis of Uniform Strategies and Verification of Strategic Ability for Multi-Agent Systems. journalJournal of Logic and Computation volume27, number7 (year2017), pages1871–1895. <https://doi.org/10.1093/logcom/exw032> [RyanRyan2010] Ryan10atemyvote authorpersonP.Y.A. Ryan. year2010. The Computer Ate My Vote. In booktitleFormal Methods: State of the Art and New Directions. publisherSpringer, pages147–184. [SchobbensSchobbens2004] Schobbens04ATL authorpersonP. Y. Schobbens. year2004. Alternating-Time Logic with Imperfect Recall. journalElectronic Notes in Theoretical Computer Science volume85, number2 (year2004), pages82–93. [Tabatabaei, Jamroga, and RyanTabatabaei et al2016] Tabatabaei16expressing authorpersonM. Tabatabaei, personW. Jamroga, and personPeter Y. A. Ryan. year2016. Expressing Receipt-Freeness and Coercion-Resistance in Logics of Strategic Ability: Preliminary Attempt. In booktitleProceedings of the 1st International Workshop on AI for Privacy and Security, PrAISe@ECAI 2016. publisherACM, pages1:1–1:8. <https://doi.org/10.1145/2970030.2970039> [van der Hoek and Wooldridgevan der Hoek and Wooldridge2002] Hoek02ATEL authorpersonW. van der Hoek and personM. Wooldridge. year2002. Tractable Multiagent Planning for Epistemic Goals. In booktitleProceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-02), editorpersonC. Castelfranchi and personW.L. Johnson (Eds.). publisherACM Press, New York, pages1167–1174.
http://arxiv.org/abs/2310.18418v1
{ "authors": [ "Damian Kurpiewski", "Witold Pazderski", "Wojciech Jamroga", "Yan Kim" ], "categories": [ "cs.LO", "cs.MA" ], "primary_category": "cs.LO", "published": "20231027182248", "title": "STV+Reductions: Towards Practical Verification of Strategic Ability Using Model Reductions" }
Department of Physics, University of Washington, Seattle, WA 98195, U.S.A. Department of Physics, University of Washington, Seattle, WA 98195, U.S.A. Ultralight dark photons are compelling dark matter candidates, but their allowed kinetic mixing with the Standard Model photon is severely constrained by requiring that the dark photons do not collapse into a cosmic string network in the early Universe. Direct detection in minimal production scenarios for dark photon dark matter is strongly limited, if not entirely excluded; discovery of sub-meV dark photon dark matter would therefore point to a nonminimal dark sector. We describe a model that evades such constraints, capable of producing cold dark photons in any parameter space accessible to future direct detection experiments. The associated production dynamics yield additional signatures in cosmology and small-scale structure, allowing for possible positive identification of this particular class of production mechanisms.

Detectable, defect-free dark photon dark matter
Zachary J. Weiner
January 14, 2024
===============================================

Evidence for cold dark matter abounds in astrophysical and cosmological observations <cit.>, but not for its fundamental nature—the mass and spin of its constituents or its interactions with the visible sector. Dark photons are among the best-motivated candidates for new light degrees of freedom and are common features of grand unified theories and string theory <cit.>. They may constitute all the dark matter in scenarios ranging from minimal gravitational production during inflation <cit.> to nonthermal mechanisms involving additional new degrees of freedom <cit.>. At low energies, a dark photon can interact with the Standard Model through kinetic mixing with the ordinary photon, yielding signatures in cosmology <cit.>, astrophysics <cit.>, and the laboratory <cit.>; numerous dark matter haloscopes are poised to probe a vast space of unexplored dark photon masses and kinetic mixing <cit.>. Such theoretical and observational promise demands understanding whether the parameter space within experimental reach also allows for consistent dark photon dark matter production. Recent work demonstrated a stringent upper bound on the kinetic mixing that allows for viable dark photon dark matter <cit.>: the dark photon backreacts on the Higgs responsible for its mass and, with large enough couplings at large enough energy density, can restore the dark U(1)_D gauge symmetry. The associated Goldstone boson winds about sites of symmetry restoration, seeding string vortices that deplete the energy in the cold, coherent dark electromagnetic fields. Such a defect network dilutes like radiation and cannot be the dark matter. The dark gauge coupling g_D, which controls the strength of backreaction of dark gauge bosons onto the Higgs, is a free parameter in all production mechanisms and can simply be tuned small enough to avoid defect formation. But the dark photon's kinetic mixing with the Standard Model photon ε is generated by heavy fermions charged under both U(1)_Y and U(1)_D and is therefore also proportional to the dark gauge coupling g_D <cit.>.
<ref> shows that, for known production mechanisms, the prospects for probing the kinetic mixing of dark photon dark matter are severely limited, if not entirely absent. Direct detection in most parameter space would therefore point to a nonminimal dark sector. In this letter we describe an extension of the Abelian-Higgs model that realizes cold dark photon dark matter with kinetic mixing detectable by any planned or proposed laboratory experiment. We discuss additional signatures in cosmology and small-scale structure that could corroborate the nonminimality of the model. A companion article <cit.> explores further generalizations thereof, discusses the implications of defect formation on existing dark photon production mechanisms in detail, and enumerates more ad hoc means to generate a hierarchy between the kinetic mixing and the dark gauge coupling. The Abelian-Higgs theory is described by the Lagrangian[ We use natural units in which ħ = c = 1 and the reduced Planck mass M_Pl = 1/√(8π G), fix a cosmic-time Friedmann-Lemaître-Robertson-Walker (FLRW) metric d s^2 = d t^2 - a(t)^2 δ_i j d x^i d x^j with a(t) the scale factor, and employ the Einstein summation convention for spacetime indices. Dots denote derivatives with respect to cosmic time t, and the Hubble rate is H ≡ ȧ / a. ] ℒ_AH = - 1/4 F_μν F^μν + 1/2 D_μΦ ( D^μΦ)^∗ - V_Φ(Φ), where A_μ is the dark photon, Φ is the dark Higgs field, D_μ = ∂_μ - i g_D A_μ is the gauge covariant derivative, and the Higgs potential V_Φ is the usual symmetry breaking potential, V_Φ(Φ) = λ/4 ( |Φ|^2 - v^2 )^2. In the broken phase, the dark photon acquires a mass m_γ' = g_D v and contributes g_D^2 |Φ|^2 A_μ A^μ / 2 to the Higgs's effective potential. If this contribution (which coincides with the dark photon's energy density ρ_γ' when it is nonrelativistic) exceeds λ v^4, then the dark photon backreacts strongly onto the Higgs and seeds topological defects. If dark photon dark matter is produced at a Hubble rate H_⋆, evading defect formation requires[ This threshold assumes the dark photon is composed of nonrelativistic modes; we discuss its generalization in Ref. <cit.>. ] g_D ≲ 10^-14 ( λ^1/4 / … ) ( H_⋆ / … )^-3/8. In the minimal setup, kinetic mixing is generated by loops of a few heavy fermions with 𝒪(1) charge <cit.> with ε ∼ g_D e / 16 π^2, so this constraint strongly limits direct detection prospects, as illustrated in <ref>. The general considerations leading to <ref> motivate two possible solutions: either modulate the parameters of the Abelian-Higgs theory to raise the threshold for defect formation or delay production as late as possible, such that the dark photon never has enough energy density to exceed the defect formation threshold. These possibilities may be realized by extending the Abelian-Higgs theory <ref> with couplings to a singlet scalar ϕ as ℒ = - W(ϕ)/4 F_μν F^μν + X(ϕ)/2 D_μΦ ( D^μΦ)^∗ + Y(ϕ) V_Φ(Φ) + 1/2 ∂_μϕ ∂^μϕ - V(ϕ). We discuss concrete choices of the coupling functions W, X, and Y below. The dark Higgs and photon are made canonical via the rescalings Ψ = √(X(ϕ)) Φ and 𝒜_μ = √(W(ϕ)) A_μ. Written in terms of the canonical fields, the Higgs's potential is Y(ϕ) V_Φ(Φ) = λ Y(ϕ)/4 X(ϕ)^2 ( |Ψ|^2 - X(ϕ) v^2 )^2, and its covariant derivative is √(X(ϕ)) D_μΦ = ∂_μΨ - i g_D 𝒜_μΨ/√(W(ϕ)) - ∂_μ√(X(ϕ))/√(X(ϕ)) Ψ. The form of <ref> motivates absorbing the ϕ dependence of the theory into its fundamental parameters as g_D(ϕ) = g_D / √(W(ϕ)), λ(ϕ) ≡ λ Y(ϕ) / X(ϕ)^2, v(ϕ) ≡ v √(X(ϕ)).
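To make the reparametrization explicit, the minimal Python sketch below evaluates the effective coupling, quartic, vev, and dark photon mass for given coupling functions. The numerical inputs are arbitrary illustrative values (not ones advocated in the text), and the exponential form of W anticipates the choice adopted later in the letter; with X = Y = 1 the mass m_γ'(ϕ) = g_D(ϕ) v(ϕ) is strongly suppressed whenever W ≫ 1.

import numpy as np

# Effective Abelian-Higgs parameters in the presence of the singlet scalar phi,
# following g_D(phi) = g_D / sqrt(W), lambda(phi) = lambda * Y / X**2, v(phi) = v * sqrt(X).
def effective_parameters(phi_over_f, g_D, lam, v, beta=10.0):
    W = 1.0 + np.exp(-beta * phi_over_f)   # kinetic function used later in the text
    X = 1.0                                 # the text sets X = Y = 1 for the transverse modes
    Y = 1.0
    g_eff = g_D / np.sqrt(W)
    lam_eff = lam * Y / X**2
    v_eff = v * np.sqrt(X)
    m_eff = g_eff * v_eff                   # dark photon mass m_gamma'(phi)
    return g_eff, lam_eff, v_eff, m_eff

# Early times (phi = phi_0 < 0): W is huge and the mass is exponentially suppressed.
for phi_over_f in (-20.0, -5.0, 0.0, 5.0):
    _, _, _, m = effective_parameters(phi_over_f, g_D=1e-10, lam=0.1, v=1.0)
    print(f"phi/f = {phi_over_f:+5.1f}:  m(phi)/m_today = {m / (1e-10 * 1.0):.3e}")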
If the scalar is homogeneous, i.e., ϕ(t, 𝐱) = ϕ̅(t), then its cosmological evolution permits the theory's parameters to vary over cosmological history. The threshold for defect formation then depends on the scalar as λ(ϕ) v(ϕ)^4 = λ v^4 Y(ϕ). One might be tempted to arrange for Y(ϕ) ≫ 1 to simply raise the threshold for defect formation as high as needed. This solution is no different than simply taking large λ in the bare Abelian-Higgs Lagrangian, and it requires the Higgs to be a composite degree of freedom because fundamental Higgs scattering violates perturbative unitarity for λ ≳ 4 π. Such issues aside, a still more appealing solution would utilize the scalar field's dynamics to not only prevent defect formation but also generate the dark photon relic abundance. We turn to dynamical mechanisms that evade defect formation by delaying production, illustrated by <ref>. In general, the mass sets a kinematic barrier for particle production; resonant production with rolling scalars effectively requires the scalar's mass m_ϕ ≳ m_γ' for efficient dark photon production. As the scalar starts rolling when H ∼ m_ϕ ≳ m_γ', in these scenarios production typically occurs no later than H ∼ m_γ'. On the other hand, scalar couplings offer a means to suppress the dark photon's mass in the early Universe: via <ref>, m_γ'(ϕ) ≡ m_γ' √(X(ϕ) / W(ϕ)). In such a scenario, X(ϕ) and/or W(ϕ) must evolve so that the theory's parameters take on their bare values at the present day. However, the scalar couplings generate derivative interactions that cannot necessarily be neglected as ϕ̅ evolves. This feature is precisely what enables production of a relic abundance of dark photons via tachyonic resonance, familiar from other dark photon models <cit.>. In the presence of a homogeneous scalar, the linearized equation of motion for the transverse polarizations of the dark photon 𝒜_± is 0 = 𝒜̈_± + H 𝒜̇_± + ω_±^2 𝒜_±, with an effective squared frequency ω_±^2 = k^2/a^2 + m_γ'^2 X̅/W̅ - (H/2) Ẇ̅/W̅ - ∂_t^2 √(W̅)/√(W̅) [with the shorthand W̅ = W(ϕ̅) and X̅ = X(ϕ̅)]. Whenever ω_±^2 is negative (i.e., due to the coupling terms outweighing the mass and momentum contributions), the transverse dark photon modes grow exponentially. Time derivatives of W̅ are necessarily negative in <ref> as W̅ decreases from the large values that suppress m_γ' at early times. Since the transverse modes are derivatively coupled only to the kinetic function W(ϕ), we set X(ϕ) = 1 and Y(ϕ) = 1 from here on out.[ The longitudinal mode has derivative couplings via X(ϕ), though efficient production requires that X decreases with time; we discuss this and other possibilities in Ref. <cit.>. ] To illustrate the production mechanism, we present a concrete example and compute the dark photon relic abundance, present the expanded parameter space accessible to direct detection experiments, and discuss other constraints on the model. We consider a so-called runaway potential <cit.> for the scalar where V(ϕ) = M^2 f^2 e^- ϕ / f ≡ m_ϕ^2 f^2 e^- ( ϕ - ϕ_0 ) / f. The latter equality defines m_ϕ as the scalar's effective mass at its homogeneous initial condition ϕ̅ = ϕ_0; the scalar remains frozen until H ≈ m_ϕ. An approximate solution to the scalar's homogeneous equation of motion, 0 = ϕ̈̅ + 3 H ϕ̇̅ + V'(ϕ̅), is ϕ̅(t) = ϕ_0 + f ln[ 1 + (m_ϕ t)^2 ]. Full solutions exhibit moderate oscillations in ln t about <ref>.
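As a cross-check of this approximate background solution, one can integrate the homogeneous scalar equation of motion numerically in a radiation-dominated background (H = 1/2t) and compare with ϕ̅(t) ≈ ϕ_0 + f ln[1 + (m_ϕ t)²]. The sketch below is illustrative only: it works in units with m_ϕ = f = 1 and picks an arbitrary initial condition ϕ_0, rather than reproducing any figure from the text.

import numpy as np
from scipy.integrate import solve_ivp

# Runaway potential V(phi) = m_phi^2 f^2 exp[-(phi - phi_0)/f]; units m_phi = f = 1.
m_phi, f, phi_0 = 1.0, 1.0, -10.0

def rhs(t, y):
    phi, dphi = y
    H = 1.0 / (2.0 * t)                                    # radiation domination
    dV = -(m_phi**2 * f) * np.exp(-(phi - phi_0) / f)      # V'(phi)
    return [dphi, -3.0 * H * dphi - dV]

# Start deep in the frozen regime (H >> m_phi) with the field at rest.
sol = solve_ivp(rhs, (1e-3, 1e3), [phi_0, 0.0], rtol=1e-8, atol=1e-10, dense_output=True)

for t in (0.1, 1.0, 10.0, 100.0):
    phi_num = sol.sol(t)[0]
    phi_approx = phi_0 + f * np.log(1.0 + (m_phi * t) ** 2)
    print(f"t = {t:6.1f}:  numerical {phi_num: .3f}   approximate {phi_approx: .3f}")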
Without loss of generality, we take ϕ_0 < 0 and assume a coupling of the formW(ϕ)= 1 + e^- βϕ / f,constructed such that W ≈ 1 after ϕ crosses zero, which occurs at m_ϕ t_d≈ e^- ϕ_0 / 2 f when the Hubble rate is H_d = 1 / 2 t_d.To compute the abundance of dark photons sourced by the rolling scalar, we may first approximate the solution to <ref> to ([k / m_ϕ]^2) as_±(t, k)/(0,k)≈[ 1 + (m_ϕ t)^2 ]^-β / 2[ 1 + (k / m_ϕ)^2 δ(t) ].<Ref> solves the k → 0 limit of <ref> when δ A = 0; plugging <ref> back into <ref> yields a leading-k correction ofδ(t)= ∫_0^m_ϕ t x (1 + x^2)^β_2F_1(1/4, β, 5/4, -x^2).We set a = 1 when H = m_ϕ. Taking x ≫ 1, the growth rate of the small-k modes scales as _±∝ (k / m_ϕ)^2 (m_ϕ t)^β + 1/2. Amplification continues roughly until any other term in <ref> becomes important, i.e., att_⋆(k)≈min{1/2 H_d(√(β^2 + β/2) 2H_d/)^1/β+1, 2β^2 + β/2 k^2 / m_ϕ}.Modes with larger wave number stop growing earlier, and the power spectrum of 𝒜_± peaks neark_⋆/m_ϕ≈ 2^β/2(β + 1)/√(H_d m_ϕ)H_d/√(ββ + 1/2)^2β + 1/2β + 2.As ≫ H_d in general, all amplified modes are highly nonrelativistic once W(ϕ) → 1. Provided β is greater than a few, the time of production <ref> reduces to t_⋆≈ 1/2 H_d. After this time, the mode amplitude of the nonrelativistic modes decreases as the mass continues to increase ∝ W^1/4 per the WKB approximation. Integrating the spectrum of 𝒜 over wave number up to k_⋆ to estimate ρ_(t_⋆) ≈^2 ⟨𝒜^2 ⟩ at production,ρ_(t_⋆)/H_d^4 ≡𝒩_β( m_ϕ/H_d)^2 β - 1( H_d/)^β - 5/β + 1,where the β-dependent coefficient is𝒩_β =(2 β^2 + β)^3 (3 β +1)/2 (β +1)Γ(5/4)^2 Γ(β - 1/4)^2 / 3 π^2 (1 + 4 β)^2 2^2 β^2 - β/β + 1 + 7/2Γ(β)^2 .This result reproduces full numerical solutions within one to two orders of magnitude for - ϕ_0 / f ≳ 5 [below which the errors are dominated by the approximation made in <ref>]. More detailed exposition is provided in Ref. <cit.>. Note that the success of the mechanism is not unique to these particular choices of W and V—the salient features are a substantial mass suppression at early times [so that (ϕ_0) ≪ m_ϕ] and a coupling function that evolves faster with ϕ than the potential (so that tachyonic resonance is efficient). We discuss other possibilities in Ref. <cit.>.The relic abundance of dark photon dark matter isΩ_γ'/Ω_DM = 𝒩_β( m_ϕ/H_d)^2 β - 1( H_d/)^β - 5/β + 1H_d^2/^2√(H_d/H_eq)where H_eq≈ 2.2 × 10^-28 eV is the Hubble rate at matter-radiation equality. Production may occur as late as desired by choosing m_ϕ or ϕ_0 to set t_d. To produce dark photons heavier than the scalar, their initial mass (ϕ_0) ≈ e^βϕ_0 / 2 must be smaller than m_ϕ, requiring βϕ_0 to be sufficiently large (and negative). Achieving the correct relic abundance turns out to place a more stringent requirement of - βϕ_0 / f in the range of 150 - 250, with a corresponding mass suppression more than enough to produce dark photons with any mass of interest. For cosmology to proceed as observed, the dark matter must exist by, say, a redshift z ∼ 10^6 or 10^7, making decoupling at H_d = 10^-22 eV a useful benchmark. As evinced by <ref>, such late production is sufficient for viable dark photon dark matter in reach of any future experiment. Other resonant production mechanisms (via oscillating pseudoscalars <cit.> or scalars <cit.>) require the system to reach a regime where the dark photon backreacts on the scalar sourcing it—otherwise, the dark matter would mostly comprise scalars rather than vectors. 
These nonlinear dynamics can only be understood with 3D simulations, and the energy exchange that occurs at backreaction often results in a rough equipartition between the dark photon and scalar. An attractive feature of scenarios involving runaway scalars is that they become energetically subdominant without relying on nonlinear dynamics. The runaway scalar solution uniquely tracks the background such that its relative abundance is Ω_ϕ(t) ≡ ρ_ϕ(t) / (3 H(t)^2 M_Pl^2) ≈ ( 2 f / M_Pl )^2 in both the radiation- and matter-dominated epochs <cit.>. Comparing to the abundance of the dark photons Ω_γ' = ρ_γ' / (3 H^2 M_Pl^2) provides a reasonable proxy to assess whether backreaction is important. At production (when the two decouple), the dark photons have an abundance Ω_γ'(t_⋆) ∼ [ H_eq / H_⋆ ]^3/2 ∼ 10^-10 [ H_⋆ / 10^-21 eV ]^-3/2. The scalar's decay constant therefore need not be anywhere near M_Pl to arrange for its energy density to lie well above the dark photon's.

At large enough f the scalar could have an observable impact on cosmology. In the radiation era, the scalar effectively increases the Hubble rate as small-scale CMB modes enter the horizon, enhancing diffusion damping of the photon-baryon plasma <cit.>. Bounds on extra radiation content from the CMB largely derive from this effect and currently amount to a bound Δ N_eff / N_eff ≲ 5% to 10%.[ The scalar is not precisely equivalent to free-streaming neutrinos—genuine constraints would require a proper treatment of the dynamics of its perturbations. In addition, runaway scalars redshift like matter after matter-radiation equality, providing enhanced and distinctive phenomenology compared to pure radiation (which instead becomes energetically subdominant). ] Current measurements therefore already limit f to be (roughly) below M_Pl / 10, while CMB-S4, which projects sensitivity to Δ N_eff / N_eff ∼ 1%, would probe yet smaller decay constants f / M_Pl ∼ 0.03.

Resonant production mechanisms in general feature dark matter with a density power spectrum sharply peaked at order unity on some characteristic scale. In the case of axion or scalar oscillations, the scalar mass is what sets this characteristic wave number <cit.>. On the other hand, the vector mass sets a kinematic barrier below which resonant enhancement is typically inefficient. The kinetic coupling suppresses the vector mass during production, allowing for peak scales of order m_ϕ which can be far below the present-day dark photon mass. Density fluctuations at the power spectrum peak collapse shortly after matter-radiation equality, forming dense small-scale structure at astrophysically relevant scales <cit.>. The typical peak wave number is set by the Hubble rate at production [per <ref>], which in the large-β limit corresponds to structures with mass M ∼ 2× 10^9 M_⊙ β/10^-3 m_ϕ/10^-22^-3/2. That minihalos can be much more massive than expected from the dark photon's mass itself provides a signature that distinguishes this model from other resonant production mechanisms. While the presence of such massive substructure does not guarantee that the dark photon has kinetic mixing of any particular size, if an experiment measures a kinetic mixing larger than possible for other nonthermal production mechanisms, the substructure in the dark matter halo would necessarily be this massive.
<Ref> illustrates the mass-coupling parameter space for which upcoming astrometric <cit.> and photometric <cit.> surveys would be able to probe such extremal substructure.In the class of models we discuss, the kinetic mixing evolves in the early Universe, but there is no reason a priori that the evolution must have stopped before the present day. The constant term in <ref>, for instance, could itself be promoted to a slow function of ϕ, such that early-Universe probes would measure a kinetic mixing different from laboratory ones. Over long enough timescales, laboratory experiments could probe drifts in the kinetic mixing as well. The success of the scalar production mechanism effectively relies on the Abelian-Higgs theory being extremely weakly coupled at early times, which ostensibly would run afoul of so-called weak gravity conjectures (WGCs) <cit.>. While the application of the WGC in its various forms to effective field theory and Higgsed gauge symmetries is subtle <cit.>, arguments that gravity becomes strongly coupled at a scale Λ_UV∼^1/3 are considered robust <cit.>. Requiring that gravity remains weakly coupled at the highest energy scale probed by any production scenario therefore places a lower limit on , which, in conjunction with upper bounds from defect formation, can be quite constraining. Requiring the energy scale of inflation to be above Λ_UV limits viable inflationary production to ≳ 40.While it is unclear whether WGCs constrain gauge couplings with displaced moduli, were it so they would effectively constrain the initial condition of the scalar field via the combination βϕ_0 / f. Achieving the dark matter relic abundance for a given dark photon mass then requires increasing H_d [via <ref>], diminishing the extent to which the scenario evades defect formation. Combining the WGC's lower bound on H_d with the upper bound from defect formation <ref> places an upper bound on the present-day dark gauge coupling. These bounds are set by conditions in the early Universe, i.e., requiring the cutoff scale of quantum gravity to exceed the energy scales of Big Bang Nucleosynthesis, inflation (if measured), or of the SM plasma during dark photon production. Written in terms of the maximum Hubble scale H_max, which for BBN is ∼ 10^-15 eV,g_D/3 × 10^-18 ≲( /10^-15 eV)^25/22( H_max/10^-15 eV)^-9/22.<Ref> takes fiducial values β = 25 and λ = 1 (and is not particularly sensitive to either). This bound is weaker than the defect formation bound but would eliminate most (but not all) of the prospective parameter space if the inflationary Hubble scale is H_inf∼ 10^14 GeV [see <ref>].Weak gravity conjectures also motivate ultralight dark photons receiving their mass from the Higgs mechanism rather than the Stückelberg mechanism. In supersymmetry, Stuückelberg fields are accompanied by radial degrees of freedom with mass ∼ / just like the Higgs, a fact conjectured to hold in any theory of quantum gravity <cit.>. In this case, inflationary production of dark photons with a Stückelberg mass is just as constrained as that with a Higgs mass (since the radial modes are produced during inflation if they are too light). It would be worth understanding whether the radial mode plays an important role in other production mechanisms as well. Though dark matter's current phenomenological relevance resides only at low energies and late times, identifying its fundamental nature offers myriad opportunities to inform high-energy physics. 
Understanding the mechanisms underlying its mass and its early-Universe production reshapes the implications of direct detection of dark photons. We explore these implications more broadly in companion work <cit.>, with this letter highlighting a scenario whose nonminimality, aside from enabling direct detection in the laboratory, offers promising and complementary observational signatures. These results also motivate searching for signatures of dark photon dark matter from purely gravitational interactions, especially in the near-fuzzy regime 10^-22 eV≲≲ 10^-15 eV <cit.> where phenomenology can depend on the spin of dark matter <cit.> Further investigation of consistent dark photon cosmologies will deepen our understanding of the theoretical motivation for and implications of ultralight dark matter detection.We thank Peter Adshead, Benoit Assi, Masha Baryakhtar, Adrienne Erickcek, Isabel Garcia Garcia, Anson Hook, Junwu Huang, Justin Kaidi, Justin Khoury, and Mark Trodden for insightful discussions. D.C. and Z.J.W. are supported through the Department of Physics and College of Arts and Science at the University of Washington. This material is partially supported by a grant from the Simons Foundation and the hospitality of the Aspen Center for Physics. This work was completed in part at the Perimeter Institute. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. The dark photon parameter space limits and projections quoted above are compiled in Ref. <cit.>.
http://arxiv.org/abs/2310.18397v1
{ "authors": [ "David Cyncynates", "Zachary J. Weiner" ], "categories": [ "hep-ph", "astro-ph.CO" ], "primary_category": "hep-ph", "published": "20231027180001", "title": "Detectable, defect-free dark photon dark matter" }
Center for Optical Quantum Technologies, Department of Physics, University of Hamburg,Luruper Chaussee 149, 22761 Hamburg Germany The Hamburg Centre for Ultrafast Imaging, University of Hamburg, Luruper Chaussee 149, 22761 Hamburg, GermanyInstitute of Science and Technology Austria (ISTA), am Campus 1, 3400 Klosterneuburg, AustriaCenter for Optical Quantum Technologies, Department of Physics, University of Hamburg,Luruper Chaussee 149, 22761 Hamburg Germany The Hamburg Centre for Ultrafast Imaging, University of Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany We demonstrate the failure of the adiabatic Born-Oppenheimer approximation to describe the ground state of a quantum impurity within an ultracold Fermi gas despite substantial mass differences between the bath and impurity species. Increasing repulsion leads to the appearance of non-adiabatic couplings between the fast bath and slow impurity degrees of freedom which reduce the parity symmetry of the latter according to the pseudo Jahn-Teller effect. The presence of this mechanism is associated to a conical intersection involving the impurity position and the inverse of the interaction strength which acts as a synthetic dimension. We elucidate the presence of these effects via a detailed ground state analysis involving the comparison of ab initio fully-correlated simulations with effective models. Our study suggests ultracold atomic ensembles as potent emulators of complex molecular phenomena.Synthetic dimension-induced pseudo Jahn-Teller effect in one-dimensional confined fermions P. Schmelcher January 14, 2024 ============================================================================================§ INTRODUCTIONLandau's postulate, which emerged from a discussion with Teller in 1934, that the symmetry causing an energetically degenerate state, is spontaneously lifted, led to the Jahn-Teller effect statingthe configuration of any non-linear polyatomic system in a degenerate electronic state undergoes spontaneous distortions that remove the degeneracy <cit.>. This effect was demonstrated theoretically and experimentally in various areas of physics, such as solid state physics, molecular physics and material science, as well as in biology and chemistry <cit.>.An extension of the Jahn-Teller effect was found in pseudo-degenerate systems, in which strong vibronic couplings between any two electronic states with an arbitrary non-vanishing energetic gap cause an instability and distortion of the polyatomic system.This is known as pseudo Jahn-Teller effect <cit.>.Furthermore, it has been shown that the (pseudo) Jahn-Teller effect is the only origin for spontaneous symmetry breaking in those systems <cit.>. Ultracold quantum gases have proven to be a pristine platform for quantum simulation due to their high degree of versatility and controllability <cit.>.Furthermore, recently a lot of theoretical <cit.> and experimental <cit.> effort has been devoted to the understanding of the properties of impurities immersed in Bose and Fermi gases. The emergent properties of such ensembles are analogous to polarons. In the condensed matter setting these correspond to dressed states of electrons by the vibrations of the surrounding material, playing an important role in understanding electron transport of their host material <cit.>. 
Due to the excellent tunability of interaction strength in ultracold atoms Bose and Fermi polarons have been studied extensively in the strong interaction limit <cit.>, beyond the regimes available within material science. The above leads to the question whether ultracold atoms can be employed to elucidate the qualitative features of the electronic structure of molecular systems.Such investigations can possibly lay the groundwork for observing new phenomena or designing (artificial) molecules with desired properties.As a first step towards achieving this goal here we propose a one-dimensional system characterized by large mass imbalance in order to study effects associated with non-adiabaticity and the pseudo Jahn-Teller effect.The experimental realizability of large mass imbalanced systems has been demonstrated in <cit.> for ^6Li-^40K mixtures. Such a two species mixture provides the opportunity to tune the interaction between both components via Fano-Feshbach and confinement induced resonances <cit.> and apply species-selective trapping geometries <cit.>. In addition, the experiments of Ref. <cit.> have proven that the atom number in ultracold ensembles can be controlled on the single-particle level.Our setup consists of a few-body bath of fermions interacting with a single massive impurity. The confinement of the two species is controlled independently by a distinct harmonic trapping confinement.We examine the non-adiabatic physics in our system by comparing the results of the adiabatic Born-Oppenheimer (BO) approximation with the numerically exact ab initio Multi-Layer Multi-Configuration Time-Dependent Hartree method for atomic mixtures (ML-MCTDHX) <cit.>.Even on the basic level of the ground-state energy and one-body density, we point out large deviations among the two approaches evincing significant non-adiabatic contributions which become more prominent for increasing interaction strength.The presence of non-adiabaticity is further indicated by the correlation properties of the two-body bath-impurity densities and the inter-species entanglement captured by the von Neumann entropy. The decrease of these non-adiabatic effects is found to be more sensitive on a increase of the trapping frequency of the impurity as compared to an increase of its mass, assuming a common increment value.Given that the adiabatic BO approximation becomes exact for each infinite mass and infinite trapping frequency, this might not be the expected behaviour and indeed, this approximate approach shows a diverging behaviour from the exact one. The above results can be explained in terms of the pseudo Jahn-Teller effect. In particular, we demonstrate that the bath-impurity system is effectively described by a E ⊗ b system known to exhibit the pseudo Jahn-Teller effect. In addition a detailed symmetry analysis shows that a conical intersection emerges when the impurity position and the inverse of the interaction strength are employed as the slow coordinates of the system provided that the number of bath particles is odd.The inverse of the interaction strength in this context can be interpreted as an additional synthetic dimension. Up to now synthetic dimensions have been observed in various fields: such as Rydberg atoms <cit.>, optical lattices <cit.>, photonics <cit.>, the study of gauge fields <cit.> and quantum simulations <cit.>. 
Analyzing the potential energy curves for fixed interaction within a multi-channel BO approach reveals that more than one conical intersection might occur.The associated (quasi-)degeneracy points among the potential energy curves are explained in terms of resonant bath particle transport through the impurity when the impurity resides in the vicinity of specific positions. These positions can be identified by the sharp increase of the bath momentum in their vicinity and thus are captured by the Born-Huang correction of the lowest potential energy surface. Our work is organized as follows.Section <ref> introduces the underlying impurity setup, where we introduce a one-dimensional Hamiltonian describing the coupling of our few-body bath and the impurity via s-wave interaction. In Sec. <ref>, we present the ab-initio methods ML-MCTDHF and a multi-channel BO ansatz, which we apply to solve the Schrödinger equation. Especially, we point out the relation between the adiabatic BO approximation and our multi-channel BO ansatz.The basic ground-state analysis is performed in Sec. <ref>. Comparing the adiabatic BO approximation with the exact numerical result shows deviations for the impurity energy as well as for the one-body density. In section <ref> we focus on the correlation properties in terms of the von Neumann entropy and two-body density. The dependence of the observed effects on the impurity parameters is addressed in Sec. <ref>. The existence and implications of the pseudo Jahn-Teller effect in our system are discussed in Sec. <ref>. We finish with our conclusions and outline further perspectives in Sec. <ref>. In Appendix <ref>, the detailed derivation of the non-adiabatic couplings for the multi-channel BO ansatz is provided. This is followed in Appendix <ref> by an analysis of entanglement within the adiabatic BO approximation. Appendix <ref> contains a convergence analysis of our two ab initio methods: ML-MCTDHF and the multi-channel BO approach in combination with a configuration interaction method. Appendix <ref> contains a proof of an important theorem used in the main text. Appendix <ref> reviews the relevant for us properties of two interacting confined fermions. The final Appendix <ref> explicates our analysis of the emergence of a E ⊗ϵ conical intersection.§ SETUP AND HAMILTONIANWe consider a two-species setup of mass-imbalanced and spin-polarized fermions confined in a one-dimensional(species-dependent) parabolic trap. In particular, we focus on the particle imbalanced case where a lighter majority species, denoted as B interacts with a single heavy impurity I, via s-wave repulsion.The Hamiltonian of our system reads Ĥ = Ĥ_B + Ĥ_I + Ĥ_BI with the corresponding terms beingĤ_σ= ∑_j=1^N_σ[ -ħ^2/2m_σ(∂/∂ x_j^σ)^2+ V_σ(x_j^σ)], Ĥ_BI= ∑_k=1^N_B∑_j=1^N_Igδ(x_k^B-x_j^I),where σ∈{ B, I } and V_σ(x)= 1/2m_σω_σ^2 x^2 denotes the harmonic trapping potential for the associated species.In addition, N_B denotes the number of bath atoms (herewith we mainly focus on the N_B = 5 case) and N_I = 1 is the number of impurities. The effective interaction strength g corresponds to the 1D s-wave contact-interaction strength between the two distinct species <cit.>. Due to the Pauli principle, intra-species interaction is excluded since two fermions cannot occupy the same state.Let us note here that g is dependent on the experimental transverse confinement length and the 3D s-wave scattering length <cit.> and thus is tunable viaconfinement and Fano-Feshbach resonances <cit.>. 
Here we consider an impurity species I that is heavier and more tightly trapped compared to the bath species B, implying m_I ≫ m_B and ω_I ≫ω_B, motivating a BO-like approach <cit.>. In what follows, we mainly focus on the case m_I = 4 m_B and ω_I = 4 ω_B motivated by corresponding state-of-the-art experiments with ^6Li-^23Na mixtures <cit.>, which are representative of the qualitative behaviour in this regime. In Sec. <ref> we provide a more detailed analysis of the effects of varying m_B and ω_B in the ground state of the system.§ METHODOLOGY AND COMPUTATIONAL APPROACHIn the following, we present the methods we employ for the ground state study of our system. Therefore, we have to solve the stationary Schrödinger equation corresponding to the Hamiltonian of Eq. (<ref>). First, we will address the fully correlated numerical ML-MCTDHX approach. Second, we describe our multi-channel BO approach, which is motivated by the significant mass imbalance m_I/m_B in our system. Both methods are numerically exact ab-initio approaches suitable for the solution of multi-component fermionic systems <cit.>.§.§ The ML-MCTDHX methodML-MCTDHX is a variational, ab initio and numerically exact approach for the simulation of the non-equilibrium quantum dynamics of bosonic and fermionic particles and mixtures thereof, containing a single or both types of particles <cit.>.ML-MCTDHX relies on a multi-layered ansatz that variationally optimizes the involved quantum basis at different levels of the complex structure of the total many-body wavefunction.In particular, the total many-body wavefunction, |Ψ(t)⟩ is represented as a linear combination of j=1,2,...,D distinct orthonormal functions for each involved species, |Ψ_j^σ(t)⟩, with σ=B,I|Ψ(t)⟩ = ∑_j_B, j_I = 1^D A_j_B, j_I(t) |Ψ_j_B^B(t)⟩ |Ψ_j_I^I(t)⟩,where A_j_B, j_I(t) are the corresponding time-dependent expansion coefficients. This expansion is formally identical to a truncated Schmidt decomposition of rank D, given by|Ψ(t)⟩ = ∑_k=1^D √(λ_k(t))) |Ψ̃_k^B(t)⟩ |Ψ̃_k^I(t)⟩.The time-dependent expansion coefficients λ_k(t) are denoted as Schmidt weights and |Ψ̃_k^σ(t)⟩ are the corresponding Schmidt modes.Especially, λ_k and |Ψ̃_k^σ(t)⟩ represent the eigenvalues and eigenstates of the σ-species reduced density matrix, namelyρ_σ^(N_σ) (x_1, …, x_N_σ, x'_1, …, x'_N_σ, t)= ∫∏_j = 1^N_σ̅d x_j^σ̅ ×Ψ^*(x_1^σ=x'_1, …, x_N_σ^σ=x'_N_σ, x^σ̅_1, …, x^σ̅_N_σ̅, t) ×Ψ(x_1^σ=x_1, …, x_N_σ^σ=x_N_σ, x^σ̅_1, …, x^σ̅_N_σ̅, t),where σ̅≠σ. Notice that within ML-MCTDHX this density matrix can be expressed as ρ_σ^(N_σ)(x_1, …, x_N_σ, x'_1, …, x'_N_σ, t)=⟨ x_1, …, x_N_σ| ρ̂^(N_σ)_σ(t)| x'_1, …, x'_N_σ⟩, where the density matrix operator readsρ̂_σ^(N_σ)(t) = ∑_ j_σ,j'_σ = 1j_σ̅ = 1^DA_j_σ, j_σ̅^*(t) A_j_σ̅, j'_σ(t)_≡[ρ̂_σ^(N_σ)(t)]_j_σ, j'_σ| Ψ_j_σ^σ(t) ⟩⟨Ψ_j'_σ^σ(t) |.Therefore, λ_k and | Ψ̃_k^σ (t) ⟩ can be evaluated by diagonalizing the matrix [ρ̂_σ^(N_σ)(t)]_j_σ, j'_σ for j_σ,j'_σ=1, …, D. The truncated Schmidt decomposition of Eq. (<ref>) exhibits a finite bipartite entanglement of the system among the bath and impurity species, if at least two λ_k(t)'s are non-vanishing.In the case, λ_1(t) = 1 and λ_k(t) = 0 for k = 2, …, D the total wavefunction |Ψ(t)⟩ is a tensor product of the species states and the system is non-entangled. The expansion of Eq. 
(<ref>) can be thought of as an expansion in terms of entanglement modes with D controlling the maximum number of allowed entanglement modes of the system.The multi-layered structure of our ansatz stems from the fact that each species function, | Ψ_j^σ(t) ⟩ is expandedin terms of a time-dependent number-state basis set |n⃗(t)⟩^σ leading to|Ψ_j^σ(t)⟩ = ∑_n⃗ B_j, n⃗^σ(t) |n⃗(t)⟩^σ.On this level, |n⃗(t)⟩^σ could be determinants or permanents for a fermionic or bosonic species σ respectively. Further, B_j, n^σ(t) corresponds to the time-dependent expansion coefficients with a particular number state |n⃗(t)⟩^σ, which is built from d^σ time-dependent variationally optimized Single-Particle Functions (SPFs) given byϕ_l^σ(t), l=1,2,..., d^σ with n=(n_1,...,n_d^σ) corresponding to the contribution numbers.On the lowest layer of the ML-MCTDHX variational ansatz, the SPFs are represented on a time-independent primitive basis.For the underlying case of spinless fermions, this refers to a ℳ dimensional Discrete Variable Representation (DVR) represented by {|k⟩}.Hence, the SPF of the σ-species are given by |ϕ_j^σ(t)⟩ = ∑_k=1^ℳC_jk^σ(t)|k⟩.In our investigation we choose ℳ=150 grid points of a harmonic oscillator DVR. To determine the variationally-optimal ground state of the Hamiltonian (<ref>) corresponding to the (N_B+N_I)-body wavefunction |Ψ(t)⟩ of Eq. (<ref>)–(<ref>), the corresponding ML-MCTDHX equations of motion are derived by employing the Dirac-Frenkel <cit.> variational principle ⟨δΨ (t)| i ħ∂/∂ t-H|Ψ(t)⟩ = 0, for details see <cit.>.Hence, we have to solve numerically D^2 linear differential equations of motion for A_j_B, j_I(t), which are coupled to D((N_B+d^B-1)!/N_B!(d_B-1)!+(N_I+d^I-1)!/N_I!(d_I-1)!) and d^B+d^I non-linear integro-differential equations for the expansion coefficients of the species functions B_j, n⃗^σ(t) and the SPFs C_j, k^σ(t) respectively. Let us note here that calculating the ground state of Eq. (<ref>) can be achieved by performing propagation in imaginary time. Within this approach we perform a Wick rotation of the real time t which leads to the imaginary time τ = -i t.This substitution results in the energy of the propagated state of the corresponding equations of motion to decrease monotonically in time proportionately to ∝ e^-(E(t)-E_0)t, where E_0 is the ground state energy. Therefore the ground state is obtained in the limit of large propagation times τ→∞ if the initial state possesses a finite overlap with the ground state.The key feature of ML-MCTDHX is the expansion of the system's many-body wavefunction with respect to a time-dependent and variationally optimized basis, that can adapt to the relevant inter-particle correlations at the level of single-particle, single-species and total multi-species systems.The involved Hilbert-space truncation is characterised by the chosen orbital configuration space, which is characterized by C=(D;d^B;d^I). Furthermore, since we consider a single impurity on the I species, there are no intra-species interactions thus we can set d^I = D without loss of generality. In turn, the number of Schmidt modes is taken large enough D = 12 to account for inter-species entanglement.To account for the inter-species interactions of the bath we consider a large enough number of bath species orbitals d^B = 18 to properly account for the different states the bath atoms can occupy as a consequence of the bath-impurity entanglement. For more information regarding the convergence of the ML-MCTDHX method, see Appendix <ref>. 
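Since the top layer of the ansatz is formally a truncated Schmidt decomposition, the Schmidt weights λ_k and the interspecies von Neumann entropy used later follow from a singular value decomposition of the coefficient matrix A_{j_B, j_I}. The minimal NumPy sketch below illustrates this bookkeeping, with a randomly generated coefficient matrix standing in for actual ML-MCTDHX output.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 12                                    # number of species functions per component

# Placeholder for the top-layer ML-MCTDHX coefficients A_{j_B, j_I}; here random.
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
A /= np.linalg.norm(A)                    # normalization  sum_{j_B, j_I} |A|^2 = 1

# The singular values of A are the Schmidt coefficients sqrt(lambda_k).
schmidt_weights = np.linalg.svd(A, compute_uv=False) ** 2
assert np.isclose(schmidt_weights.sum(), 1.0)

# Interspecies von Neumann entropy  S_VN = -sum_k lambda_k ln(lambda_k).
lam = schmidt_weights[schmidt_weights > 1e-14]
S_VN = -np.sum(lam * np.log(lam))
print("Schmidt weights:", np.round(schmidt_weights, 4))
print("S_VN =", S_VN)
```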
§.§ Multi-channel Born-Oppenheimer approach Motivated by the assumption of a heavy, less-mobile impurity the comparison of our results with a BO-like approach is justified. A general variational formulation of this approach can be established in terms of the multi-channel BO ansatz,Ψ(x^B_1,…,x^B_N_B, x_I)=∑_j = 1^M Ψ_j,I(x_I) Ψ_j,B(x^B_1,…,x^B_N_B;x_I)_≡⟨ x^B_1,…,x^B_N_B | Ψ_j,B(x_I) ⟩.Here, we have introduced an orthonormal basis for the bath species, | Ψ_j,B (x_I) ⟩, with j=1,2,…, exhibiting a parametric dependence on x_I.The impurity-species wavefunctions Ψ_j,I(x_I) with the normalization condition ∑_j = 1^M ∫dx_I |Ψ_j,I(x_I)|^2 = 1 correspond to the expansion coefficients in the many-body basis of the coupled system.Using the multi-channel ansatz of Eq. (<ref>) and employing the variational principle δ⟨Ψ | Ĥ - E | Ψ⟩/δΨ^*_k,I(x_I) = 0,we derive the coupled set of equationsE Ψ_k,I(x_I)= - ħ^2/2 m_I∑_j,l = 1^M ( δ_kjd/dx_I -i A_kj(x_I) ) ×( δ_jld/dx_I -i A_jl(x_I) ) Ψ_l,I(x_I)+ ∑_l = 1^M ( ⟨Ψ_k,B(x_I) | Ĥ_B + Ĥ_BI | Ψ_l,B (x_I) ⟩ + δ_kl1/2 m_B ω^2_I x_I^2 + V^ ren_kl(x_I) ) Ψ_l,I(x_I),for k = 1, 2, …, M <cit.>.In this step, we introduce the non-adiabatic derivative couplings A_kj(x_I) = i ⟨Ψ_k,B(x_I) | ∂Ψ_j,B/∂ x_I(x_I) ⟩ as an effective gauge field. In addition, the last term in Eq. (<ref>) refers to the potential renormalization, which is given byV_kl^ ren(x_I) =ħ^2/2 m_I ⟨dΨ_k,Bdx_I(x_I) | 1 - 𝒫̂_M | dΨ_l,Bdx_I(x_I) ⟩,where the projector onto the subspace spanned by | Ψ_k,B (x_I) ⟩ is defined as𝒫̂_M = ∑_j = 1^M | Ψ_j,B (x_I) ⟩⟨Ψ_j,B(x_I) |.In general, it is possible to define | Ψ_k,B (x_I) ⟩ in terms of any complete wave-function basis.However, the convenient choice is to employ the eigenstates of Ĥ_B + Ĥ_BI for fixed x_I, which leads to the diagonal matrix elements ⟨Ψ_k,B(x_I) | Ĥ_B + Ĥ_BI | Ψ_l,B (x_I) ⟩ = δ_klε_k(x_I),where the eigenvalues ε_k(x_I) are the corresponding potential energy curves. In the limit of infinite channels, M →∞, the expansion of Eq. (<ref>) and the equations-of-motion of Eq. (<ref>) are exact for any mass ratio m_B/m_I. In particular, a mass ratio m_B/m_I ≪ 1 is favorable, since it suppresses the off-diagonal non-adiabatic coupling terms stemming from the gauge potential A_kj(x_I) and potential renormalization terms V^ ren_kl(x_I). In this case, only a few termscontribute significantly to the exact many-body wavefunction, and consequently a relatively small value M suffices for adequate convergence to the exact solution.§.§ Variational and non-variational adiabatic BOBefore proceeding let us comment on the reduction of the above to the adiabatic BO approximation widely employed in molecular physics <cit.>. If we restrict the ansatz of Eq. (<ref>) to M = 1 term, the variational equations of motion reduce to the adiabatic BO approximation incorporating the Born-Huang correction arising from V^ ren_11(x_I), which is therefore characterised as variational adiabatic Born-Oppenheimer (VABO) approximation. The usual adiabatic BO approach consists of dropping this additional term by considering V_11^ ren(x_I) = 0 and will be denoted as non-variational adiabatic Born-Oppenheimer (NVABO) approximation. It can be shown that the VABO approximation accounting for V^ ren_11(x_I) ≠ 0 possesses a variational character and thus yields an upper bound to the ground state energy. 
In contrast, the usual NVABO approximation, with V_11^ ren(x_I) = 0, yields a lower bound for the ground-state energy <cit.> [Since a term is dropped, this leads to a lower bound, which is non-variational. A proof can be found in <cit.>; therein, it is shown that the standard NVABO yields a lower bound.]. Thus, the value of the term V_11^ ren(x_I) relative to the other parameters of the system is a good indication for the correlations among distinct potential energy curves, as it provides an order-of-magnitude estimate for the correlation energy related to the contribution of multiple potential energy curves, E_ corr ≡ lim_M →∞ E_M - E_M=1. An important caveat here is that the lower ground-state energy of the NVABO approximation does not imply a better quality of the corresponding many-body state, but rather reflects an approximation of the Hamiltonian itself. Indeed, the variational principle guarantees that the energy of the NVABO ground state will be larger than that of the VABO when the exact form of the Hamiltonian is considered. In the following, we will rely on the above analyzed methods to investigate the relevant ground-state properties of our system.

§ BASIC GROUND-STATE PROPERTIES

Since we consider an impurity that is somewhat heavier than the bath particles, m_I > m_B, at first glance it might seem sufficient to take into account the adiabatic BO approximation. Hence, in the following we compare the ground state properties of Eq. (<ref>) within the fully correlated ML-MCTDHX approach and the adiabatic BO approaches (VABO and NVABO), which neglect the non-adiabatic contributions of the underlying Hamiltonian. This will give us an overview of the importance of non-adiabaticity in our system at the level of its most elementary ground state properties.

§.§ Impurity-interaction energy

We start our ground-state analysis by considering the impurity interaction energy, which is given by E_ie=E_tot(g)-E_tot(g=0). Figure <ref>(a) reveals that, within all approaches, stronger repulsions lead to higher impurity-interaction energies, as the interspecies interaction energy increases. In addition, all approaches demonstrate that E_ie shows a linear behaviour for weak interactions and a non-linear one, characterized by a decreasing slope, with increasing g. This is due to the change of the many-body state of the system in order to reduce the associated interaction energy penalty stemming from the spatial overlap among the impurity I and bath B species. The VABO approximation follows this linear trend for a larger regime of g values, leading to an overall stronger increase of E_ie when compared to the other approaches. The comparison of the variational with the NVABO approximation reveals that the reason for this divergence is the inclusion of the V_11^ren(x) term, which becomes important for g>1. This leads to a larger discrepancy between the lower energy bound given by the NVABO approximation and the upper (variational) bound, represented by the NVABO result incorporating the Born-Huang correction. The multi-channel BO and ML-MCTDHX approaches are able to correct the energy by accounting for non-adiabatic effects, which is a first indication of their importance in our system. To get further insight regarding the mechanisms behind the discrepancy between the adiabatic BO result and the two numerically exact ab-initio methods, we study the one-body density profiles provided by the above-mentioned approaches.
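Before turning to the density profiles, the distinction between the two energy bounds can be made concrete with a small numerical sketch: the effective one-dimensional impurity Hamiltonian is diagonalized on a grid once with and once without the Born-Huang term. The potential energy curve and Born-Huang peak used here are simple Gaussian placeholders that only mimic the qualitative shapes discussed in the text, not the computed curves of our system.

```python
import numpy as np

hbar, mI, omegaI = 1.0, 4.0, 4.0
x = np.linspace(-4.0, 4.0, 1200)
dx = x[1] - x[0]

# Placeholder curves mimicking the qualitative shapes discussed in the text.
eps1 = 2.0 * np.exp(-x**2 / 2.0)             # potential energy curve epsilon_1(x_I)
born_huang = 1.5 * np.exp(-x**2 / 0.05)      # narrow Born-Huang peak V^ren_11(x_I)
trap = 0.5 * mI * omegaI**2 * x**2

def ground_state_energy(potential):
    """Lowest eigenvalue of -hbar^2/(2 m_I) d^2/dx_I^2 + potential via finite differences."""
    kin = hbar**2 / (2.0 * mI * dx**2)
    H = (np.diag(2.0 * kin + potential)
         - kin * np.eye(x.size, k=1) - kin * np.eye(x.size, k=-1))
    return np.linalg.eigvalsh(H)[0]

E_nvabo = ground_state_energy(trap + eps1)               # lower, non-variational bound
E_vabo  = ground_state_energy(trap + eps1 + born_huang)  # upper, variational bound
print(f"NVABO ground-state energy: {E_nvabo:.4f}")
print(f"VABO  ground-state energy: {E_vabo:.4f}")
```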
§.§ One-body densityTo resolve the spatial behaviour for both the impurity and the bath we resort to the one-body density as a function of the interaction strength, see Fig. <ref>. Here for the sake of brevity, we only compare the VABO approximation and ML-MCTDHX. In this section, we do not show the results of the multi-channel BO approach, since they agree very well with the results of ML-MCTDHX. The results for the NVABO approximation will be discussed later on, since, as discussed in Sec. <ref>, the corresponding ground state stems from the approximation of the Hamiltonian and thus there are several nuances associated with it. For both the VABO and ML-MCTDHX, we observe an outward displacement of the majority species for increasing g, stemming from the gradual depletion of the density in the region around x = 0, see Fig. <ref>(a) and Fig. <ref>(b). This behavior is a result of the repulsive interaction between the two species, as the heavy and tightly trapped impurity remains in the trap center, pushing the bath particles outward. However, it is obvious that the spatial profiles of the majority species possess significant quantitative deviations among the two approaches. In particular, the VABO approach under-appreciates the density-depletion for x = 0 and, also, under-appreciates the development of the two density maxima at x = ± 0.5 evident within ML-MCTDHX, see Fig. <ref>(b). The comparison of the impurity densities also shows clear qualitative deviations. In particular, within ML-MCTDHX the profile remains essentially unchanged possessing a Gaussian shape for all g values, see Fig. <ref>(d), while within the adiabatic BO approximation, see Fig. <ref>(c), the profile increasingly flattens and spreads out spatially as g increases.The above allow us to attribute the substantially larger energy of the VABO approximation when compared to ML-MCTDHX, see Fig. <ref>(a), to the larger spatial overlap of the impurity and bath species in the former approach that increases the interaction energy. Additionally, the VABO approximation results in a larger spreading of the density of the impurity species, see Fig. <ref>(c) and <ref>(b_i) with i=1,2,3 resulting in additional contributions of potential energy when compared to ML-MCTDHX. The above implies that the adiabatic treatment is not able to adequately describe the state of the system and thus additional contributions stemming from the non-adiabatic couplings need to be introduced in order to properly account for it.To elucidate further the shortcomings of the adiabatic approach let us examine the behavior of the effective potential within the VABO and NVABO approximations. This effective potential readsϵ̅(x_I) = V_I(x_I) +ε_1(x_I) + V_1,1^ren(x_I)-ħω_B N_B^2/2.The first term denotes the harmonic trapping potential for the impurity, see Eq. (<ref>), while the second and third terms are the potential energy curve and the Born-Huang potential renormalization, see Eq. (<ref>). The last term removes from ϵ̅(x_I) the spatially constant energy offset stemming from the non-interacting energy of the bath species. Recall that NVABO does not contain the Born-Huang correction, i.e. V_1,1^ ren(x_I) = 0 in ϵ̅(x_I).Figure <ref> provides the one-body densities within the VABO, NVABO and ML-MCTDHX approaches in combination with ϵ̅(x_I) for three different interaction strengths. By comparing the effective potential within the VABO approximation, see Fig. <ref>(c_i), with i = 1, 2, 3, we observe that it deforms from a harmonic potential for g = 0, see Fig. 
<ref>(c_1), to a double well structure for g > 3, see Fig. <ref>(c_3). In contrast, NVABO does not show this effect with the effective potential being parabolic for all considered interaction strengths. This demonstrates that the Born-Huang term is responsible for the emergence of the double-well structure in Fig. <ref>(c_3).The above analyzed behavior of the effective potential explains the density discrepancies among the ML-MCTDHX and the VABO approach. The transition from the harmonic to a double-well one gives an explanation for the displacement of the impurity from the trap center <cit.>, see Fig. <ref>(b_i), with i = 2, 3, in contrast to the ML-MCTDHX approach. In the case, that the V_11^ ren(x_I) is completely dropped the impurity density agrees much better to the ML-MCTDHX result. However, notice that this term is accounted for in the exact case, which is a strong indication of the presence of sizable non-adiabatic effects that lead to the cancellation of this term within ML-MCTDHX. Indeed, notice that dropping the Born-Huang term does not fix the sizable quantitative deviation of ρ^(1)_B(x_B) within the variational BO approach when compared to the numerically-exact result, see Fig. <ref>(a_i) for i = 2, 3. This demonstrates the importance of non-adiabatic correlations in capturing the correct state of the system.§ CORRELATION PROPERTIESHaving analyzed the basic ground state properties of Eq. (<ref>), let us elaborate on the emergent correlation patterns in terms of the two-body density and the von Neumann entropy.§.§ Two-body density Figure <ref> addresses the bath-impurity two-body densities for the same interaction strengths as in Fig. <ref>. For both the VABO and ML-MCTDHX approaches, we detect the emergence of a correlation hole in ρ^(2)_BI(x_B, x_I) for x_B ≈ x_I as the interaction increases, see Fig. <ref>(a_i) and (b_i) with i=2,3. Besides the emergence of this structure, the two-body densities among the two distinct approaches are significantly different. Notice that ρ^(2)_BI(x_B, x_I) within the adiabatic BO approach appears to be more pronounced oscillatory for varying x_B but fixed x_I when compared to the ML-MCTDHX approach, see for instance Fig. <ref>(a_2) for x_I ≈ 0.1. This is explained as follows. The two-body density within the VABO and NVABO approximation readsρ^(2)_BI (x_B, x_I) = | Ψ_1,I(x_I) |^2 ×∫∏_j = 2^N_Bdx_j  | Ψ_1,B(x_B, x_2, …,x_N_B;x_I)|^2_≡ C_BI(x_B, x_I),with the two-body correlator C_BI(x_B, x_I) being the one-body density of the bath for a fixed delta-potential barrier at x_I.This implies that since the one-body density of the bath would exhibit Friedel oscillations as it corresponds to a state of spin-polarized fermions, so does C_BI(x_B, x_I) and thus ρ^(2)_BI(x_B, x_I) for fixed x_I. Thus the absence of these oscillations within ML-MCTDHX, see Fig. <ref>(b_2) and <ref>(b_3), provides another direct indication that more than a single bath eigenstate for fixed impurity position is involved in the exact many-body ground state. 
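The Friedel-type oscillations entering ρ^(2)_BI through the correlator C_BI(x_B, x_I) can be visualized with a simple single-particle computation: for a fixed impurity position, one diagonalizes the bath Hamiltonian with a narrow Gaussian barrier regularizing the contact interaction and fills the lowest N_B orbitals. The sketch below follows this logic; the barrier width and grid are numerical convenience parameters, not quantities of the model.

```python
import numpy as np

hbar, mB, omegaB, NB, g = 1.0, 1.0, 1.0, 5, 5.0
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]

def bath_density(x_imp, sigma=0.05):
    """One-body density of N_B noninteracting fermions in a harmonic trap with a
    narrow Gaussian barrier (regularized delta of strength g) located at x_imp."""
    barrier = g * np.exp(-(x - x_imp)**2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)
    V = 0.5 * mB * omegaB**2 * x**2 + barrier
    kin = hbar**2 / (2.0 * mB * dx**2)
    H = np.diag(2.0 * kin + V) - kin * np.eye(x.size, k=1) - kin * np.eye(x.size, k=-1)
    _, vecs = np.linalg.eigh(H)
    orbitals = vecs[:, :NB] / np.sqrt(dx)          # normalize each orbital on the grid
    return np.sum(np.abs(orbitals)**2, axis=1)     # C_BI(x_B, x_imp)

density = bath_density(x_imp=0.1)
print("particle-number check:", np.trapz(density, x))  # ~ N_B
```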
§.§ Von Neumann entropyThe observed difference in the correlations between the adiabatic BO and ML-MCTDHX approaches (identified in the interspecies two-body densities) carries over in the entanglement among the two species.To demonstrate this we evaluate the von Neumann entropy,S_ VN = -Tr_I[ ρ̂^(N_B)_B log(ρ̂^(N_B)_B) ] = -Tr_B[ ρ̂^(1)_I log(ρ̂^(1)_I) ],where ρ̂^(N_σ)_σ =Tr_σ̅[ | Ψ⟩⟨Ψ | ] refer the σ-species-density matrices resulting after the other species, σ̅≠σ, is traced out from the many-body wavefunction | Ψ⟩.The S_VN results are presented in Fig. <ref>(b) for all employed approaches.Clearly, the adiabatic BO approximations (of both the VABO and NVABO kind) overestimate the von Neumann entropy when compared to the ML-MCTDHX case. The underlying reason for this deviation can be traced back to the one-to-one mapping between the position of the impurity x_I and the state of the bath, | Ψ_1,B(x_I) ⟩, that Eq. (<ref>) implies for M=1. This results in large uncertainties for the state of either species when the other is traced out giving rise to entanglement. We show in Appendix <ref> that this entanglement depends on the impurity localization, with a more delocalized impurity resulting in larger interspecies entanglement, and on || ∂/∂ x_I| Ψ_1,B (x_I) ⟩|_x_I = 0 || parameterizing how much the bath state changes as the impurity spreads. While the former is rather constant for increasing interaction, see Fig. <ref>(b_1)–<ref>(b_3), the latter substantially increases as evidenced in Fig. <ref>(a_1)–<ref>(a_3), explaining the increase of S_VN within the adiabatic BO approaches observed in Fig. <ref>(b). Beyond the adiabatic approximation S_VN is shown to be substantially smaller.This is because the superposition of additional eigenstates of bath allows to relax the one-to-one relation among x_I and bath states imposed by Eq. (<ref>).Here, one should notice also that S_VN, within the multi-channel BO approach deviates remarkably from the ML-MCTDHX approach, see Fig. <ref>(b), which is the first discrepancy we observe among the two-above mentioned approaches. This deviation is not physical since both approaches are asymptotically exact when the corresponding truncation parameters increase to account for all possible states. This discrepancy is rather an indication of the slow convergence of the multi-channel BO approach with increasing number of configurations M (here M = 40 was employed), which is evident when quantities addressing the total (N_B + 1)-body wavefunction are concerned, see also Appendix <ref>. Nevertheless, notice that on a qualitative level the results of this approach are valid and thus it can serve as an important analysis tool, demonstrating how non-adiabatic effects modify the many-body wavefunction from the adiabatic BO approximation case to the numerically exact ML-MCTDHX case. § IMPACT OF IMPURITY PARAMETERSBefore proceeding to the analysis of the non-adiabatic coupling terms, the importance of which is highlighted in Sec. <ref> and <ref>, it is instructive to comment on the influence of the mass m_I and trapping frequency ω_I of the impurity. First, let us compare the impurity energy and von Neumann entropy in Fig. <ref> between the (N)VABO (a) and (b) and ML-MCTDHX (c) and (d) for different values of the impurity parameters ω_I and m_I. Figure <ref>(a) presents the energy bounds stemming from the adiabatic BO approximation to the exact energy. 
It can be seen that the lower bound provided by the NVABO approximation is approximately the same for all considered parameters. This result can be explained in terms of the effective potential, ϵ̅(x_I), see Eq. (<ref>). As can be seen in Fig. <ref>(c_i) for i=1,2,3, within the NVABO approximation ϵ̅(x_I) is a parabolic potential of frequency almost equal to ω^ eff_I ≈ ω_I, with an additional energy shift stemming from the bath-impurity density-density interactions. This behaviour stems from the fact that the potential energy curve ε_1(x_I) hardly changes for different x_I, since the characteristic length scale of its variation is determined by √(ħ/(m_B ω_B)) and is thus larger than the size of the impurity wavefunction ∼√(ħ/(m_I ω_I)). In particular, by least-square fitting we can verify that even in the case of strong interactions, g = 5, the shift of the trapping frequency induced by the potential energy curve is ω^ eff_I - ω_I ≈ -0.09 for m_I = 4 and ω_I = 4, with this deviation further decreasing when either impurity parameter is increased (not shown here for brevity). Therefore, within the NVABO approximation the effective Hamiltonian is E Ψ_1,I(x_I) = [ - ħ^2/2 m_I d^2/dx_I^2 + 1/2 m_I ω_I^2 x_I^2 + ε_1(0) ] Ψ_1,I(x_I), and thus E_ ie ≈ ε_1(0).

To explain the behavior of the upper bound of the impurity energy we examine the influence of the potential energy peak associated with V^ ren_11(x_I), see Fig. <ref>(c_i) with i=1,2,3. Notice that, similarly to ε_1(x_I), the spatial variation of this term does not depend on the parameters of the impurity; its amplitude, however, is inversely proportional to m_I. Therefore, the increase of ω_I focuses the impurity density in the spatial region where V^ ren_11(x_I) is large, without diminishing the importance of this term. This explains the increase of the impurity-interaction energy when ω_I is doubled, see Fig. <ref>(a). In contrast, a twofold increase of m_I causes the same focusing effect on the impurity density as its ω_I counterpart, but it suppresses the amplitude of V^ ren_11(x_I) by a factor of two. This causes the energy increase associated with the Born-Huang term to be roughly halved for m_I = 8, ω_I = 4 when compared to m_I = 4, ω_I = 8. This energy decrease is evidenced by the energy difference of the VABO from the NVABO approximation in the corresponding Fig. <ref>(a). In summary, the deviation between the VABO and NVABO energies, as expected, is reduced for more massive impurities. However, as the confinement strength increases, this impurity energy uncertainty, stemming from the comparison of the VABO and NVABO energy bounds, increases, demonstrating that the adiabatic BO approximation becomes worse.

In the VABO approximation the von Neumann entropy decreases upon increasing either m_I or ω_I, see Fig. <ref>(b). This effect can be attributed to the increased localization of the impurity within its parabolic trap with a characteristic length scale ℓ_I = √(ħ/(m_I ω_I)). As the discussion in Appendix <ref> and Sec. <ref> reveals, a more localized state implies a stronger weight for | Ψ_1,B (0) ⟩ and thus a reduction of the entanglement captured by S_VN. The fact that a twofold increase of either m_I or ω_I leads to the same value of S_VN, see Fig. <ref>(b), stems from the independence of | Ψ_1,B (x_I) ⟩ on the impurity parameters.
Therefore, all correlation measures depend only on ℓ_I which is equal in both considered cases.In the numerically exact case of ML-MCTDHX the von Neumann entropy decreases similarly to the adiabatic BO approximation when either m_I or ω_I increases, see Fig. <ref>(d). However, the resulting increase of S_VN is not equivalent to the adiabatic BO case. This can be explained by the fact that increasing ω_I increases the energy gap among distinct impurity states thus the impurity is forced to occupy excited states more weakly. This means that according to the Schmidt decomposition the Schmidt weights, λ_k, with k > 1 should reduce and as a consequence also S_VN reduces. In contrast a reduction of m_I affects only the involved length scales and not the energy ones and thus we expect that S_VN is more sensitive to an increase of ω_I.Less entanglement also means less opportunities to reduce the bath-impurity interaction energy below its density-overlap contribution. Notice that the latter can be argued to be similar in both a two-fold decrease of m_I or ω_I since the corresponding length scale ℓ_I' = ℓ_I/√(2) is equal in both cases. Indeed, the impact of a twofold increase of the impurity mass m_I on the impurity energy is larger compared to a twofold increase in the ω_I, see Fig. <ref>(c) and its inset. Notably though the energy difference for different parameters is not as prominent as in the VABO approximation case, compare Fig. <ref>(c) to Fig. <ref>(a), demonstrating the important role of the non-adiabatic derivative couplings in cancelling the energy increase due to the Born-Huang term.§ PSEUDO JAHN-TELLER EFFECT The Jahn-Teller effect is known for giving rise to spontaneous symmetry breaking in molecular and condensed matter physics <cit.>.In the case, of degenerate states, non-adiabatic couplings lift the degeneracy, which results in a ground-state possessing a lower level of symmetry.In the pseudo Jahn-Teller effect, also in a non-degenerate system, the symmetry is reduced compared to the adiabatic approximation due to the non-adiabatic couplings among the fast (bath) and slow (impurity) degrees of freedom <cit.>.In this section, we work out the symmetry breaking processes and the impact of non-adiabatic effects, which we have pointed out in the above ground-state analysis. §.§ Origin of Pseudo Jahn-Teller effect in Fermi impurity systemsThe first step in identifying the (pseudo) Jahn-Teller effect in our setup is to identify the symmetries of our setup and how these are reduced by the interaction <cit.>. In the case g = 0 our system possesses a parity symmetry for each individual component 𝒫̂_B x_i^B = -x_i^B and 𝒫̂_I x_I = - x_I. However, for g ≠ 0 this symmetry ceases to hold since the interaction term couples the two species and consequently the application of either 𝒫̂_I or 𝒫̂_B alters the state of the system.Therefore, it is interesting to examine how this reduction of symmetry affects the impurity state and especially its correlation to the state of its environment.In the spirit of the original Jahn-Teller derivation <cit.>, it is instructive to calculate the state of the fast degree of freedom, being the bath state, at the high-symmetry point | Ψ_j,B(x_I = 0) ⟩ and then identify the leading order coupling to the slow coordinate x_I. The description of our system is simplified by recasting the Hamiltonian in terms of the shifted fast coordinates as r_i = x_i - x_I. This coordinate change is equivalent to the so-called Lee-Low-Pines transformation <cit.>. 
The transformed Hamiltonian reads Ĥ' = Ĥ_0r + Ĥ_P_ CM + Ĥ_I + Ĥ_ coup. The first term refers to the bath HamiltonianĤ_0r =∑_j = 1^N_B(-ħ^2/2 m_B∂ ^2/∂r_j^2+ 1/2 m_B ω^2_B r_j^2 + gδ(r_j) ).Notice that Ĥ_0r is independent of the x_I coordinate at the cost of introducing a derivative interaction term, Ĥ_P_ CM, proportional to 1/m_I, corresponding to the kinetic energy of the bath particles in the transformed frame. Namely this term readsĤ_P_ CM =-ħ^2/2 m_I( ∑_j = 1^N_B∂/∂ r_j)^2.This is a typical property of a system following the Lee-Low-Pines transformation <cit.>.[In contrast to the bosonic case, for fermions it is more convenient not to absorb the ∝∑_j = 1^N_B∂^2/∂ r_j^2 appearing in Eq. (<ref>) as a reparametrization of the bath mass m_B → m_B m_I/(m_B + m_I) in Eq. (<ref>) as such a choice avoids difficulties in calculations. This is because this term scales ∝ N_B^2 owing to the Pauli exclusion principle, in contrast to the overall ∝ N_B scaling of the center-of-mass momentum in Ĥ_P_ CM.]The impurity Hamiltonian corresponds to a harmonic oscillator with modified frequencyĤ_I = -ħ^2/2 m_I∂ ^2/∂x_I^2+ 1/2m_I ω^2_Ieff x_I^2,where ω_Ieff = ω_I √(1 + m_B ω^2_B/m_I ω^2_I). Finally, the coupling Hamiltonian contains a derivative and a linear coupling termĤ_ coup = ∑_j = 1^N_B( ħ^2/m_I∂/∂ r_j∂/∂ x_I + m_B ω^2_B r_j x_I ).The single-particle behavior (for N_B = 1) of Ĥ_0 r is well-known as this system admits an analytic solution <cit.>, see Appendix <ref>. Importantly, we know that its eigenspectrum for g →∞ features an equidistant in energy ladder of pairs of degenerate states. However, in the many-body bath case, N_B > 1, especially for finite m_I/m_r the structure of the eigenspectrum is more involved. Nevertheless, regarding the ground state of the Ĥ_0r + Ĥ_P_ CM system we can prove Theorem <ref>. The two lowest energy eigenstates of Ĥ_0 r are degenerate for odd N_B provided that g →∞ irrespectively of the values of the remaining system parameters, i.e. m_I/m_B and ω_I/ω_B.This theorem does not carry over to the case of even N_B where this degeneracy can appear or not depending on the values of the system parameters. We will show the proof of the theorem <ref> in Appendix <ref>.Let us now discuss the effect of the impurity on the bath state in the limit g →∞ in view of Theorem <ref>. Notice that according to Eq. (<ref>) the impurity lies in a harmonic oscillator potential and thus x_I is delocalized within a length scale ℓ = √(ħ / (m_I ω_I eff)). The spreading of the impurity within its confinement potential results in a back-reaction to the bath state owing to Ĥ_ coup, see Eq. (<ref>).According to the line of arguments in Appendix <ref> we can show that ⟨Ψ̃_k | ∑_j = 1^N_Br̂_j | Ψ̃_k'⟩ and ⟨Ψ̃_k | ∑_j = 1^N_B∂/∂ r_j | Ψ̃_k'⟩ are non-zero only in the case that the eigenstates | Ψ̃_k⟩ and | Ψ̃_k'⟩ of Ĥ_0r + Ĥ_P_ CM refer to the same Δ N. Δ N corresponds to a good quantum number of the underlying system referring to the particle difference of bath atoms in the r<0 and r>0 spatial regions.Furthermore, the bath-impurity interaction Hamiltonian Ĥ_ coup does not involve a coupling among the two degenerate ground states referring to Δ N and -δ N, see also Appendix <ref>, in the g →∞ limit independently of the position of the impurity, which takes non-zero values for odd N_B. Moving off from x_I=0 the coupling among the impurity and bath degrees-of-freedom lifts the degeneracy of the x_I = 0 ground-states owing to the different particle at r=x_B - x_I > 0 and r=x_B - x_I <0 associated with Δ N ≠ 0. 
Since this coupling is linear, see Eq. (<ref>), one of the potential energy curves decreases when the position of the impurity shifts from x_I = 0 to either positive or negative values. Thus the total many-body ground state obeys ⟨Ψ | x̂_I | Ψ⟩≠ 0 for either value of Δ N, which reduces the symmetry and corresponds to a manifestation of the Jahn-Teller effect. In the finite but large interaction range, g ≫ 1, the physical situation changes since the analytical continuations of the ψ_jL(r) and ψ_jR(r) states for finite g (see appendix <ref>) are not completely localized in their respective domains L, R but show a non-vanishing amplitude in the corresponding other domain. Therefore a weak transport across the barrier at r = 0 is allowed. This implies a finite “tunneling” integral t_i = ⟨ψ_iL | Ĥ_0 r | ψ_iR⟩ lifting the degeneracy of these two states. Notice also that this tunneling is amplified by the derivative interaction terms of Eq. (<ref>) and that Ĥ_ coup can also lead to coupling of these states. Precise derivations of t_i can be found in the Appendix <ref>. Therefore, the exact crossing for g →∞ outlined above becomes avoided for finite g. Provided that t_i is small enough, or equivalently g is large enough, the lifting of the x_I = 0 degeneracy does not, however, change the behavior of the system away from this high symmetry point, where the energies of the states predominantly shift due to the non-zero ⟨Ψ̃_k | x̂_I | Ψ̃_k⟩ terms. The lowest-lying potential energy curve possesses a double-well structure and consequently the impurity lies in both wells in its ground state. Thus strictly speaking ⟨Ψ | x̂_I | Ψ⟩ = 0 but we can still claim that the symmetry is broken since a small symmetry-breaking perturbation would lift the degeneracy among the wells leading to ⟨Ψ | x̂_I | Ψ⟩≠ 0. The above implies that for finite but strong enough g we can identify signatures of the presence of the Jahn-Teller effect (which occurs in a strict sense only in the g →∞ limit) and identify the reduction of the symmetry of the system, despite of the absence of degeneracy at the high symmetry point. This consists a manifestation of the so-called pseudo Jahn-Teller effect.The above can be interpreted as the emergence of a conical intersection <cit.> at the (0, 0) point of the parametric plane (x_I,1/g), its emergence has been explicated in Appendix <ref> within the perturbative regime of both coordinates. Due to the synthetic character of the 1/g effective coordinate, we denote this as synthetic conical intersection <cit.>. The detailed analysis of the geometric properties of this conical intersection (beyond the perturbative regime), e.g. its gauge structure and Berry phase, is left as an interesting future perspective. Below we will analyze the manifestation of the pseudo Jahn-Teller effect and its relation to the non-adiabatic processes emerging in our few-body system. §.§ Pseudo Jahn-Teller effect and potential energy curvesA first step in identifying the pseudo Jahn-Teller mechanism analyzed above is to identify its effect in the potential energy curves of the system. To achieve this, we expand the transformed Hamiltonian Ĥ' (<ref> – <ref>) in terms of the eigenstates of Ĥ_0 r+Ĥ_P_ CM obtaining the effective Hamiltonian⟨Ψ̃_n |Ĥ' | Ψ̃_m⟩ =-ħ^2/2 m_Iδ_n,md^2/d x_I^2 - i ħ/m_I P_n,md/d x_I + 1/2 m_I ω^2_Ieffx_I^2 δ_n,m + m_B ω^2_B X_n,m x_I + E_n δ_n,m,where X_n,m = ⟨Ψ̃_n | ∑_j=1^N_Br̂_j | Ψ̃_m ⟩ and P_n,m = - i ħ⟨Ψ̃_n | ∑_j=1^N_Bd/d r_j | Ψ̃_m ⟩. 
It can be shown that X_00 = X_11 = 0 and X_01≠ 0, since the eigenstates | Ψ̃_m ⟩ are parity symmetric for finite g, see also Appendix <ref>. This effective Hamiltonian within the manifold of the two energetically lowest states | Ψ̃_0⟩ and | Ψ̃_1 ⟩ realizes the so-called E ⊗ b model, which is known to exhibit the pseudo Jahn-Teller effect. This model refers to the so-called crude Born-Oppenheimer approximation <cit.> being also the main tool employed for the proof of Theorem <ref> and our arguments of Sec. <ref>. Despite the fact that this approach is not accurate enough for quantitative comparison with the multi-channel BO approach for reasons that will be explained later on, a qualitative comparison among the two will illustrate how our above-mentioned theoretical predictions materialize within accurate numerical descriptions of our system.The potential energy curves stemming from this model are the eigenvalues of the effective potential V(x_I) =( [ E_0 + 1/2 m_I ω^2_Ieffx_I^2m_B ω^2_B X_01 x_I;m_B ω^2_B X_01 x_I E_1 + 1/2 m_I ω^2_Ieffx_I^2 ]),for varying x_I. The resulting potential energy curves are presented in Fig. <ref>(a) and <ref>(b) for ω_I = 0.5 ω_B and ω_I = 4 ω_B respectively. In both cases m_I = 4 m_B and g = 5, while the two lowest-energy potential energy curves for g →∞ are also indicated by the dashed lines. Focusing especially in the case of a weaker parabolic potential, ω_I = 0.5, the development of an avoided crossing among the first two potential energy curves at x_I = 0 is evident, see Fig. <ref>(a). This crossing becomes an exact crossing for g →∞. In contrast to this behaviour, for higher trapping frequencies the development of this avoided crossing is not as pronounced due to the strong confinement of the impurity, see Fig. <ref>(b). In this case even for g →∞ the lowest potential energy curve is almost flat and thus we have a weak symmetry breaking in terms of the ⟨Ψ | x̂ | Ψ⟩ expectation values, even when a symmetry breaking perturbation is introduced.Figures <ref>(c) and <ref>(d) provide the potential energy curves for the same physical situations as for Fig. <ref>(a) and <ref>(b) respectively but within the multi-channel BO approach. We observe that for weak impurity confinement, ω_I = 0.5 ω_B, Fig. <ref>(c) showcases an avoided crossing among the first two potential energy surfaces for x_I = 0, similarly to the case of Fig. <ref>(a). However, in contrast to the two state E⊗ b model more structures reminiscent of avoided crossings appear for x_I ≠ 0. We conjecture that this occurs because additional degeneracies appear in the g →∞ case giving rise to additional instances of the pseudo Jahn-Teller effect. These are associated with many-body states where the bath atoms on the left and the right side of the impurity differ by one but possess equivalent energy (see also Sec. <ref>). This change in the confinement profile of the impurities leads to the density of the impurity being more localized in the multi-channel BO case, since the wells are narrower. Therefore, the symmetry breaking, which is associated to a doubly humped density structure as shown within Sec. <ref>, becomes less apparent than within the E ⊗ b approach. Finally, notice that due to the fact that more than two potential energy curves are considered in the multi-channel BO, additional avoided crossings emerge among the excited potential energy curves resulting to a more convoluted potential energy landscape. 
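As an illustration of how the E ⊗ b potential energy curves are obtained in practice, the following minimal Python sketch (not part of the original computation) diagonalizes the two-state effective potential V(x_I) on a grid of impurity positions; the values chosen for E_0, E_1, X_01 and the trap parameters are placeholder assumptions rather than the ones underlying the figures.

import numpy as np

# Placeholder parameters (assumed values, in harmonic-oscillator units of the bath).
m_I, m_B = 4.0, 1.0                # impurity and bath masses
omega_B, omega_Ieff = 1.0, 2.0     # bath frequency and effective impurity frequency
E0, E1 = 7.0, 7.3                  # energies of the two lowest bath states at x_I = 0
X01 = 0.4                          # off-diagonal matrix element <~Psi_0| sum_j r_j |~Psi_1>

x_grid = np.linspace(-2.0, 2.0, 401)
curves = np.empty((x_grid.size, 2))

for i, x in enumerate(x_grid):
    harm = 0.5 * m_I * omega_Ieff**2 * x**2
    V = np.array([[E0 + harm,                   m_B * omega_B**2 * X01 * x],
                  [m_B * omega_B**2 * X01 * x,  E1 + harm]])
    curves[i] = np.linalg.eigvalsh(V)   # adiabatic potential energy curves at this x_I

# The gap at x_I = 0 equals E1 - E0; a sufficiently strong linear coupling turns the
# crossing into an avoided one and can produce a double-well lower curve.
print("minimum of the lower curve at x_I =", x_grid[np.argmin(curves[:, 0])])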
Similarly to the E ⊗ b case the increase of ω_I leads to less prominent avoided crossings among the involved potential energy curves, compare Fig. <ref>(d) and <ref>(b). Therefore, in this case we do not expect an apparent symmetry breaking in the densities in accordance with the singly-peaked impurity densities identified in Fig. <ref>(b_i). However, the influence of the pseudo Jahn-Teller effect can be identified by carefully studying the impurity state, as we will demonstrate in Sec. <ref>. Before proceeding let us elaborate on the shortcomings of the crude BO approach that allow only a qualitative comparison among the E ⊗ b and the multi-channel BO approaches which already at this level show important discrepancies. First notice that within the crude BO approximation the coupling among the quasi-degenerate states is linear on both x̂_I and p̂_I due to the structure of the coupling Hamiltonian (<ref>) and the truncation on the two-lowest lying states at x_I = 0 spanning the quasi-degenerate subspace. Indeed, the multi-channel BO approximation reveals that the coupling among the bath and impurity states is significantly more complicated (not shown here for brevity) stemming from the modification of the bath state for different x_I encoded in | Ψ_j,B(x_I) ⟩, see Eq. (<ref>). Since the crude BO approach involves states independent on x_I an increasingly larger number of such states is required for capturing the behaviour of the system as the displacement of the impurity from x_I = 0increases. Therefore, a quantitatively accurate description of the system becomes numerically challenging and difficult to intuitively interpret. For this reason the E ⊗ b model presented above is expected to be valid only in the region of x_I ≈ 0 and, indeed, as Fig. <ref> reveals qualitative deviations to the multi-channel approach emerge beyond this regime. In addition, even for x_I ≈ 0 the potential energy curves cannot be compared directly due to the different gauge structure of the crude and multi-channel BO approaches. The gauge is characterized by the gauge field, A_jk(x_I), appearing in the non-adiabatic couplings i.e. the prefactors of d/d x_I in Eq. (<ref>) and Eq. (<ref>). It is related to the selection of |Ψ_i,B(x_I)⟩ states in the multi-channel BO ansatz, see Eq. (<ref>). It can be easily verified that A_01(x_I) ≠ P_01 and even after the diagonalization of the effective potential, V(x) of Eq. (<ref>), the corresponding gauge field A'_01(x_I) = P_01 + ∑_j=0^1 U^*_0j(x_I) d/dx_I U_j1(x_I),where U_jk(x_I) are the matrix elements of the unitary matrix stemming from the diagonalization of Eq. (<ref>), is different than the multi-channel BO approach, i.e. A_01(x_I) ≠ A'_01(x_I).§.§ Indications of Jahn-Teller effect in the impurity stateThe coupling induced by the pseudo Jahn-Teller effect can be identified by analyzing the contributions to the impurity state. To achieve this, we evaluate the expectation values of the operators P̂^ HO_k = | ψ^ HO_k,I⟩⟨ψ^ HO_k,I | ⊗𝕀̂_B, which project the state of the impurity to the k-lowest eigenstate of the harmonic oscillator with ℓ_I = √(ħ/(m_I ω_I)), while acting as an identity operator, 𝕀̂_B, for the bath species. The corresponding expectation values are related to the impurity one-body density via ⟨Ψ | P̂^ HO_k | Ψ⟩ = ⟨ψ^ HO_k,I | ρ̂^(1)_I | ψ^ HO_k,I⟩ and are summarized in Fig. <ref>. 
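Before discussing the results, we note how such projections can be evaluated numerically. The sketch below is a simplified, assumption-based illustration (the displaced Gaussian used as input is purely for demonstration): it contracts a one-body density matrix sampled on a grid with harmonic-oscillator eigenfunctions of length scale ℓ_I.

import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def ho_state(k, x, ell):
    """k-th harmonic-oscillator eigenfunction with length scale ell (physicists' convention)."""
    coeff = np.zeros(k + 1)
    coeff[k] = 1.0
    norm = 1.0 / sqrt(2.0**k * factorial(k) * ell * sqrt(pi))
    return norm * hermval(x / ell, coeff) * np.exp(-x**2 / (2 * ell**2))

def projector_populations(rho1, x, ell, kmax=4):
    """<psi_k| rho^(1) |psi_k> for k = 0..kmax, with rho1 sampled on the grid x."""
    dx = x[1] - x[0]
    pops = []
    for k in range(kmax + 1):
        phi = ho_state(k, x, ell)
        pops.append(float(phi @ rho1 @ phi) * dx**2)  # double integral over x and x'
    return pops

# Toy example: a pure impurity state slightly displaced from the trap centre (assumed input).
x = np.linspace(-6, 6, 601)
ell = 1.0
psi = ho_state(0, x - 0.15, ell)            # displaced Gaussian, for illustration only
rho1 = np.outer(psi, psi)                   # one-body density matrix of a pure state
print(projector_populations(rho1, x, ell))  # the k = 1 mode acquires a small population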
In all cases the largest contribution is the ground state of the harmonic oscillator k = 0, which is expected due to the parabolic form of the potential energy curve even for strong g, see Fig. <ref>(d). Our ab initio results, see Fig. <ref>(a), further reveal that the most strongly occupied out of the remaining harmonic oscillator levels is the k = 1 mode, with the remainder of the states providing significantly smaller contribution. Notice that the k = 1 mode is parity odd and thus its simultaneous contribution with the k=0 implies a state that is slightly displaced from x = 0 (i.e. a coherent state) in accordance to our arguments for the pseudo Jahn-Teller effect. The contribution of the k = 2 modes can be explained in terms of an effective modification of the confinement frequency of the impurity due to its interaction with the bath which, as claimed in Sec. <ref>, is small but non-zero.By comparing the contribution of the k=0 and k=1 states within different levels of approximation, see Fig. <ref>(b), we can see that the depletion of the k = 0 mode and the contribution of the k = 1 mode decrease as the accuracy of the approach increases. This is because the adiabatic BO approaches are affected the most by the modification of the potential energy curves. Notice also that the effect of the Born-Huang correction term is small since the VABO and NVABO approaches yield almost indistinguishable results for g < 2. This implies that the increase of the k = 1 mode is not caused by the development of the double-well structure in the effective potential identified in Fig. <ref>(c_2) and  <ref>(c_3), but it rather originates from the pseudo Jahn-Teller effect. As the correlations among different potential energy curves are included even partially, within the multi-channel BO approach, the population of the k = 1 mode decreases while it obtains a minimum but finite value within the fully correlated ML-MCTDHX approach. This is caused by the non-negligible contributions of non-adiabatic effects that increase the population of the excited state. As it can be seen already within the E ⊗ b approach the excited potential energy curves are more strongly confining at x_I = 0, see Fig. <ref>(b) (this effect is more prominent in Fig. <ref>(a) albeit for different m_I than the one used here), and thus the shift from zero of the impurity state is smaller, decreasing the population of the k = 1 mode.Figure <ref>(c) compares the behaviour of P^HO_k for varying m_I and ω_I. We observe that an increase of either m_I or ω_I leads to a decrease of the depletion of ⟨Ψ | P̂^ HO_0 | Ψ⟩ and a decrease of the contribution of ⟨Ψ | P̂^ HO_1 | Ψ⟩. Both of these tendencies can be explained by the fact that the impurity confining potential becomes tighter since d^2V/dx^2∝ m_I ω_I^2 and as a consequence the impurity gets more localized in the center of the trap competing with the pseudo Jahn-Teller effect promoting its displacement. Since this effect is quadratic on ω_I the effect of this parameter is more crucial than m_I. §.§ The Born-Huang term as a probe of non-adiabaticityBefore concluding let us address the important information gained by studying the Born-Huang term, V_11^ ren(x_I). Its profile is provided in Fig. <ref>(a) for m_I = 4 m_B, ω_I = 4 ω_B and varying interaction strength g. For all considered interactions, V_11^ ren(x_I) possesses an inverted parabola shape with additional potential peaks at specific points denoted as x_k, with k = 0, ± 1, ± 2. 
The amplitude of these peaks increases strongly with increasing value of g, becoming the dominant feature of V_11^ ren(x_I) at strong g, see Fig. <ref>(a) for g= 5. As we have claimed in Sec. <ref> this is the origin of the double well structure of the effective potential in the VABO approximation, see Eq. (<ref>), and it can be verified that for g = 5 the amplitude of the x_0 peak is much larger than the gap among the two lowest energy potential energy curves, see Fig. <ref>(d). However, as claimed in Sec. <ref> the effect of these potential peaks does not appear in the impurity densities within the exact approaches and therefore it is compensated by the non-adiabatic couplings in the system. Examining this term from an alternative viewpoint allows us to gain a deeper understanding regarding the non-adiabatic processes present in the system. As can be seen from Eq. (<ref>) the Born-Huang term corresponds to the change of the kinetic energy of the bath depending on the position of the impurity. This implies that the strong peaks of V_11^ ren(x_I) at x_k indicate that if the impurity resides in this region the momentum of the bath particles increases. The fast motion of bath particles can be thought of as a probe of non-adiabaticity in the system. To understand why this occurs, Fig. <ref>(b_i), with i = 1, 2, …, 5, depicts the wavefunctions of the five occupied orbitals, ϕ_i(x_B; x_I) of | Ψ_1, B (x_I) ⟩ corresponding to the lowest energy potential energy curve, where |Ψ_1,B(x_I)⟩ corresponds to a single Slater determinant (<ref>) owing to the fact that the bath is composed of spin-polarized fermions. By inspecting ϕ_1(x_B; x_I), see Fig. <ref>(b_1), we directly observe that within the region -3 < x_I ⪅ 0 the wavefunction gets localized on x_B > x_I, while close to x_I = 0 the x_B < x_I region starts to get occupied, resulting in an equal superposition at exactly x_I = 0. As x_I increases the region x_B > x_I loses its population and at x_I = 0.5 only the region x_I < x_B is populated. This is exactly what is expected from our discussion in Sec. <ref> regarding the avoided crossing due to the pseudo Jahn-Teller effect at x_I=0. This is not a feature specific to ϕ_1(x_B;x_I) but it occurs for all of the considered single particle states, see the dotted boxes of Fig. <ref>(b_i), with i = 1–5 at x_I ≈ 0. Surprisingly, we can observe that this bath transport through the impurity does not occur only for x_I = 0 but it appears also for non-vanishing impurity displacements, see e.g. the boxes of Fig. <ref>(b_2) at x_I ≠ 0. In this case a state with one node for x_B > x_I is coupled to the state without nodes for x_B < x_I as x_I ≈ 0.7 is approached. This reveals that further exact crossings might be possible in the g →∞ limit, giving rise to additional regions where the pseudo Jahn-Teller effect is exhibited for finite g. Thus it would be interesting to connect the avoided crossings exhibited in the potential energy curves, see Fig. <ref>(c), with the above-mentioned regions at x_I ≠ 0. The Born-Huang term can help us in this endeavor. In Fig. <ref>(b_5) we have indicated the positions of the peaks in V_11^ ren(x_I) on top of the profile of ϕ_5(x_B; x_I). It can be readily observed that these peaks correlate almost exactly with some of the points involving population transfer among the x_B < x_I and x_B > x_I regions.
This of course makes sense since such a population transfer implies the motion of bath particles through the impurity and thus the increase of the kinetic energy of the bath component. However, not all points where transfer happens are associated with an increase of V_11^ ren(x_I). This can be understood as follows. Notice that each one of the cases for ϕ_5(x_B; x_I) where no peak in V_11^ ren(x_I) occurs aligns with a transfer process in ϕ_4(x_B; x_I) (see the dashed boxes in Fig. <ref>(b_4) and the associated dashed lines connecting them to the boxes of Fig. <ref>(b_5)). Finally, notice that such regions occur also among lower-lying orbitals, see the boxes and connecting lines in Fig. <ref>(b_i) with i = 1–4. The above-discussed kinetic energy increase is a probe for non-adiabaticity in the system since at the regions of x_k, with k=0,± 1, ± 2, a strong non-adiabatic coupling among the ϕ_5(x_B; x_I) and ϕ_6(x_B; x_I) is exhibited, giving rise to a large value of A_01(x_I) (not shown here for brevity). In addition, we can confirm that the points x_± 1 capture well the position of the x_I ≠ 0 avoided crossings of the first two potential energy curves in Fig. <ref>(c), being consistent with the presence of the pseudo Jahn-Teller effect in this spatial region.§ SUMMARY AND OUTLOOK We have performed a comprehensive ground-state analysis of a fermionic few-particle setup consisting of five light fermionic particles interacting with a single heavy impurity. This setup demonstrates the failure of the adiabatic BO approximation. In particular, strong deviations are identified between the numerically exact ML-MCTDHX approach and the adiabatic BO approximation in the impurity energy and one-body density, as well as the correlation properties given by the two-body density and von Neumann entropy. These results indicate the presence of strong non-adiabatic effects in our system. In particular, we are able to interpret these results by introducing the inverse of the interaction strength as a synthetic dimension and analyzing the emergence of the Jahn-Teller effect in the strong interaction limit of our Fermi impurity system. Based on this we have shown that our system approximately maps to an E ⊗ b system and thus exhibits the pseudo Jahn-Teller effect for finite interactions, associated with the breaking of the parity symmetry of the impurity state. An increasing interaction strength between the bath and impurity atoms leads to strong “vibronic” couplings between the slow degrees-of-freedom of the impurity and the fast motion of the bath atoms, explaining the previously identified non-adiabatic effects. By examining the potential energy curves, we demonstrate the existence of at least one conical intersection at the trap center and for infinitely strong interactions. Notably, we have shown that the Born-Huang term of the lowest energy potential energy curve can be employed as a measure for the non-adiabaticity of the system, indicating resonant transport of the bath particles through the impurity. Based on the above results several pathways of future research become evident. First, by considering the time propagation of the fermionic impurity system, the possibility arises of probing the pseudo Jahn-Teller effect during the impurity dynamics.
This can be achieved by shifting the center of the harmonic trapping potential for the impurity and tracking the induced impurity dynamics, in terms of its dipole and breathing modes.Considering similar quasi-particle systems, the question arises about symmetry breaking effects in polarons, which can be investigated also for higher dimensional systems. One intriguing theoretical question in this context is the relation of the non-adiabaticity unveiled here with the well-known Anderson orthogonality catastrophe <cit.>. This work has been funded by the Cluster of Excellence “Advanced Imaging of Matter” of the Deutsche Forschungsgemeinschaft (DFG) - EXC 2056 - project ID 390715994. G. M. K. gratefully acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.  101034413. § DETAILS ON THE CALCULATION OF NON-ADIABATIC COUPLINGS WITHIN THE MULTI-CHANNEL BO APPROACHThe assumption of a spin-polarized fermionic impurity species greatly simplifies the complexity of evaluation of the non-adiabatic couplings and the potential renormalization terms appearing in Eq. (<ref>). In particular, since the species B does not possess interspecies interactions its eigenstates for fixed impurity position, | Ψ_k,B(x_I) ⟩, can be exactly represented in terms of a single Slater determinantΨ_k,B (x_1,…,x_N;x_I) =1/√(N!)∑_j = 1^N! sign(P_j )×ϕ_P_j(I^k_1)(x_1; x_I) …ϕ_P_j(I^k_N)(x_N; x_I),where ϕ_j(x_B; x_I) for j = 1, 2, … are the N_B = 1 eigenstates of Ĥ_B + Ĥ_BI for a fixed value of x_I with eigenenergy ε^(1)_j(x_I). I_j^k, with j = 1, …, N parameterize the particular set of orbitals corresponding to the k-th lowest in energy, ε_k(x_I) = ∑_j=1^Nε^(1)_I^k_j(x_I), eigenstate of the many-body system. In order for this orbital set to be unique, we demand that it is ordered in ascending order, i.e. I^k_j < I^k_j' for j < j'. Notice that the above is exactly equivalent to the standard prescription of how the many-body eigenstates of non-interacting fermions are mapped to the corresponding number states <cit.>.The Slater determinant of Eq. (<ref>), allows us to express the non-adiabatic couplings of the many-body bath species in terms of the N_B=1 orbitals ϕ_j(x_B;x_I). To make the presentation of this process clearer, let us introduce the notation where the many-body eigenstates | Ψ_k,B(x_I) ⟩ are expressed in terms of the occupied orbitals as | Ψ_{I^k_1, …, I_N^k }(x_I) ⟩. Within this notation the non-adiabatic derivative coupling matrix readsA_kl(x_I)= A_{I^k_1, …, I_N^k }{I^l_1, …, I_N^l }(x_I) =i ⟨Ψ_{I^k_1, …, I_N^k }(x_I) | dΨ_{I^l_1, …, I_N^l }/d x_I(x_I) ⟩.By employing Eq. (<ref>)we deriveA_{I_1,…,I_N}{J_1,…,J_N}(x_I) = i ∑_l = 1^N∑_k = 1^N (-1)^k+l⟨ϕ_I_k(x_I) | dϕ_J_l/d x_I(x_I) ⟩×δ_{I_1, …, I_k-1, I_k+1, …, I_N }{J_1, …, J_l-1, J_l+1, …, J_N },where the Kronecker delta symbol δ_S_1S_2 yields zero for two different sets S_1 and S_2, and one in the case they are equivalent. For a trapped system, such as the one described by Eq. (<ref>), it is known that the eigenbasis consists of real valued ϕ_j(x_B;x_I) states <cit.>. Then by considering that the momentum operator for the impurity species is Hermitian, we obtain that the single-particle eigenfunctions fulfill⟨ϕ_j(x_I) | dϕ_j/d x_I(x_I) ⟩ = 0.This fact allows us to simplify the expression of Eq. (<ref>) further. In particular, A_{I_1,…,I_N}{J_1,…,J_N}(x_I) vanishes except in the case where a single orbital in the sets {I_1,…,I_N} and {J_1,…,J_N} is different. 
The non-vanishing elements readA_{I_1,…,I_N}{I_1,…,I_k-1,I_k+1,…,I_l-1,J_l,I_l+1,…,I_N}(x_I)= i (-1)^k + l⟨ϕ_I_k(x_I) | dϕ_J_l/d x_I(x_I) ⟩,where J_l ≠ I_k.According to Eq. (<ref>) the renormalization potential V_kl^ren(x_I) can be written asV_kl^ren(x_I) = ħ^2/2 m_I[B_k l(x_I) - ∑_r = 1^M A_r k^*(x_I) A_r l(x_I) ],where the B matrix elements readB_k l(x_I) = ⟨dΨ_k,B/d x_I(x_I) | dΨ_l,B/d x_I(x_I) ⟩.An analogous procedure to the one applied above for the non-adiabatic derivative couplings (see Eq. (<ref>)) yields that the B matrix elements readB_{I_1,…,I_N}{J_1,…,J_N}(x_B)=∑_k=1^N ∑_l = 1^N (-1)^k + l⟨dϕ_I_k/d x_I(x_I) | dϕ_J_l/d x_I(x_I) ⟩×δ_{I_1,…,I_k-1,I_k+1,…,I_N}{J_1,…,J_l-1,J_l+1,…,J_N}- i ∑_k=1^N ∑_l = 1^N (-1)^k + l⟨dϕ_I_k/d x_I(x_I) | ϕ_J_l(x_I) ⟩× A_{I_1,…,I_k-1,I_k+1,…,I_N}{J_1,…,J_l-1,J_l+1,…,J_N}(x_I).Owing to the properties of the non-adiabatic derivative couplings, see Eq. (<ref>), we can distinguish three different cases where the B matrix elements are non-vanishing. First, if both sets are equivalent, we obtainB_{I_1,…,I_N}{I_1,…,I_N}(x_I)= ∑_k=1^N ⟨dϕ_I_k/d x_I(x_I) | dϕ_I_k/d x_I(x_I) ⟩- ∑_k=1^N ∑_l = 1l ≠ k^N ⟨ϕ_I_l(x_I) | dϕ_I_k/d x_I(x_I) ⟩^2.Second, in the case that the sets are different by a single orbital, we have B_{I_1,…,I_N}{I_1,…,I_k-1,I_k+1,…,I_l,J_l,I_l+1,…, I_N}(x_I) =(-1)^k + l[ ⟨dϕ_I_k/d x_I(x_I) | dϕ_J_l/d x_I(x_I) ⟩ -∑_r = 1r ≠ k^N⟨ϕ_I_r(x_I) | dϕ_I_k/d x_I(x_I) ⟩×⟨ϕ_I_r(x_I) | dϕ_J_l/d x_I(x_I) ⟩],where we demand J_l ≠ I_k. Finally, the last case that B is non vanishing arises when the two sets differ by two indices. Here, the B matrix elements readB_{I_1,…, I_N}{I_1,…,I_k-1,I_k+1,…,_I_l-1,I_l+1,…,I_r+1,J_r,I_r+2,…,I_s+1,J_s,I_s+2,…,I_N}(x_I) = (-1)^k + r + l + s× ( ⟨ϕ_J_r(x_I) | dϕ_I_k/d x_I(x_I) ⟩⟨ϕ_I_l(x_I) | dϕ_J_s/d x_I(x_I) ⟩-⟨ϕ_J_s(x_I) | dϕ_I_k/d x_I(x_I) ⟩⟨ϕ_I_l(x_I) | dϕ_J_r/d x_I(x_I) ⟩-⟨ϕ_J_r(x_I) | dϕ_I_l/d x_I(x_I) ⟩⟨ϕ_I_k(x_I) | dϕ_J_s/d x_I(x_I) ⟩+⟨ϕ_J_s(x_I) | dϕ_I_l/d x_I(x_I) ⟩⟨ϕ_I_k(x_I) | dϕ_J_r/d x_I(x_I) ⟩),where each of the indices J_r and J_s has to be different to both I_k and I_l. The representation of the non-adiabatic derivative coupling (<ref>) and the renormalization potential (<ref>) allow for the numerical solution of the effective Schrödinger equation (<ref>).§ INTER-SPECIES ENTANGLEMENT WITHIN THE ADIABATIC BORN-OPPENHEIMER APPROXIMATIONA perhaps not well-known fact regarding the adiabatic BO approximation is that it involves entanglement between the slow (nuclear) and fast (electronic) degrees of freedom. This aspect has also been noted in the recent quantum chemical literature <cit.>. The purpose of this section is to demonstrate that this entanglement effect also appears in our two-species one-dimensional system and provides a proper mathematical framework for our arguments in Sec. <ref>. Our starting point is the reduced bath-density matrix within the adiabatic BO approximation where the impurity degrees-of-freedom have been traced out,ρ̂^(N_B)_B = ∫dx_I |Ψ_1,I(x_I)|^2 | Ψ_1,B (x_I)⟩⟨Ψ_1,B (x_I)|.The above equation reveals that the impurity localization controls the degree of entanglement. In particular, it can be easily verified that for Ψ_1,I(x_I) = δ(x_I - x_I^0) the state of the bath is pure and thus there is no entanglement in the system.In the general case it is difficult to analyze the entanglement properties of the system. 
However, the assumption of a heavy m_I ≫ m_B and tightly confined ω_I ≫ω_B impurity enable us to employ the approximation that the bath state, | Ψ_1,B(x_I) ⟩, changes at much longer length scales than Ψ_1,I(x_I) defining the size of the impurity state. Indeed, the length scale associated to each species is ℓ_σ = √(ħ/(m_σω_σ)) and thus ℓ_B controlling the x_I-dependence of | Ψ_1,B (x_I)⟩ is much longer than the spatial region where the impurity localizes ∼ℓ_I. This allows us to expand the bath state in a Taylor series around the equilibrium position of the impurity x_I = 0 as| Ψ_1,B (x_I)⟩ = | Ψ_1,B (0)⟩ + x_I ∂/∂ x_I| Ψ_1,B (x_I)⟩|_x_I = 0+ 1/2 x_I^2 ∂^2/∂ x_I^2| Ψ_1,B (x_I)⟩|_x_I = 0 + 𝒪( x_I^3 ).This expansion allows us to evaluate the integral appearing in Eq. (<ref>) order by order. Since we operate in the regime ℓ_I ≪ℓ_B, it is reasonable to consider that ⟨Ψ_1,I | x̂_I^n | Ψ_1,I⟩/ℓ_B^n is a decreasing sequence in n and thus we consider that only its first three terms for n = 0, 1, 2 are non-negligible. Within this approximation the bath-density operator readsρ̂_B^(N_B) = | Ψ_1,B (0) ⟩⟨Ψ_1,B (0) | + ⟨Ψ_1,I | x̂_I | Ψ_1,I⟩×(| Ψ_1,B (0) ⟩⟨∂Ψ_1,B/∂ x_I (0) | . + . | ∂Ψ_1,B/∂ x_I (0) ⟩⟨Ψ_1,B (0) |)+ ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩( | ∂Ψ_1,B/∂ x_I (0) ⟩⟨∂Ψ_1,B/∂ x_I (0) |- 1/2 | Ψ_1,B (0) ⟩⟨∂^2 Ψ_1,B/∂ x_I^2 (0) | - 1/2| ∂^2 Ψ_1,B/∂ x_I^2 (0) ⟩⟨Ψ_1,B (0) | ) +𝒪( ⟨Ψ_1,I | x̂_I^3 | Ψ_1,I⟩).This expression allows us to determine the matrix elements of ρ̂_B^(N_B) in the basis defined by the eigenstates of Ĥ_B + Ĥ_BI (see Eq. (<ref>)) for an impurity fixed at x_I = 0, namely | Ψ_j,B (0) ⟩, for j = 1, 2, …. Notice that since this basis is complete and independent of the position of the impurity state enabling us to employ the usual definitions for calculating the Schmidt modes and von Neumann entropy. Within this prescription the above-mentioned matrix elements read⟨Ψ_1,B (0)| ρ̂_B^(N_B) | Ψ_1,B (0) ⟩ =1 - ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩∑_l = 1^∞ | A_1 l(0) |^2 +𝒪( ⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩),⟨Ψ_1,B (0)| ρ̂_B^(N_B) | Ψ_j,B (0) ⟩ =-1/2⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩∑_l = 1^∞ A^*_jl(0) A^*_l1(0) +𝒪( ⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩),⟨Ψ_j,B (0)| ρ̂_B^(N_B) | Ψ_k,B (0) ⟩ =⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩ A_j1(0) A_k1^*(0),+𝒪( ⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩),where j, k=2, 3, …. The non-adiabatic couplings appear in Eq. (<ref>) since by definition they are equal to A_kj(x_I) = i ⟨Ψ_k,B(x_I) | ∂Ψ_j,B/∂ x_I(x_I) ⟩. Moreover, we have used the completeness property of the | Ψ_j,B (x_I) ⟩ states to simplify ⟨Ψ_k,B(x_I) | ∂^2 Ψ_j,B/∂ x_I^2(x_I) ⟩ = - ∑_l = 1^∞ A_k l(x_I) A_l j(x_I) and the parity symmetry for the impurity species yielding ⟨Ψ_1,I | x̂_I^2n+1 | Ψ_1,I⟩ = 0, for all n. Within first order perturbation theory for the dominant Schmidt mode we obtainλ_1 =1 - ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩∑_l = 1^∞ |A_1 l(0)|^2 + 𝒪 (⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩).While degenerate first order perturbation theory for the remaining modes yieldsλ_2 = ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩∑_l = 1^∞ |A_1 l(0)|^2 + 𝒪 (⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩),λ_k = 𝒪 (⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩), for k=3, 4, …,It can be easily verified that higher order perturbative corrections yield higher order terms in ⟨Ψ_1,I | x̂_I^n | Ψ_1,I⟩ and are consequently negligible according to our arguments. Therefore, we can verify that the entanglement even in the case of an impurity density with very small non-zero width is finite, as the von Neumann entropy readsS_VN = ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩∑_l = 1^∞ |A_1 l(0)|^2 ×[ 1 - log( ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩∑_l = 1^∞ |A_1 l(0)|^2 ) ]+ 𝒪 (⟨Ψ_1,I | x̂_I^4 | Ψ_1,I⟩). 
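The leading-order expressions above translate directly into a short numerical estimate. The following sketch (an illustrative reconstruction with assumed input numbers, not the actual data of the figures) evaluates the dominant Schmidt modes and the resulting von Neumann entropy from the impurity width and the non-adiabatic couplings at x_I = 0.

import numpy as np

def entanglement_entropy_estimate(x2_expect, couplings_at_0):
    """Leading-order von Neumann entropy for a tightly trapped impurity.

    x2_expect      : <Psi_1,I| x_I^2 |Psi_1,I>, the squared width of the impurity state
    couplings_at_0 : iterable of non-adiabatic couplings A_{1l}(x_I = 0)
    """
    s = x2_expect * np.sum(np.abs(np.asarray(couplings_at_0))**2)
    lam = np.array([1.0 - s, s])              # dominant and first subdominant Schmidt modes
    lam = lam[lam > 0]
    return float(-np.sum(lam * np.log(lam)))  # equals s * (1 - log s) to leading order in s

# Illustrative (assumed) numbers: impurity width 0.25 l_B and a few weak couplings.
print(entanglement_entropy_estimate(0.25**2, [0.08, 0.03, 0.01]))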
Let us now comment on the above results. We see that there are only two parameters that enter the von Neumann entropy for a narrow impurity density distribution: its width in terms of ⟨Ψ_1,I | x̂_I^2 | Ψ_1,I⟩ and the sum of non-adiabatic couplings ∑_l = 1^∞ |A_1 l(0)|^2. Notice that the latter quantity is related to the norm of the derivative of | Ψ_1, B (x_I) ⟩, namely || ∂/∂ x_I| Ψ_1,B (x_I) ⟩|| = √(∑_l = 1^∞ |A_1 l(x_I)|^2). Therefore, we can conclude that S_VN increases when the impurity width increases, see also Fig. <ref>(b), and when the bath state becomes more strongly dependent on x_I, i.e. for higher g, see Fig. <ref>(b). The latter can be independently verified by considering the Feynman-Hellmann theoremA_jk(x_I) =⟨Ψ_j, B (x_I) | ∂Ĥ_BI/∂ x_I | Ψ_k, B (x_I)⟩/ε_j(x_I) - ε_k(x_I).This shows that the non-adiabatic couplings should increase with interaction since, first, the numerator is ∝ g and, second, the energy differences in the denominator decrease due to the closing of the gaps among the even and odd single particle bath eigenstates as the g →∞ limit is approached, see also Appendix <ref>.§ CONVERGENCE BEHAVIOUR OF ML-MCTDHX VERSUS TBH To elucidate the degree of convergence of the employed ab-initio variational approaches, namely the ML-MCTDHX and the multi-channel BO methods, we present here a comparative analysis of the dependence of their results on the corresponding parameters determining their numerical accuracy. As discussed in Sec. <ref> the parameters that define the multi-layered truncation of the many-body wavefunction are given by the orbital configuration C = (D; d^B; d^I) and the choice of the primitive basis and its size ℳ. First, since the primitive basis size hardly affects the CPU time of the ML-MCTDHX calculations we have selected a large enough basis of ℳ = 150 grid points of the harmonic oscillator DVR with ω_ DVR = 0.72 ω_B, which is enough for convergence for the employed interaction scales. Second, due to the fact that we only consider a single impurity we can simplify the ansatz to C = (D; d^B; D) since at least D basis states for the impurity species are required to give rise to D distinct Schmidt modes, see Eq. (<ref>). Thus, below we analyze the convergence of the ML-MCTDHX approach only in terms of the truncation of entanglement modes, controlled by D, and the truncation of bath orbitals d^B. Regarding the multi-channel BO method, the main parameter that controls the truncation is the choice of M in Eq. (<ref>), which corresponds to the number of potential energy curves in Eq. (<ref>). Additionally, the quality of the calculated non-adiabatic derivative couplings A_kl(x_I) and potential renormalization V^ ren_k l(x_I) is dictated by the choice of the primitive basis. Here we have chosen an exponential DVR with ℳ = 1024 points permitting derivative evaluations by the fast Fourier transform. By detailed analysis of the corresponding A_kl(x_I) and V^ ren_k l(x_I) matrices we have deemed that this choice is adequate for the convergence of the corresponding matrices. Finally, we compare the results of both variational methods with the Configuration Interaction (CI) (or exact diagonalization) approach with energy pruning, which has recently attracted considerable attention in the few-fermion literature <cit.>. Within this approach we generate the set of all number states with non-interacting energy less than a given energy cutoff, E_ cut, and then diagonalize the Hamiltonian, Eq. (<ref>), in the subspace spanned by them.
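A minimal sketch of how such an energy-pruned number-state basis can be enumerated is given below (our own illustrative reconstruction with assumed values of N_B, the trap frequencies and E_cut); the explicit Hamiltonian and the pruning condition are written out in the next paragraph.

from itertools import combinations, count

def pruned_basis(n_bath, omega_B, omega_I, e_cut):
    """Number states |n_1^B,...,n_N^B; n^I> with non-interacting energy below e_cut.

    Each bath level is occupied at most once (spin-polarized fermions); hbar = 1.
    """
    n_max = int(e_cut / omega_B) + 1                     # bath levels above this cannot appear
    basis = []
    for n_imp in count(0):
        e_imp = (n_imp + 0.5) * omega_I
        if e_imp + (n_bath**2 / 2) * omega_B >= e_cut:   # even the lowest bath configuration is too high
            break
        for occ in combinations(range(n_max), n_bath):
            e_bath = (n_bath / 2 + sum(occ)) * omega_B
            if e_bath + e_imp < e_cut:
                basis.append((occ, n_imp))
    return basis

# Assumed parameters for illustration: N_B = 5 bath fermions, omega_I = 4 omega_B,
# E_cut = 30 (in units of hbar*omega_B).
states = pruned_basis(n_bath=5, omega_B=1.0, omega_I=4.0, e_cut=30.0)
print(len(states), states[0])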
In our case, the Hamiltonian in the space spanned by the non-interacting harmonic oscillator functions, ψ_n,σ(x), readsĤ = ∑_n = 0^∞ħω_B ( n + 1/2) ĉ^†_n, Bĉ_n, B+∑_n = 0^∞ħω_I ( n + 1/2) ĉ^†_n, Iĉ_n, I+∑_n,l,m,k = 0^∞ U_n l m kĉ^†_n, Bĉ^†_l, Iĉ_m, Iĉ_k, B,where ĉ^†_n,σ and ĉ_n,σ are the operators that create and annihilate a species σ particle in the n-th single particle eigenstate respectively and U_n l m k = ∫ dx ψ^*_n,B(x)ψ^*_l,I(x)ψ_m,I(x)ψ_k,B(x), which can be calculated efficiently via Gaussian quadrature. As the basis of the many-body subspace we use the number states | n_1^B, n_2^B, …, n^N_B; n^I ⟩, which satisfy ⟨ n_1^B, n_2^B, …, n^N_B; n^I | Ĥ | n_1^B, n_2^B, …, n^N_B; n^I ⟩ = ( N/2 + ∑_j = 1^N n_j^B ) ħω_B + (n^I + 1/2) ħω_I < E_ cut. Notice that this choice implies that there is a maximum value of n_i^B and n^I that appears in this subspace and as a consequence we have to calculate a finite number of U_nlmk elements. The only non-diagonal terms in this basis correspond to the interaction terms of Eq. (<ref>). The matrix elements of ĉ^†_n, Bĉ^†_l, Iĉ_m, Iĉ_k, B can be evaluated by using the indexing rules of fermionic states described in Ref. <cit.>. From the above it is clear that the only approximation the energy-pruned CI uses is the value of the energy cutoff E_ cut that determines the size of the corresponding subspace where the Hamiltonian of Eq. (<ref>) is diagonalized. To estimate the convergence pattern of the multi-channel BO we compare how the energy and von Neumann entropy converge to the exact value as the number of potential energy curves increases. We observe that already for M=2 the multi-channel BO approach possesses a significantly lower energy than the energy-pruned CI with E_ cut = 70 ħω_B and is on par with ML-MCTDHX with orbital configuration C = (10; 10; 10), see Table <ref>, for all interaction strengths we have studied, see Fig. <ref> (a_i) with i=1,2,3. Energy convergence is observed for M > 24, where it yields almost the same energy as ML-MCTDHX with orbital configuration C = (12, 18, 12). In contrast, even the less accurate versions of the ML-MCTDHX and CI possess a von Neumann entropy much closer to their converged results than the corresponding multi-channel BO result for M = 2, see Table <ref> and Fig. <ref> (b_i) with i=1,2,3. In particular, we observe that multi-channel BO does not show von Neumann entropy convergence, with the value of S_VN improving only by a factor of roughly two with respect to the other two approaches even for the largest M = 40 value we have studied. Therefore, we rely on ML-MCTDHX for our exact numerical results. Nevertheless, this comparison confirms the accuracy and convergence of both used methods against the same result for increasing accuracy control parameters.§ PROOF OF THEOREM <REF> The outline of the proof of Theorem <ref> is as follows. First we generate a complete many-body basis for the (N_B + 1)–body system. Then we show that the parameter Δ N characterizing this basis is a good quantum number for the Hamiltonian of Eq. (<ref>). Finally, by using the parity symmetry property of Eq. (<ref>) we demonstrate that all eigenstates with Δ N ≠ 0 are necessarily degenerate with at least one eigenstate with the opposite sign of Δ N. This proves Theorem <ref> for N_B odd since in this case Δ N = 0 is impossible. We begin the proof of Theorem <ref> by generating a complete many-body basis for the (N_B + 1)–body system. For g →∞ the single-particle system defined by Eq.
(<ref>) for N_B = 1, splits into two subsystems referring to r > 0 and r < 0. These are separated by an impenetrable wall at r = 0, where the wavefunctions have to vanish. Therefore, the single-particle eigenstates of the system can be expressed in terms of the eigenstates of the individual subsystems, ψ_j L(r) ≠ 0 for r < 0 and ψ_j R(r) ≠ 0 for r > 0 with j = 0, 1, …. Subsequently, the Hamiltonian of the many-body system can be expanded in terms of the number-states (Slater determinants) spanned by ψ_j L(r) and ψ_j R(r). Each of these many-body states is characterized by a definite value of the particle imbalance among the subsystems, Δ N = N_L - N_R, where N_L and N_R are the number of particles in the left and right subsystem respectively. Of course, notice that N_L + N_R = N_B holds.Then we continue by showing that Δ N is a good quantum number for the Hamiltonian of Eq. (<ref>). It can be proven that for any two Slater determinants, |Ψ_k ⟩, involving the contribution of the single-particle states I^k_m ∈ (ℕ, {L, R}), with m = 1, 2, …, N_B, the derivative interaction term reads ⟨Ψ_k | Ĥ_P_ CM | Ψ_k'⟩ = -ħ^2/m_I∑_n = 1^N ∑_l = 1^N ∑_m = 1^n-1∑_r = 1^l-1 (-1)^n + m + l + r×(⟨ψ_I^k_l| ∂/∂ r| ψ_I^k'_n⟩⟨ψ_I^k_r| ∂/∂ r| ψ_I^k'_m⟩       - ⟨ψ_I^k_r| ∂/∂ r| ψ_I^k'_n⟩⟨ψ_I^k_l| ∂/∂ r| ψ_I^k'_m⟩) ×δ_ {I_1^k, …, I_r-1^k, I_r+1^k, …, I_l-1^k, I_l+1^k, …, I_N^k}{I_1^k', …, I_m -1^k', I_m+1^k', …, I_n-1^k', I_n+1^k', …, I_N^k'}-ħ^2/m_I∑_n = 1^N ∑_l = 1^N (-1)^n + m⟨ψ_I^k_m| ∂^2/∂ r^2| ψ_I^k'_n⟩×δ_ {I_1^k, …, I_m-1^k, I_m+1^k, …, I_N^k}{I_1^k', …, I_n-1^k', I_n+1^k', …, I_N^k'}, where the Kronecker delta symbol δ_S_1 S_2 yields zero for two different sets S_1 and S_2, and one in the case that they are equivalent. Notice that since ψ_j L(r) and ψ_j R(r) are non-vanishing in entirely separated spatial domains ⟨ψ_j L | ∂/∂ r |ψ_k R⟩ = 0 holds for all j and k. This fact can also be verified independently by evaluating the limits of the analytical solutions for finite g in the case g →∞, see Appendix <ref>. Therefore, the quantity inside the parenthesis of Eq. (<ref>) vanishes if there is a different number of L and R states among the {ψ_I^k_l(r), ψ_I^k_r(r)} and{ψ_I^k'_n(r), ψ_I^k'_m(r)} sets. This implies that if Δ N is different for | Ψ_k ⟩ and | Ψ_k'⟩ then the corresponding interaction matrix element, Eq. (<ref>), is zero. Thus a given number-state couples only with number states with the same value of Δ N and consequently Δ N is a good quantum number. This implies that eigenstates of Ĥ_0r + Ĥ_P_ CM can be characterized in terms of Δ N.Then by considering the symmetry properties of Eq. (<ref>) Theorem <ref> can be explicitly proven. Since, Ĥ_0r is parity symmetric [Ĥ_0r, 𝒫̂_r] = 0 holds, where 𝒫̂_r r_i = - r_i. Let us assume an eigenstate ( Ĥ_0r + Ĥ_P_ CM ) | Ψ̃_k ⟩ = E_k | Ψ̃_k ⟩ with definite value of Δ N = Δ N_k. Due to the fact that 𝒫̂_r ψ_j L(r) = ψ_j R(r) (for an appropriate choice of the overall phases of the involved single-particle states), the action of 𝒫̂_r on | Ψ̃_k ⟩ results in the shift of particle imbalance Δ N_k → - Δ N_k. This shows that for Δ N_k ≠ 0 the eigenstates | Ψ̃_k ⟩ and 𝒫̂_r | Ψ̃_k ⟩ are distinct and degenerate. Therefore, owing to the fact that Δ N = 0 can only hold for even N_B, the ground state of Ĥ_0r is always degenerate for odd N_B and in the g →∞ limit independently of all other system parameters, which proves the theorem. 
In the case of even N_B the above would hold only if the ground state possesses Δ N_k≠ 0, but since in this situation Δ N_k = 0 is possible, a degeneracy does not necessarily occur. § SINGLE-PARTICLE PROPERTIES OF Ĥ_0R The symmetry properties of Ĥ_0r, Eq. (<ref>), were extensively analyzed in Sec. <ref> and Appendix <ref> where several theoretical insights were obtained without having to consider the precise form of its eigenspectrum. The purpose of this Appendix is to review the basic properties of the analytically-obtained single-particle (N_B=1) eigenspectrum of Ĥ_0r, which will be used in Appendix <ref> to illustrate the emergence of a conical intersection in the vicinity of 1/g=x_I=0. The eigenfunctions of Ĥ_0r for N_B=1 read <cit.>ψ^ B_2 n(r;g) = A_n(g) Γ(-ϵ_n(g))/2 √(πℓ_ B) U( -ϵ_n(g), 1/2,r^2/ℓ_ B^2) e^-r^2/2 ℓ_ B^2,ψ^ B_2 n + 1(r;g) = ψ^ B_2 n + 1(r) = (π l_B^2)^-1/2/√(2^2 n+1 (2 n + 1)!) H_2 n+1( r/ℓ_B) e^-r^2/2 ℓ_ B^2,where H_n(x) denotes the n-th degree Hermite polynomial, Γ(x) is the gamma function and U(α, β, x) the confluent hypergeometric function. The relevant length scale is ℓ_ B = √(ħ/(m_B ω_B)). The normalization factor of parity-even states readsA_n(g) =2 √(Γ( 1/2 - ϵ_n(g) )/Γ(-ϵ_n(g))1/ψ( 1/2 - ϵ_n(g) ) - ψ(-ϵ_n(g))),where ψ(x) is the digamma function. Finally, the effective order ϵ_n(g) satisfies the consistency equationΓ( 1/2 - ϵ_n(g) )/Γ(-ϵ_n(g)) = - g/2√(m_B/ħ^3 ω_B).This self-consistency equation shows that n/2≤ϵ_n(g) ≤n+1/2 for n ≥ 1 and -∞ < ϵ_0(g) ≤ 1/2. The upper bound of these inequalities gets saturated for g → + ∞ while the lower saturates for g → - ∞. The energy of the system is a function of the effective order and reads E_2 n^ B(g) = ħω_r (2 ϵ_n(g) + 1/2 ) for parity-even states, while E^B_2n+1(g) = E^B_2n+1= ħω_r (2 n + 1 +1/2 ) for parity-odd states. The position, R_n,m(g) ≡⟨ψ^B_n(g) | x̂ |ψ^B_m(g) ⟩, and momentum, P_n,m(g) ≡⟨ψ^B_n(g) | p̂ |ψ^B_m(g) ⟩, single-particle matrix elements are non-vanishing only in the case that states of different parity are involved. The non-zero elements of these matrices readR_2λ + 1,2κ(g) = (-1)^λℓ_B A_κ(g)/√(2)π^1/2√((2λ +1)!)/2^λ + 1λ!×1/(λ - ϵ_κ(g))(λ + 1 - ϵ_κ(g)), P_2λ + 1,2κ(g) = i (-1)^λħ A_κ(g)/√(2)ℓ_B π^1/2√((2λ +1)!)/2^λ + 1λ!×2 λ + 1 - 2 ϵ_κ(g)/(λ - ϵ_κ(g))(λ + 1 - ϵ_κ(g)),for λ,κ = 0, 1, …. The final property of | ψ_n^ B(g) ⟩ relevant for us is their transformation behavior for a shift of g → g', unveiled in Ref. <cit.>| ψ_2n^ B(g') ⟩ = ∑_m=0^∞A_n(g') A_m(g)/E_2m^ B(g)-E_2n^ B(g')(1/g' - 1/g) | ψ_2m^ B(g) ⟩, | ψ_2n+1^ B(g') ⟩ = | ψ_2n+1^ B(g) ⟩. Let us briefly focus on the regime of g →∞, which is particularly interesting for discussing the pseudo Jahn-Teller effect. In this case we can obtain an analytic asymptotic expression for the effective order that readsϵ_n + s(g) = 2 n + 1/2 - ℰ_n g_0/g + 𝒪( g_0^2/g^2),where ℰ_n = [2(n+1)]!/[2^2n+1 n! (n+1)! √(π)] and s = 0 for g>0, s = 1 for g<0. This shift in the index accounts for the continuous transformation of the ψ^B_2n(x) state to the ψ^B_2(n+1)(x) one as the g →∞ (Tonks-Girardeau <cit.>) limit is crossed from strong repulsive to attractive interactions. Notice that we do not provide an expression for ϵ_0 as g → -∞ since in this case the bound-state energy diverges as ϵ_0 → -∞. Finally, by using Eq.
(<ref>) we can show thatψ_2(n+σ)^B(r;g) = ψ^B_2 n + 1(|r|) + g_0/g∑_m = 0m≠ n^∞√(ℰ_n ℰ_m)/m - nψ^B_2 m + 1(|r|) + 𝒪( g_0^2/g^2).The description of the behavior of the system for strong interactions can be greatly simplified by transforming to a basis where the fermions are localized as much as possible on the left or right side of the x=0 barrier. This can be achieved by the following unitary transformationψ_κ L(r;g)= - ψ^B_2 κ + 1(r;g) + (-1)^κψ^B_2 κ(r;g)/√(2),ψ_κ R(r;g)= ψ^B_2 κ + 1(r;g) - (-1)^κψ^B_2 κ(r;g)/√(2).By the use of Eq. (<ref>) we can verify several important properties of these maximally localized states which will be elucidated further in Appendix <ref>, see Eq. (<ref>).§ SYNTHETIC CONICAL INTERSECTION AT 1/G=X_I=0 Based on the analytic properties of the N_B = 1 eigenstates of Ĥ_0r (see Appendix <ref>), characterizing the system at x_I = 1/g =0, and by employing perturbation theory we can demonstrate that our system maps to a E ⊗ϵ in the vicinity of x_I = 1/g =0 [We denote here the E⊗ϵ case, since we are considering here both x_I and 1/g as synthetic coordinates <cit.>. For a fixed 1/g the system reduces to the E⊗ b model as in Sec. <ref>.]. Such a proof is convoluted in the case that m_I is finite. However, in the infinite impurity mass case the physical situation is substantially simplified. Then by simple numerical arguments we can demonstrate that the finite m_I case behaves similarly to m_I →∞ provided that m_I > m_B.The main simplification for m_I →∞ is that Ĥ_P_ CM vanishes and as a consequence the eigenstates for g →∞ can be expressed as a single Slater-determinant constructed from the ψ_κ L(r;g →∞) and ψ_κ R(r;g →∞) states. In particular, the degenerate ground states at g →∞, | Ψ_L ⟩ and | Ψ_R ⟩ (guaranteed to exist for odd N_B owing to Theorem <ref>), are characterized by the occupation numbersI^L_j =(j, L) for j ≤N_B+1/2, (j - N_B+3/2, R) for N_B+1/2 < j ≤ N_B, I^R_j =(j, L) for j ≤N_B-1/2, (j - N_B+1/2, R) for N_B-1/2 < j ≤ N_B.Therefore, within first-order perturbation theory we just need to evaluate the matrix elements among these states, since contributions outside this degenerate manifold are at least second order in perturbation theory. It can be easily verified that couplings among the above states can be induced by Ĥ_ coup and Ĥ_0r. The relevant matrix elements among the localized single-particle states up to linear order in 1/g read ⟨ψ_n L | Ĥ_0r | ψ_n L⟩ = ⟨ψ_n R | Ĥ_0r | ψ_n R⟩=ħω_B ( 2 n + 3/2) - 1/2ħω_B ℰ_n g_0/g + 𝒪( g_0^2/g^2), ⟨ψ_n L | Ĥ_0r | ψ_n R⟩ =- 1/2ħω_B ℰ_n g_0/g + 𝒪( g_0^2/g^2), ⟨ψ_n L | x̂ | ψ_n L⟩ = - ⟨ψ_n R | x̂ | ψ_n R⟩= -ℓ( 𝒳^2_n + g_0/g∑_m =0m ≠ n^∞√(ℰ_n ℰ_m)/2(n-m)√(𝒳_n 𝒳_m)/(n-m)^2 - 1/4) + 𝒪( g_0^2/g^2) ⟨ψ_n L | x̂ | ψ_n R⟩ = ⟨ψ_n L | p̂ | ψ_n L⟩ = ⟨ψ_n L | p̂ | ψ_n R⟩ = ⟨ψ_n R | p̂ | ψ_n R⟩ = 𝒪( g_0^2/g^2), with 𝒳_n = √((2n+1)!)/(2^n-1n! √(π)). Hence, the perturbative Hamiltonian for the system readsĤ_ per = E_0 -δ E g_0/g + J σ̂_x g_0/g+ Δσ̂_z x̂_I + 𝒪( g_0^2/g^2),where E_0 = ħω_B (N_B + N_B +1)/2, δ E = 2^-(N_B+1)/3 √(π)(2N_B +1)!(N_B-1)!/( N_B-1/2)! ( N_B+1/2)!ħω_B, J = -1/2ħω_B ℰ_N_B+1/2 and Δ= - ℓ𝒳^2_N_B+1/2. In addition, we have mapped | Ψ_L ⟩ to the pseudo-spin-↑ and | Ψ_R ⟩ to the pseudo-spin-↓ states, with the Pauli matrices σ̂_μ, μ∈{x, y, z} acting in the standard way in the corresponding pseudo-spin-1/2 space. Notice that the | Ψ_L ⟩ and | Ψ_R ⟩ states define a well-behaved pseudo-spin-1/2 subspace, in particular each state of the corresponding Bloch-sphere, | θ, ϕ⟩, reads ⟨ x_1, …, x_N_B | θ, ϕ⟩ = 1/√(N_B!)∑_j = 1^N! 
sign( P_j ) ×ψ_0 L(x_P_j(1)) ψ_0 R(x_P_j(2)) …×ψ_N_B-1/2 L(x_P_j(N_B-2)) ψ_N_B-1/2 R(x_P_j(N_B-1)) ×[ cosθ ψ_N_B+1/2 L(x_P_j(N_B)) + e^i ϕsinθ ψ_N_B+1/2 R(x_P_j(N_B)) ].The above indicates that the system exhibits an E ⊗ϵ conical intersection at g_0/g = 0 and x_I = 0. All the related symmetry requirements are satisfied, since, first, the degenerate | Ψ_L ⟩ and | Ψ_R ⟩ states give rise to a two-dimensional representation of the SU(2) symmetry, see Eq. (<ref>). Second, the two vibrational coordinates x_I and g_0/g couple to this subspace so that to favor different superpositions of the degenerate states.In the case that m_I is finite, the degenerate ground states of the system guaranteed for N_B odd owing to Theorem <ref> do not consist of a single Slater determinant due to the correlations induced by Ĥ_P_ CM. Therefore, the argumentation above does not generalize straightforwardly in this case. However, as we show in Fig. <ref> the ground states of the system for finite m_I are almost equivalent to the case of m_I →∞ for a wide range of masses. More specifically, Fig. <ref> shows the contribution of the m_I →∞ eigenstates, Ĥ_0r| Ψ_k ⟩ = E | Ψ_k ⟩, belonging to different energetic classes characterized by their eigenenergy E, to the ground state of the system for finite m_I, ( Ĥ_0r + Ĥ_R_ CM )| Ψ̃_0(m_I) ⟩ = E_0(m_I) | Ψ̃_0(m_I) ⟩. This contribution readsC(m_I;E) = ∑_|Ψ_k⟩: Ĥ_0r |Ψ_k⟩ = E |Ψ_k⟩ |⟨Ψ_k | Ψ̃_0(m_I) ⟩|^2.Notice that, in all cases, N_B = 5 and 1/g = x_I =0 is considered and | Ψ̃_0(m_I) ⟩ is calculated via exact diagonalization (see also Appendix <ref>).Figure <ref> shows that states apart from | Ψ_L ⟩ and | Ψ_R ⟩ belonging to the class E = 14.5 contribute negligibly to the ground state of the system, since C(m_I; E = 14.5) > 0.8 even in the case m_I = m_B. In addition, the population of E ≥ 16.5 exhibit a behaviour ∝ m_I^-1/2 for m_I > 10 m_B. This indicates the fact that first-order perturbation theory, ⟨Ψ_k | Ψ_L ⟩≈⟨Ψ_k | Ĥ_ coup | Ψ_L ⟩/E_0(m_I →∞) - E_k∝1/m_I is adequate to account for their population. Motivated by these numerical evidence we can show that the effective E ⊗ϵ Hamiltonian of Eq. (<ref>) carries over within first order perturbation theory in 1/m_I, albeit with modified coefficients (not shown here for brevity).
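The conical structure encoded in the perturbative Hamiltonian above can be made explicit with a few lines of code. The sketch below uses illustrative placeholder values for E_0, δE, J and Δ instead of the closed-form coefficients quoted in the text, and evaluates the two adiabatic sheets over the (x_I, g_0/g) plane; their gap, 2√((Δ x_I)^2 + (J g_0/g)^2), closes only at the origin.

import numpy as np

# Illustrative (assumed) coefficients of H_per = E0 - dE*q + J*sigma_x*q + Delta*sigma_z*x,
# with q = g_0/g the synthetic coordinate; the exact expressions are given in the text.
E0, dE, J, Delta = 18.0, 0.6, -0.3, -0.8

def sheets(x, q):
    """Eigenvalues of the 2x2 perturbative Hamiltonian at impurity position x and q = g_0/g."""
    h = np.array([[E0 - dE * q + Delta * x,  J * q],
                  [J * q,                    E0 - dE * q - Delta * x]])
    return np.linalg.eigvalsh(h)

xs = np.linspace(-0.5, 0.5, 101)
qs = np.linspace(-0.5, 0.5, 101)
gap = np.array([[sheets(x, q)[1] - sheets(x, q)[0] for x in xs] for q in qs])

# The two sheets touch linearly (conically) only where both x_I and g_0/g vanish.
i, j = np.unravel_index(np.argmin(gap), gap.shape)
print("minimal gap %.3e at x_I = %.2f, g0/g = %.2f" % (gap[i, j], xs[j], qs[i]))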
http://arxiv.org/abs/2310.17995v2
{ "authors": [ "André Becker", "Georgios M. Koutentakis", "Peter Schmelcher" ], "categories": [ "cond-mat.quant-gas", "quant-ph" ], "primary_category": "cond-mat.quant-gas", "published": "20231027091128", "title": "Synthetic dimension-induced pseudo Jahn-Teller effect in one-dimensional confined fermions" }
Multilingual coreference resolution (MCR) has been a long-standing and challenging task. With the newly proposed multilingual coreference dataset, CorefUD <cit.>, we conduct an investigation into the task by using its harmonized universal morphosyntactic and coreference annotations. First, we study coreference by examining the ground truth data at different linguistic levels, namely mention, entity and document levels, and across different genres, to gain insights into the characteristics of coreference across multiple languages. Second, we perform an error analysis of the most challenging cases that the SotA system fails to resolve in the CRAC 2022 shared task using the universal annotations. Last, based on this analysis, we extract features from universal morphosyntactic annotations and integrate these features into a baseline system to assess their potential benefits for the MCR task. Our results show that our best configuration of features improves the baseline by 0.9% F1 score.[Our code and model are publicly available at <https://github.com/HaixiaChai/multi-coref>.] § INTRODUCTION Coreference resolution is the task of identifying expressions in a given text that refer to the same entity. While considerable progress has been made in coreference resolution for English <cit.>, extending this task to multiple languages presents significant challenges due to the linguistic diversity and complexity of different languages. The multilingual coreference resolution (MCR) task <cit.> focuses on developing a general and robust system that can effectively handle multiple languages and a wide range of coreference phenomena (e.g., pronoun-drop). Recently, <cit.> propose a new set of multilingual coreference datasets, CorefUD, built upon the framework of Universal Dependencies[One of the benefits of Universal Dependencies is that it provides cross-linguistic guidelines for morphosyntactic annotation in a consistent and language-independent manner.] <cit.>, allowing coreference researchers to conduct cross-linguistic studies across 17 datasets for 12 languages. The datasets serve as the resource for the CRAC 2022 shared task on multilingual coreference resolution <cit.>. Given the harmonized universal morphosyntactic and coreference annotations, we raise the question of whether there are any universal features that are common to all languages and to what extent they can contribute to the development of an MCR system. In this work, we conduct an in-depth investigation into the MCR task by using universal annotations in CorefUD. First, we analyze ground truth data from different linguistic levels, including mention, entity and document levels, and across different genres, to gain an understanding of coreference across various languages. Second, we conduct an error analysis of the most challenging cases that MCR systems fail to resolve. Last, based on this analysis, we integrate several features extracted from universal morphosyntactic annotations into a baseline system to examine their potential benefits for the MCR task. To the best of our knowledge, our method represents the first attempt to leverage universal annotations for MCR. Our findings reveal: (i) There are indeed commonalities across languages. For example, we observe a common pattern where the closest antecedent of an overt pronoun mainly corresponds to the subject or object position.
These commonalities are valuable for potential future research, such as linguistic investigations aimed at further comprehending the linguistic phenomenon of coreference. However, it is important to note that exploring universal features is a challenging task due to the inherent variability among languages, e.g., the expression of definiteness. (ii) A common issue encountered in all languages by MCR systems is the difficulty of correctly detecting nominal nouns within some two-mention entities. (iii) Our experimental results show that our best configuration of features improves the baseline by 0.9% F1 score.§ RELATED WORKAnalysis in Multiple Languages. Coreference is a complex linguistic phenomenon that requires linguistic expertise, even more so when studying it in a multilingual context. Oftentimes, researchers primarily focus on investigating coreference within a single target language in which they possess expertise, enabling them to gain valuable insights specific to that language <cit.>. However, a few studies have been conducted on coreference across multiple languages by using multilingual coreference datasets <cit.>. These studies include statistical analysis of the datasets <cit.>, as well as efforts to improve the performance and generalizability of MCR systems from a technical standpoint <cit.>. It is apparent that analyzing coreference across multiple languages is a challenging task due to the expertise required of each language. However, CorefUD helps such analyses by providing universal annotations. Our work is the first attempt to analyze cross-linguistic patterns and gain a broader understanding of coreference across different languages and language families in a comprehensive and comparative manner. In the field of MCR, there has been notable attention directed towards the research of two types of languages. One prominent area of investigation is around pro-drop languages, such as Chinese <cit.>, Italian <cit.> and Arabic <cit.>. Another research direction involves the study of morphologically rich languages, such as German and Arabic <cit.>. In contrast to the aforementioned work, which primarily focuses on enhancing the model's capabilities through technical analysis of specific linguistic phenomena, our research delves into gold annotations to explore multilingual coreference including phenomena like zero pronouns from a linguistic perspective, uncovering valuable insights to foster further research.MCR Systems. In the past decade, numerous MCR approaches have been proposed, including rule-based approaches, various training methodologies such as cross-lingual and joint training, and methods that leverage linguistic information: (i) Rule-based. It requires a complete redefinition of a set of rules to transform a monolingual coreference resolution system into a multilingual one, for example when using Stanford's multi‐pass sieve CR system <cit.>. The adaptation process is time-consuming and requires a language expert to develop the rules. (ii) Translation-based projection. This is a technique that involves the automatic transfer of coreference annotations from a resource-rich language to a low-resource language using parallel corpora <cit.>. The primary challenge of this approach is the occurrence of a large number of projected errors, such as a nominal phrase in English is translated as a pronoun in German. (iii) Latent structure learning. <cit.> and <cit.> use a latent structure perceptron algorithm to predict document trees that are not provided in the training data. 
These document trees represent clusters using directed trees over mentions. This approach has achieved the best results in the CoNLL-2012 shared task for English, Chinese and Arabic at that time. (iv) Joint training. This is a technique that finetunes multilingual word embeddings on the concatenation of the training data in multiple languages. It allows the model to learn shared representations and help in cases where the target language has limited training data.<cit.> (v) Methods with linguistic information. Several studies have incorporated syntactic and semantic information into their models <cit.>. These works either focus on coreference resolution within a single language or employ machine learning approaches to address the MCR task. Different from the above, our work incorporates universal morphosyntactic information into an end-to-end joint training method across multiple languages.§ LINGUISTIC ANALYSES ON COREFUD 1.1 CorefUD 1.1[See Appendix <ref> for the statistics of CorefUD 1.1.] is the latest version of CorefUD <cit.> for the CRAC 2023 shared task on multilingual coreference resolution, including 17 datasets for 12 languages.[<https://ufal.mff.cuni.cz/corefud/crac23>] In the following subsections, we conduct a linguistic study on it by using the ground truth from the training datasets, examining coreference phenomena from different linguistic levels, namely mention, entity and document perspectives, and across different genres, in multiple languages. §.§ MentionA mention is the smallest unit within a coreference relation, comprising one or more words (maybe even less than a word in some cases). Position of Head. The head of a mention typically represents the entity being referred to. The remaining words in the mention either provide additional information that precedes the head word (pre-modification, e.g., a highly radioactive element) or further specify the meaning of the head after it (post-modification, e.g., a car with leather seats.). Note that the modifying words can be dominant in the mention in some cases, e.g., the first floor, making resolution of those mentions harder sometimes.Table <ref> shows that Hungarian, Lithuanian and Turkish all have a high percentage of pre-modified mentions. They are from the Uralic, Baltic and Turkic language families that are considerably different from the other languages. Mention Types. To gain insight into how mentions represent and refer to entities, we categorize five types of mentions by the universal part-of-speech (UPOS) tags of the head words in gold mentions, namely , , ,and .Unsurprisingly, in Figure <ref>, we observe thatandare the two main categories of mentions in most of datasets. en_parcorfull <cit.>, de_parcorfull <cit.> and fr_democrat <cit.> are the datasets having most overt pronouns, around 46% of mentions. In contrast, resolving zero pronouns is more crucial in the Czech datasets <cit.> where the number of zero pronouns is higher than that of overt pronouns. Universal Dependency Categories. By using universal dependency (UD) relations between words in a sentence, we can understand the hierarchical structure of the sentence and identify the potential antecedents of referring expressions. We classify UD relations of heads of gold mentions into 12 categories according to the UD taxonomy[<https://universaldependencies.org/u/dep/index.html>], as illustrated in Table  <ref>. Anaphor-Antecedent Relation. 
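As a rough illustration of this categorization step (not the authors' implementation), the sketch below tallies the UPOS tags of mention head tokens directly from the ten-column CoNLL-U layout used by CorefUD; for brevity it assumes that the head token IDs of the gold mentions have already been extracted from the Entity annotation in the MISC column.

from collections import Counter

def upos_of_mention_heads(conllu_path, head_ids_per_sentence):
    """Tally UPOS tags of mention head tokens.

    head_ids_per_sentence: list of sets of token IDs (strings, one set per sentence) that
    were pre-identified as heads of gold mentions; extracting them from the Entity=
    annotation in the MISC column is omitted here for brevity.
    """
    counts, sent_idx = Counter(), 0
    sent_heads = head_ids_per_sentence[0] if head_ids_per_sentence else set()
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line = sentence boundary
                sent_idx += 1
                if sent_idx < len(head_ids_per_sentence):
                    sent_heads = head_ids_per_sentence[sent_idx]
                continue
            if line.startswith("#"):
                continue
            cols = line.split("\t")           # ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
            if cols[0] in sent_heads:
                counts[cols[3]] += 1
    return counts

# e.g. upos_of_mention_heads("en_gum-corefud-train.conllu", gold_head_ids)  # hypothetical file name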
Given mention types and UD categories presented above, we have a particular interest in analyzing the UD category of the closest antecedent to an anaphor based on its mention types (e.g.,- ). We consider all mentions in an entity as observed anaphors, but exclude the first mention.The results in Figure <ref> present the UD relations that are most frequently associated withand . We found that(e.g., oblique nominal),(e.g., numeric modifier, nominal modifier and appositional modifier),andare the primary UD relations of antecedents for , e.g., Sam, my brother, John 's cousin, arrived. In contrast, the closest antecedents ofmainly correspond to subjects or objects within core arguments.[See Appendix <ref> for the details ofand .] It is important that these findings are applicable across all languages, emphasizing their universal relevance in the context of the multilingual coreference resolution task. §.§ EntityIn a text, an entity can have multiple mentions all referring to the same identifiable object, such as a person or concept. Each gold entity in all datasets of CorefUD 1.1 has 3 to 4 mentions on average without considering singletons. First Mention. The first mention within a mention chain serves to introduce the entity into a context. Thus, this mention could be seen as the most informative expression in the entity. In ca_ancora <cit.>, for example, 97% of first mentions belong to mention types ofor , which convey a richer semantic meaning than pronouns. Furthermore, we observe a consistent trend across all languages that the ratio of entities with the first mention being the longest mention in the entity ranges from 70% to 90%.[See Appendix <ref> for the details.] The longer a mention is, the more information it represents, e.g., a person vs. a person that works at Penn. Overall, the first mention captures semantic meaning of an entity.Semantic Similarity. In addition to the first mention, an entity can accumulate information with each subsequent mention. The mentions can be identical, slightly different, or completely different when compared to other mentions within the same entity. To examine the semantic similarity of coreferent mentions, we compute the Euclidean distance between the embeddings of each gold mention pair encoded using mBERT <cit.>. In Figure <ref>, a greater distance indicates that the mentions have a bigger semantic distance, while still referring to the same entity. Conversely, a smaller distance suggests that the mentions are semantically more similar, if not identical. We speculate that the genres of the datasets have an impact on the analysis above. For example, in narrative texts such as EU Bookshop publications in en_parcorfull <cit.> and Hungarian Wikipedia in hu_korkor <cit.>, an entity can be realized with different expressions. Thus, the semantic similarity of mentions tends to be greater. Recall thatandare two main categories of mention types. So, it is challenging to resolve mentions that have bigger semantic distance. §.§ DocumentIn a document, there can be multiple entities, with some entities spanning the entire document while others appearing only in very few adjacent sentences. Occasionally, these entities may overlap within certain sections of the document, particularly in areas where complex relationships between entities are discussed. Table <ref> shows an example text. Competing Antecedents of Pronominal Anaphors. 
In a local context, the resolution of pronouns can become difficult due to their ambiguity caused by the presence of multiple potential antecedents from distinct entities or singletons. We focus on those ambiguous cases that have potential antecedents with gender and number agreement. Both the pronouns and their antecedents are located in the same or the immediately preceding sentence.Figure <ref> shows that in ca_ancora and es_ancora <cit.>, over 70% of overt pronouns satisfy the analysis conditions mentioned in the previous paragraph. This percentage is notably higher compared to the other datasets. Additionally, the average number of competing candidates in these two datasets is around six. This highlights the considerable difficulty in distinguishing the true antecedent(s) of the pronoun among a pool of antecedents. To address such complex scenarios, one heuristic and explainable approach is to leverage centering theory <cit.>. It suggests that a pronoun tends to refer to the center or the most prominent entity in the preceding context. Specifically, by tracking the center transitions, we can identify potential antecedents based on salience and continuity of the entity. Centering theory is applicable across all languages, as it is not dependent on any specific language. Besides analyzing overt pronouns, we also examine the competing antecedents for zero pronouns. In the Czech datasets <cit.>, the average number of competing antecedents is less than four, which is lower than that of ca_ancora and es_ancora.[See Appendix <ref> for the details.] This implies that identifying the true antecedents of zero anaphors is not very difficult in the Czech datasets. In pro-drop languages, a more coherent discourse tends to facilitate or encourage the use of zero pronoun especially in dialogue or social media contexts. We found that the nearest antecedents of some zero pronouns can either be overt pronouns or zero pronouns that are less informative. Hence, resolving anaphoric zero pronouns is a difficult subtask that requires contextual information. §.§ GenreA document can be different in types of discourse with respect to referring expressions. For example, authors may use diverse expressions (e.g., dog owners, owners, puppy owners and they) when referring to the same entity for the physical continuity of the text. In contrast, spoken discourse, especially in conversations, tends to have a higher density of referring expressions, including many pronouns and ellipsis, which contribute to the grammatical coherence within the discourse (e.g., Sue? Is not here.), and relies mostly on shared situational knowledge between speaker and listener (known as the 'common ground'). <cit.>In Figure <ref>, we present the frequency of personal pronouns usage per eight thousands words in each genres from the English corpus, en_gum <cit.>. The results show that vlog, as a type of web discourse, has the highest frequency of pronoun usage. Different from conversation, content creators record themselves on video for their audience without engaging in real-time interaction during the recording process. When they share their thoughts or experiences, they tend to use first-person pronouns (e.g., I and we) more frequently compared to other genres. We also observe that the frequency of pronouns in fiction is high, surpassing even that of speech, indicating a strong continuity in reference, particularly related to the story's characters. 
This finding is in line with the results of <cit.>.As for written non-fiction, particularly in academic, news and voyage (describing a journey or trip), there is a lower use of pronouns, with academic texts showing the lowest frequency. § ERROR ANALYSIS OF MCR SYSTEMS Apart from studying coreference on gold annotations solely, we also investigate the ground truth that the MCR systems failed to address. Our particular focus is on two-mention entities, which comprise over 80% of the gold entities where the recall is zero.[See Appendix <ref> for the details of the error analysis.] Here, we analyze the predictions of two MCR systems: BASELINE <cit.>, an end-to-end based system, and ÚFAL <cit.>, the winning system in the CRAC 2022 shared task on MCR.[The two system outputs from the development sets of CorefUD 1.0 are publicly accessible at <https://ufal.mff.cuni.cz/corefud/crac22>.] Figure <ref> presents the error analysis in a tree structure conducted on ÚFAL. §.§ Undetected MentionsThe primary factor leading to unresolved two-mention entities is the inability to detect one or both of the mentions. ÚFAL identifies 22% of the mentions, while BASELINE detects 19%. ÚFAL employs a pipeline approach, treating mention detection as a separate token-level classification task. The proposed tags for tokens can handle embedded and also overlapping mention spans. We speculate that the mention detection module contributes slightly more to the identification of mentions.We further analyze the mention types and length of the undetected mentions.error (i) More than 50% of the undetected mentions on average are nominal nouns, so we try to analyze the types of these noun phrases based on definiteness, such as demonstrative articles (e.g., that house) and proper noun-modified noun phrases (e.g., Barack Obama presidency). However, due to the highly variable nature of definiteness across languages and the lack of consistent annotations at this level of granularity, we encounter a challenge in implementing this analysis. For example, in Lithuanian, definiteness is encoded within adjectives or nouns, and possessive adjectives in Hungarian can only be inferred from word suffixes. Moreover, some languages, such as Slavic ones, do not have grammaticalized definiteness at all. (ii) Analyzing mention length, we observe that the majority of mentions in Hungarian (70%) and Lithuanian (80%) consist of only one or two words. One of the reasons is that Hungarian, for example, is an agglutinative language[Words are constructed by combining stem forms with multiple affixes to convey diverse grammatical features such as tense and number, for example, beleselkedtem (I look into) and Odafigyelhettél volna (You could have paid attention to it).]. When dealing with such languages,it is plausible to include a preprocessing stage to handle word splitting.§.§ Missing LinksWe also explore the relationship between the two mentions in the unresolved entities.error First, we notice that in BASELINE, more than 45% of the entities have both mentions located in the same sentence. To resolve those entities, syntax information that captures the grammatical relationships and dependencies between words within the sentences is beneficial. One approach is employing binding theory <cit.>. On the other hand, in ÚFAL, 39% of the entities have their two mentions spanning across multiple sentences. To address this issue, an approach is to use knowledge extracted from the discourse structure of the text. 
Second, for both systems, resolving cases where both mentions are nominal nouns presents difficulties across all languages. Additionally, our analysis in Section <ref> demonstrates that there are mention pairs referring to the same entities, but showing lower semantic similarity. These findings suggest that it is important to improve the capability of resolving noun phrases. Lastly, we examine the gold anaphor-antecedent relations between the two mentions of the unresolved entities. We found that the most frequent UD relation associated with the antecedents of nominal nouns are nominal dependents (e.g., nominal modifier and appositional modifier). For antecedents of overt pronouns, the subject in core arguments is the most common UD relation. § MODELING WITH UNIVERSAL ANNOTATIONS Based on the findings above, we can gain additional insights and clues regarding MCR. For example, we found that the closest antecedents of overt pronouns are always located in subject position. This pattern is common in nearly all languages as shown in Figure <ref> (b). Therefore, we use linguistic information extracted from universal annotations for the purpose of modeling and examine its effectiveness, in the following section. §.§ ModelBaseline. We adopt the model proposed by <cit.> as our base model, which is an end-to-end neural model inspired by the method introduced by <cit.>. It serves as the baseline for the CRAC 2022 shared task on multilingual coreference resolution. Incorporating Linguistic Information. Given an input document consisting of n tokens, we first generate a contextual embedding for each token using mBERT denoted as 𝐗 = (𝐱_1, ..., 𝐱_n).The tokenization is based on either word forms (wf) or lemmas (lem). Then we define the embedding of each candidate span c as:𝐞_c = [ 𝐱_c_start, 𝐱_c_end, 𝐱̂_c, ϕ(s_c) ]where 𝐱_c_start and 𝐱_c_end denote the embeddings of the boundary tokens. 𝐱̂_c is the addition of attentionally weighted token representations in the candidate. ϕ(s_c) is a concatenated feature vector that includes the width, UPOS tags, UD relations, mention types and UD categories of the span. We select the token with the maximum attention weight as the head of the candidate to compute the mention types and UD categories as discussed in Section <ref>.We measure how likely a candidate is a mention by using a mention score f_m(·):f_m(c) = 𝐅𝐅𝐍𝐍_m([𝐞_c, ϕ(u_c)])where ϕ(u_c) encodes the UPOS tag, UD relation, mention type and UD category of the candidate determined by its 'head' word as mentioned above.After extracting the top λ n mentions based on the mention score, we compute the likelihood of a candidate mention c being an antecedent of a query mention q by a scoring function f(c,q):f(c,q) = 𝐅𝐅𝐍𝐍_s([𝐞_c, 𝐞_q, 𝐞_c∘𝐞_q, ϕ(c,q)])ϕ(c,q) denotes the embeddings of some general features of the document: language and word order of the language[<https://wals.info/>]. For each query mention, our model predicts a distribution P̂(q) over its candidates, q ∈ Y(c):P̂(q) = exp (f(c,q))/∑_k∈ Y(c)exp(f(c,k))Note that if the query mention is a singleton, we set the scoring function to zero.[For more details, please refer to the original papers, <cit.> and <cit.>.] Training and Inference. Since ÚFAL <cit.> demonstrated that a multilingual model based on a multilingual language model outperforms monolingual models on the MCR task, we adopt a similar approach. Our model is jointly trained on a mixture of datasets of 10 languages from CorefUD 1.0 <cit.> using mBERT <cit.> as the pretrained language model. 
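To make the components above concrete, the following minimal PyTorch-style sketch (module names, feature dimensions and the reduced feature set are our own illustrative choices, not the released implementation) assembles the span embedding e_c, the mention score f_m(·) and the pairwise score f(c,q).

    import torch
    import torch.nn as nn

    class SpanScorer(nn.Module):
        # e_c = [x_start, x_end, x_hat, phi(s_c)]; f_m(c) = FFNN_m([e_c, phi(u_c)]);
        # f(c, q) = FFNN_s([e_c, e_q, e_c * e_q, phi(c, q)]).
        def __init__(self, hidden, feat_dim):
            super().__init__()
            self.attn = nn.Linear(hidden, 1)  # attention over the tokens of a candidate span
            span_dim = 3 * hidden + feat_dim
            self.mention_ffnn = nn.Sequential(
                nn.Linear(span_dim + feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            self.pair_ffnn = nn.Sequential(
                nn.Linear(3 * span_dim + feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def span_embedding(self, tokens, span_feat):
            # tokens: (length, hidden) contextual embeddings of the span (e.g. from mBERT);
            # span_feat: phi(s_c), e.g. width, UPOS, UD relation, mention type, UD category.
            alpha = torch.softmax(self.attn(tokens).squeeze(-1), dim=0)
            x_hat = (alpha.unsqueeze(-1) * tokens).sum(dim=0)
            return torch.cat([tokens[0], tokens[-1], x_hat, span_feat])

        def mention_score(self, e_c, head_feat):
            return self.mention_ffnn(torch.cat([e_c, head_feat]))

        def pair_score(self, e_c, e_q, pair_feat):
            # A softmax of these scores over the candidates Y(c) gives the distribution P(q).
            return self.pair_ffnn(torch.cat([e_c, e_q, e_c * e_q, pair_feat]))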
Then we use this trained model to predict mention clusters on the target language-specific datasets. §.§ Experiments Settings. We verify the effectiveness of our models on CorefUD 1.0 <cit.>. Because the test datasets are not publicly available, we partitioned approximately 10% of the training datasets to create our own test datasets. The results are reported using the CoNLL F1 score — the average of MUC <cit.>, B3 <cit.>, CEAFe <cit.>. The final ranking score is calculated by macro-averaging the CoNLL F1 scores over all datasets. To ensure a fair comparison, we keep all parameters the same as the baseline <cit.>. All our experiments are performed on a single NVIDIA Tesla V100 32G GPU. We examine two models, namely ours_wf and ours_lem, as discussed in Section <ref>, in comparison with the baseline model trained on our specific setting. Results. Table <ref> presents our results. Our model ours_wf shows a modest improvement over the baseline with a margin of 0.9% F1 score on average across all languages.The model performs best on Germanic datasets, whereas the lt_lcc <cit.> and ru_rucor <cit.> datasets present the greatest difficulties, indicating that these two Baltic and Slavic languages are particularly difficult to handle. In the ablation study, we observe that including general features like language and word order also yields positive effects on performance, in addition to incorporating universal annotations.In contrast, the performance of ours_lem shows a decline compared with BASELINE. The method is specifically designed to address data sparsity and handling out-of-vocabulary words in morphological-rich languages. However, lemmatization can result in different words being mapped to the same lemma and loss of valuable morphological information present in word forms. In order to handle multiple languages together, it is crucial to employ a trade-off strategy or to implementa preprocessing approach. Error Analyses. We employ the same analysis methodology as presented in Section <ref> for the error analysis of our model ours_wf and BASELINE in our setting. We found that ours_wf predicts more clusters correctly than BASELINE, either in full or partially (i.e., the rate of gold entities with a recall of zero is lower on average, 39.19% vs. 39.77%.). Two-mention entities are the most difficult cases for the two examined systems. In these unresolved two-mention entities, ours_wf has fewer undetected mentions on average especially in fr_democrat and de_parcorfull, as illustrated in Figure <ref>. Among those undetected, there are more mentions consisting of more than two words compared with BASELINE. For the missing links, the two systems produce similar results. Both mentions in two-mention entities are primarily nominal nouns. And the most frequent UD relation associated with the antecedent of nominal is still nominal dependents. Overall, our model ours_wf can resolve slightly more entities and shows a superior performance in mention detection compared with BASELINE. Nevertheless, there is still room for improvement. § DISCUSSION AND CONCLUSION It has become apparent that leveraging universal morphosyntactic annotations can be advantageous in various ways, like exploring underlying patterns of coreference, performing in-depth analysis and making a contribution to the development of an MCR system. 
However, there are still language-specific characteristics that hinder the comprehensive study of multiple languages together, particularly when it involves analyzing intricate aspects of the morphological layer, like definiteness and compound nouns in German. In addition, while multilingual datasets are harmonized to some extent, there are still cases where certain information, such as entity types, is only provided for a limited number of languages. This limitation prevents us from conducting further analyses, such as examining semantic class agreement across languages. We study MCR primarily focusing on identity coreference since it is the most important relation across all datasets. However, it is important to note that there exist various other anaphoric relations, such as bridging and discourse deixis <cit.>, that remain unexplored. In this work, we analyze coreference across multiple languages by leveraging the harmonized universal morphosyntactic and coreference annotations in CorefUD. This analysis provides valuable insights into common features and challenges in MCR. We demonstrate the benefits of incorporating linguistic features for enhancing the MCR system performance. § LIMITATIONSIn this work, our analyses are mainly corpus-based studies. The reliance on selected specific corpora may result in a focus on particular genres, domains, or time periods that may not be representative of other contexts. However, with the high number of datasets from diverse genres and domains, we believe the findings still can provide some valuable insights into MCR. The languages examined in our study belong to the European language group. It would be interesting to involve languages from other regions, like Arabic and Chinese. § ACKNOWLEDGEMENTSWe thank the anonymous reviewers for their helpful feedback that greatly improved the final version of the paper. We also thank Margareta Kulcsar for her early experiments contributing to this work. This work has been funded by the Klaus Tschira Foundation, Heidelberg, Germany. The first author has been supported by a HITS Ph.D. scholarship. acl_natbib§ LINGUISTIC ANALYSES ON COREFUD 1.1 We present the statistics of CorefUD 1.1 in Table <ref> to provide a basic understanding of all the datasets.§.§ Anaphor-Antecedent Relation Figure <ref> demonstrates the analysis of anaphor-antecedent relations where anaphors areand .§.§ First Mention Table <ref> shows the statistics of first mentions that are the longest mentions in entities. §.§ Competing Antecedents of Pronominal Anaphors Figure <ref> shows the analysis of competing antecedents of zero pronouns on three languages, Catalan, Czech and Spanish. § ERROR ANALYSISFigure <ref> presents the percentages of mention types of undetected mentions based on the predictions of ÚFAL.In Figure <ref>, we show the distances between mentions in the unresolved two-mention entities for BASELINE and ÚFAL. Table <ref> shows various analyses conducted to explore the underlying reasons of unresolved entities in both BASELINE and ÚFAL systems.
Recently, due to the popularity of deep neural networks and other methods whose training typically relies on the optimization of an objective function, and due to concerns for data privacy, there is a lot of interest in differentially private gradient descent methods. To achieve differential privacy guarantees with a minimum amount of noise, it is important to be able to bound precisely the sensitivity of the information which the participants will observe. In this study, we present a novel approach that mitigates the bias arising from traditional gradient clipping. By leveraging public information concerning the current global model and its location within the search domain, we can achieve improved gradient bounds, leading to enhanced sensitivity determinations and refined noise level adjustments. We extend the state-of-the-art algorithms, present improved differential privacy guarantees requiring less noise, and present an empirical evaluation.

§ INTRODUCTION

While machine learning allows for extracting statistical information from data with both high economical and societal value, there is a growing awareness of the risks for data privacy and confidentiality. Differential privacy <cit.> has emerged as an important metric for studying statistical privacy. Due to the popularity of deep neural networks (DNNs) and similar models, one of the recently most trending algorithmic techniques in machine learning has been stochastic gradient descent (SGD), a technique allowing for iteratively improving a candidate model using the gradient of the objective function on the data. A popular class of algorithms to realize differential privacy while performing SGD is the DP-SGD algorithm <cit.> and its variants. Essentially, these algorithms iteratively compute gradients, add differential privacy noise, and use the noisy gradient to update the model. To determine the level of differential privacy achieved, one uses an appropriate composition rule to bound the total information leaked over the several iterations.

To achieve differential privacy with a minimum amount of noise, it is important to be able to bound precisely the sensitivity of the information which the participants will observe. One approach is to bound the sensitivity of the gradient by assuming the objective function is Lipschitz continuous <cit.>. Various improvements exist in the case one can make additional assumptions about the objective function. For example, if the objective function is strongly convex, one can bound the number of iterations needed and in that way avoid having to distribute the available privacy budget over too many iterations <cit.>.
In the case of DNNs, the objective function is not convex and typically not even Lipschitz continuous. Therefore, a common method is to 'clip' contributed gradients <cit.>, i.e., to divide gradients by the maximum possible norm they may get. These normalized gradients have bounded norm and hence bounded sensitivity. In this paper, we argue that gradient clipping may not lead to optimal statistical results (see Section <ref>), and we propose instead to use weight clipping, an idea suggested in <cit.> but to the best of our knowledge not investigated yet in depth. Moreover, we also propose to consider the maximum gradient norm given the current position in the search space rather than the global maximum gradient norm, as this leads to additional advantages.

In particular, our contributions are as follows:
* We introduce a novel approach, applicable to any feed-forward neural network, to compute gradient sensitivity which, applied in DP-SGD, eliminates the need for gradient clipping. This strategy bridges the gap between Lipschitz-constrained neural networks and differential privacy.
* We present a new algorithm, LipDP-SGD, that enforces bounded sensitivity of the gradients. We argue that our approach, based on weight clipping, doesn't suffer from the bias which the classic gradient clipping can cause.
* We present an empirical evaluation, confirming that on a range of popular datasets our proposed method outperforms existing ones.
* We implemented our new algorithm in an open source library.

The remainder of this paper is organized as follows. First, we review a number of basic concepts, definitions and notations in Section <ref>. Next, we present our new method in Section <ref> and present an empirical evaluation in Section <ref>. We discuss related work in Section <ref>. Finally, we provide conclusions and directions for future work in Section <ref>.

§ PRELIMINARIES AND BACKGROUND

In this section, we briefly review differential privacy, empirical risk minimization (ERM) and differentially private stochastic gradient descent (DP-SGD). We will denote the space of all possible instances by 𝒵 and the space of all possible datasets by 𝒟. We will denote by [N]={1… N} the set of the N smallest positive integers.

§.§ Differential Privacy

An algorithm is differentially private if even an adversary who knows all but one instance of a dataset can't distinguish from the output of the algorithm the last instance in the dataset. More formally:

We say two datasets Z,Z'∈𝒟 are adjacent, denoted Z∼ Z', if they differ in at most one element. We denote by 𝒫_∼ the space of all pairs of adjacent datasets.

Let ϵ>0 and δ>0. Let 𝒜:𝒟→𝒪 be a randomized algorithm taking as input datasets from 𝒟. The algorithm 𝒜 is (ϵ,δ)-differentially private ((ϵ,δ)-DP) if for every pair of adjacent datasets (Z,Z')∈𝒫_∼, and for every subset S⊆𝒪 of possible outputs of 𝒜, P(𝒜(Z)∈ S) ≤ e^ϵ P(𝒜(Z')∈ S)+δ. If δ=0 we also say that 𝒜 is ϵ-DP.

If the output of an algorithm 𝒜 is a real number or a vector, it can be privately released thanks to differential privacy mechanisms such as the Laplace mechanism or the Gaussian mechanism <cit.>. While our ideas are more generally applicable, in this paper we will focus on the Gaussian mechanism as it leads to simpler derivations. In particular, the Gaussian mechanism adds Gaussian noise to a number or vector, with a scale that depends on its sensitivity to the input.
The ℓ_2-sensitivity of a function f:𝒟→ℝ^p is

Δ(f) = max_(Z,Z')∈𝒫_∼ ‖ f(Z) - f(Z')‖_2 .

Let f:𝒟→ℝ^p be a function. The Gaussian mechanism transforms f into f̂ with f̂(Z) = f(Z) + b, where b∼𝒩(0,σ^2 I_p)∈ℝ^p is Gaussian distributed noise. If the variance satisfies σ^2 ≥ 2ln(1.25/δ)(Δ(f))^2/ϵ^2, then f̂ is (ϵ,δ)-DP.

§.§ Empirical risk minimization

Unless made explicit otherwise we will consider databases Z={z_i}_i=1^n containing n instances z_i=(x_i,y_i)∈𝒳×𝒴 with 𝒳=ℝ^p and 𝒴={0,1}, sampled identically and independently (i.i.d.) from an unknown distribution on 𝒳×𝒴. We are trying to build a model f_θ: 𝒳→𝒴̂ (with 𝒴̂⊆ℝ) parameterized by θ∈Θ⊆ℝ^p, so that it minimizes the expected loss ℒ(θ)=𝔼_z[ℒ(θ; z)], where ℒ(θ; z)=ℓ(f_θ(x),y) is the loss of the model f_θ on data point z=(x,y). One can approximate ℒ(θ) by

R̂(θ; Z)=1/n∑_i=1^n ℒ(θ; z_i)=1/n∑_i=1^n ℓ(f_θ(x_i), y_i),

the empirical risk of model f_θ. Empirical Risk Minimization (ERM) then minimizes an objective function F(θ; Z) which adds to this empirical risk a regularization term ψ(θ) to find an estimate θ̂ of the model parameters:

θ̂ ∈ argmin_θ∈Θ F(θ; Z) := R̂(θ; Z)+γψ(θ)

where γ≥ 0 is a trade-off hyperparameter.

*Feed forward neural networks

An important and easy to analyze class of neural networks are the feed forward networks (FNN). A FNN is a directed acyclic graph where connections between nodes don't form cycles. A FNN f_θ: ℝ^n →ℝ^m is a function which can be expressed as

f_θ = f_θ_K^(K)∘…∘ f_θ_1^(1)

where f_θ_k^(k) : ℝ^n_k→ℝ^n_k+1 is the k-th layer function, parameterized by θ_k, with input x_k and output x_k+1 for 1≤ k≤ K. Here, θ=(θ_1 …θ_K), n=n_1 and m=n_K+1. Common layers include fully connected layers, convolutional layers and activation layers. Parameters of the first two correspond to weight and bias matrices, θ_k = (W_k, B_k), while activation layers have no parameters, θ_k = ().

§.§ Stochastic gradient descent

To minimize F(θ; Z), one can use gradient descent, i.e., iteratively for a number of time steps t=1… T one computes a gradient g_t = ∇ F(θ_t; Z) on the current model θ_t and updates the model setting θ_t+1 = θ_t - η(t) g_t, where η(t) is a learning rate. Stochastic gradient descent (SGD) introduces some randomness and avoids the need to recompute all gradients in each iteration by sampling in each iteration a batch B_t⊆ Z and computing an approximate gradient

ĝ_t = 1/|B_t|∑_z_i∈ B_t∇ℒ(θ_t; z_i) + γ∇ψ(θ_t).

To avoid leaking sensitive information, <cit.> proposes to add noise to the gradients. Determining good values for the scale of this noise has been the topic of several studies. One simple strategy starts by assuming an upper bound for the norm of the gradient. Let us first define Lipschitz functions:

Let L>0. A function f is L-Lipschitz with respect to some norm ‖·‖ if for all θ, θ^'∈Θ there holds ‖ f(θ)-f(θ^')‖≤ L‖θ-θ^'‖. If f is differentiable and ‖·‖=‖·‖_2, the above property is equivalent to ‖∇ f(θ)‖_2 ≤ L, ∀θ∈Θ. We call the smallest value L for which f is L-Lipschitz the Lipschitz value of f.

Then, from the model one can derive a constant L such that the objective function is L-Lipschitz, while knowing bounds on the data next allows for computing a bound on the sensitivity of the gradient. Once one knows the sensitivity, one can determine the noise to be added from the privacy parameters as in Lemma <ref>.

The classic DP-SGD algorithm <cit.>, which we recall in Algorithm <ref> in supplementary material <ref> for completeness, clips the gradient of each instance to a maximum value C (i.e., scales down the gradient if its norm is above C) and then adds noise based on this maximal norm C:

g̃_t = 1/|B_t|( ∑_z_i∈ B_t clip_C(∇_θℒ(θ_t; z_i)) + b_t ) + γ∇ψ(θ_t)

where b_t is appropriate noise and where clip_C(v) = v·min(1,C/‖ v‖).
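As an illustration of the two ingredients just described, the following minimal Python sketch (our own helper names, not code from the paper's library) shows per-example gradient clipping together with a Gaussian noise scale calibrated as in Lemma <ref>.

    import numpy as np

    def clip(v, C):
        # Scale v down so that its 2-norm is at most C; vectors within the bound are unchanged.
        norm = np.linalg.norm(v)
        return v * min(1.0, C / norm) if norm > 0 else v

    def gaussian_sigma(sensitivity, eps, delta):
        # Smallest standard deviation satisfying sigma^2 >= 2 ln(1.25/delta) * sensitivity^2 / eps^2.
        return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps

    def noisy_clipped_gradient(per_example_grads, C, sigma, rng):
        # One DP-SGD style aggregation: clip each per-example gradient to norm C, sum,
        # perturb with Gaussian noise of scale sigma * C, and average over the batch.
        clipped = [clip(g, C) for g in per_example_grads]
        total = np.sum(clipped, axis=0)
        noise = rng.normal(0.0, sigma * C, size=total.shape)
        return (total + noise) / len(per_example_grads)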
§ OUR APPROACH

In this work, we leverage Lipschitz value estimation to determine sensitivity. While traditional DP-SGD controls sensitivity via gradient sample clipping, our new method estimates cumulative gradient sensitivity. This is grounded in the Lipschitz-constrained model literature, highlighting the connection between the Lipschitz value with respect to the input and with respect to the parameters. Subsection <ref> demonstrates the use of backpropagation for gradient sensitivity estimation. Subsection <ref> delves into determining an upper Lipschitz bound, and in <ref> we introduce LipDP-SGD, a novel algorithm ensuring privacy without gradient clipping.

§.§ Backpropagation

Consider a feed-forward network f_θ. We define ℒ_k(θ,(x_k,y))=ℓ((f_θ_K^(K)∘…∘ f_θ_k^(k))(x_k), y). For feed-forward networks, backpropagation relies on the following recursive equations:

∂ℒ_k/∂ x_k = ∂ℒ_k+1/∂ x_k+1 · ∂ x_k+1/∂ x_k ,  ∂ℒ_k/∂θ_k = ∂ℒ_k+1/∂ x_k+1 · ∂ x_k+1/∂θ_k .

Note that θ_k and x_k are vectors, so also ∂ℒ_k/∂θ_k, ∂ℒ_k/∂ x_k and ∂ℒ_k+1/∂ x_k+1 are vectors, and ∂ x_k+1/∂ x_k and ∂ x_k+1/∂θ_k are Jacobian matrices. In terms of 2-norms there holds

‖∂ℒ_k/∂ x_k‖_2 ≤ ‖∂ℒ_k+1/∂ x_k+1‖_2 ‖∂ x_k+1/∂ x_k‖_2 ,  ‖∂ℒ_k/∂θ_k‖_2 ≤ ‖∂ℒ_k+1/∂ x_k+1‖_2 ‖∂ x_k+1/∂θ_k‖_2 .

We will use l_k to denote an upper bound of max_x_k,y ‖∂ℒ_k(θ,(x_k,y))/∂ x_k‖_2. In particular, we will ensure that l_K+1 ≥ max_x_K+1,y ‖∂ℓ(x_K+1,y)/∂ x_K+1‖_2 and

l_k ≥ l_k+1 max_x_k ‖∂ x_k+1/∂ x_k‖_2 ,  Δ_k ≥ l_k+1 max_x_k ‖∂ x_k+1/∂θ_k‖_2 .

Hence, l_k is an upper bound of max_x_k ‖∂ℒ_k/∂ x_k‖_2. By Definition <ref> and the triangle inequality, the sensitivity of the gradient ∂ℒ_k/∂θ_k is upper bounded by twice max_x_k ‖∂ℒ_k/∂θ_k‖_2, so Δ_k ≥ Δ(∂ℒ_k/∂θ_k)/2. Note that we can easily provide such upper bounds l_k and Δ_k as the layers f^(k) and the loss ℓ are Lipschitz. If so, since all f^(k) and ℓ are differentiable on any x_k, per Rademacher's theorem <cit.>, ∂ℒ_k/∂ x_k is bounded by its Lipschitz value. We only need to find a tight upper bound of this Lipschitz value.

§.§ Estimating Lipschitz values

In this section we bound the Lipschitz values of different types of layers.

Losses and activations. Examples of Lipschitz losses encompass Softmax Cross-entropy, Cosine Similarity, and Multiclass Hinge. When it comes to activation layers, several prevalent ones, such as ReLU, tanh, and Sigmoid, are 1-Lipschitz. We provide a detailed list in the supplementary material <ref>.

Linear layers. If f_θ_k^(k) is a linear layer, then

‖∂ x_k+1/∂θ_k‖_2 = ‖∂ (W_k^⊤ x_k + B_k)/∂ (W_k,B_k)‖_2 = ‖(x_k,1)‖_2 ,  ‖∂ x_k+1/∂ x_k‖_2 = ‖∂ (W_k^⊤ x_k+B_k)/∂ x_k‖_2 = ‖ W_k‖_2 .

Convolutional layers. There are many types of convolutional layers, e.g., depending on the data type (strings, 2D images, 3D images …) and the shape of the filter (rectangles, diamonds …). Here we provide as an example only a derivation for convolutional layers for 2D images with a rectangular filter. In that case, the input layer consists of n_k = c_in h w nodes and the output layer consists of n_k+1= c_out hw nodes, with c_in input channels, c_out output channels, h the height of the image and w the width. Then, θ_k∈ℝ^c_in× c_out× h' × w' with h' the height of the filter and w' the width of the filter. Indexing input and output with channel and coordinates, i.e., x_k∈ℝ^c_in× h× w and x_k+1∈ℝ^c_out× h × w, we can then write

x_k+1,c,i,j = ∑_d=1^c_in∑_r=1^h'∑_s=1^w' x_k,d,i+r,j+s θ_k,c,d,r,s

where components out of range are zero. We can derive (see Appendix <ref> for details) that

‖∂ x_k+1/∂ x_k‖_2 ≤ √(h'w') ‖θ_k‖_2 ,  ‖∂ x_k+1/∂θ_k‖_2 ≤ √(h'w') ‖ x_k‖_2 .

We summarize the upper bounds of the Lipschitz values, either on the input or on the parameters, for each layer type in the supplementary material <ref>.
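To make the forward/backward bound computation concrete, here is a minimal Python sketch for a multi-layer perceptron; the helper names, the use of an exact SVD, and the restriction to linear layers with 1-Lipschitz activations such as ReLU or tanh are our own simplifying assumptions. It propagates the input-norm bounds X_k forward and the bounds l_k and Δ_k backward.

    import numpy as np

    def spectral_norm(W):
        # Largest singular value of a weight matrix; a power method could be used instead of a full SVD.
        return np.linalg.svd(W, compute_uv=False)[0]

    def layer_sensitivities(weights, biases, X1, loss_lip):
        # weights[k], biases[k]: parameters of the k-th linear layer; X1: bound on the input norm;
        # loss_lip: Lipschitz value of the loss with respect to the network output (l_{K+1}).
        K = len(weights)
        # Forward pass: X[k] bounds the norm of the input of layer k
        # (activations assumed 1-Lipschitz and mapping 0 to 0, so they do not increase the bound).
        X = [X1]
        for W, b in zip(weights, biases):
            X.append(spectral_norm(W) * X[-1] + np.linalg.norm(b))
        # Backward pass: l bounds the gradient of the remaining loss w.r.t. the layer input,
        # Delta[k] bounds the norm of a per-example gradient w.r.t. the parameters of layer k.
        l = loss_lip
        Delta = [0.0] * K
        for k in reversed(range(K)):
            Delta[k] = l * np.sqrt(X[k] ** 2 + 1.0)   # ||(x_k, 1)||_2 <= sqrt(X_k^2 + 1)
            l = l * spectral_norm(weights[k])
        return Delta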
We can conclude that networks for which the norms of the parameter vectors θ_k are bounded are Lipschitz networks as introduced in <cit.>, i.e., they are FNNs for which each layer function f_θ_k^(k) is Lipschitz. We will denote by Θ_≤ C the set of all parameter vectors θ for f_θ such that ‖θ_k‖≤ C for k=1… K, and by Θ_=C the set of all parameter vectors for which ‖θ_k‖=C for k=1… K.

LayerSensitivity. We observe that the upper bounds we have found above are either functions of the norm of the parameters or functions of the norm of the input. Let us call ϕ_x_k and ϕ_θ_k these two functions. We can now introduce Algorithm <ref> to compute the sensitivity Δ_k of layer k. Here we denote by X_k the maximal possible norm of x_k, i.e., for all possible inputs x_1, ‖ x_k‖=‖(f^(k-1)_θ_k-1∘…∘ f_θ_1^(1))(x_1)‖≤ X_k. The algorithm capitalizes on a forward pass to compute the maximal input norms X_k, and a backward pass applying Equation <ref>.

§.§ LipDP-SGD

We introduce a novel differentially private stochastic gradient descent algorithm, called LipDP-SGD, that leverages the estimation of the per-layer sensitivity of the model to provide differential privacy without gradient clipping. Provided a feed-forward model f_θ composed of Lipschitz-constrained operators, a Lipschitz loss ℓ and a bounded input norm X_1, LipDP-SGD is differentially private. Indeed, LipDP-SGD utilizes the Gaussian mechanism. The gradient's sensitivity is determined without any privacy cost, as it depends only on the current parameter values (which are privatized in the previous step, and post-processing privatized values doesn't take additional privacy budget) and not on the data.

Privacy accounting. LipDP-SGD adopts the same privacy accounting as DP-SGD. Specifically, the accountant draws upon the privacy amplification <cit.> brought about by Poisson sampling and the Gaussian moment accountant <cit.>. It's worth noting that while we utilized the Rényi Differential Privacy (RDP) accountant <cit.><cit.> in our experiments, LipDP-SGD is versatile enough to be compatible with alternative accountants.

Requirements. As detailed in the previous subsection <ref>, the loss and the model operators need to be Lipschitz and the norm of the input needs to be bounded. We've enumerated several losses and operators that meet these criteria in the supplementary material. While we use the spectral norm to characterize Lipschitzness <cit.><cit.> in our study <ref>, other methods are also available, as discussed in <cit.>.

ClipWeights. The ClipWeights function is essential to the algorithm, ensuring Lipschitzness, which facilitates model sensitivity estimation. As opposed to standard Lipschitz-constrained networks <cit.><cit.>, which increase or decrease the norms of the parameters to make them equal to a pre-defined value, our approach normalizes weights only when their current norm exceeds a threshold. This results in adding less DP noise for smaller norms. Importantly, as θ is already made private by noise addition in the previous iteration, its norm is private too.

Computation techniques. For both Algorithm <ref> and ClipWeights it's crucial to compute the greatest singular values of matrices efficiently. A renowned technique is the power method <cit.>. If this isn't sufficiently fast, the power method can be enhanced using autograd <cit.>. Another idea is to use the Frobenius norm, which is faster to compute but may have drawbacks in terms of tightly bounding the norm.
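A possible rendering of the ClipWeights step in PyTorch is sketched below; the power-iteration routine, the iteration count, and the restriction to linear layers are our own choices and not necessarily those of the released library.

    import torch

    def power_iteration_sigma(W, n_iter=20):
        # Estimate the largest singular value (spectral norm) of a 2D weight tensor.
        v = torch.randn(W.shape[1])
        u = W @ v
        for _ in range(n_iter):
            u = W @ v
            u = u / (u.norm() + 1e-12)
            v = W.t() @ u
            v = v / (v.norm() + 1e-12)
        return torch.dot(u, W @ v)

    def clip_weights_(model, C):
        # Rescale a weight matrix only if its spectral norm exceeds the threshold C, so that
        # after this step every linear layer is C-Lipschitz; smaller norms are left untouched,
        # which later translates into less added noise.
        with torch.no_grad():
            for module in model.modules():
                if isinstance(module, torch.nn.Linear):
                    sigma = power_iteration_sigma(module.weight)
                    if sigma > C:
                        module.weight.mul_(C / sigma)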
As computing spectral norms is relatively costly, we avoid recomputing them by storing them as indicated in <ref>.

§.§ Avoiding the bias of gradient clipping

Our LipDP-SGD algorithm finds a local optimum (for θ) of F(θ,Z) in Θ_≤ C, while DP-SGD doesn't necessarily find a local optimum of F(θ,Z). In particular, we prove this claim in Appendix <ref>. Essentially, the effect of scaling weight vectors to have bounded norm after a gradient step is equivalent to projecting the gradient on the boundary of the feasible space if the gradient brings the parameter vector out of Θ_≤ C. Furthermore, <cit.> shows an example showing that gradient clipping can introduce bias. We add a more detailed discussion in Appendix <ref>. Hence, DP-SGD does not necessarily converge to a local optimum of F(θ,Z), even when sufficient data is available to estimate θ. While LipDP-SGD can only find models in Θ_≤ C and this may introduce another suboptimality, as our experiments will show this is only a minor drawback in practice, while also others observed that Lipschitz networks have good properties <cit.>. Moreover, it is easy to check whether LipDP-SGD outputs parameters on the boundary of Θ_≤ C, and hence whether the model could potentially improve by relaxing the weight norm constraint. In contrast, it may not be feasible to detect that DP-SGD is outputting potentially suboptimal parameters. Indeed, consider a federated learning setting (e.g., <cit.>) where data owners collaborate to compute a model without revealing their data. Each data owner locally computes a gradient and clips it, and then the data owners securely aggregate their gradients and send the average gradient to a central party updating the model. In such a setting, no party would be able to evaluate that gradient clipping introduces a strong bias in some direction.

§ EXPERIMENTAL RESULTS

In this section, we conduct an empirical evaluation of our approach.

§.§ Experimental setup

We consider the following experimental questions:

Q1 How does LipDP-SGD, our proposed technique, compare against the conventional DP-SGD as introduced by <cit.>?

Q2 What is the effect of allowing ‖θ_k‖ < C rather than normalizing ‖θ_k‖ to C? This question seems relevant given that some authors (e.g., <cit.><cit.>) also suggest to consider networks with constant gradient norm rather than maximal gradient norm, i.e., roughly with θ in Θ_=C rather than Θ_≤ C.

Implementation. We implemented both the LipDP-SGD and DP-SGD methods to ensure that comparisons were made under consistent model structures and preprocessing conditions. To answer question Q2, we also implemented a fixed-norm variant of LipDP-SGD, limited to networks whose weight norms are fixed, i.e., ∀ k: ‖θ_k‖=C, obtained by normalizing ‖θ_k‖ to C in Line <ref> in Algorithm <ref>.

Toolkit. We offer an open-source Python toolkit for implementing LipDP-SGD and DP-SGD on any feed-forward model structure, building on the Opacus <cit.> and PyTorch <cit.> libraries. See Appendix <ref> for more details.

Hyperparameters. We selected a number of hyperparameters to tune for our experiments, aiming at making a fair comparison between the studied techniques while minimizing the distractions of potential orthogonal improvements. To optimize these hyperparameters, we used Bayesian optimization <cit.>. Appendix <ref> provides a detailed discussion.

Datasets and models.
We carried out experiments on both tabular data sets and data sets with image data.First, we consider a collection of 10 real-world tabular datasets (names and citations in Table <ref>).For these, we trained multi-layer perceptrons (MLP).Here, due to the inherent imbalance in many of these datasets, we report the AUC, the area under the ROC curve, as opposed to accuracy, ensuring a more informative performance metric. A comprehensive list of model-dataset combinations is available in the supplementary material <ref>.Second, the image datasets include MNIST <cit.>, Fashion-MNIST <cit.> and CIFAR-10 <cit.>. For these, we trained convolutional neural networks (CNN).Given that accuracy is a commonly adopted metric for these datasets, we opted for it to facilitate easy comparisons with prior research.Infrastructure. All experiments were orchestrated across dual Tesla P100 GPU platforms (12GB capacity), operating under CUDA version 10, with a 62GB RAM provision for Fashion-MNIST and CIFAR-10. Remaining experiments were performedon an E5-2696V2 Processor setup, equipped with 8 vCPUs and a 52GB RAM cache. The total runtime of the experiments was approximately 50 hours, which corresponds to an estimated carbon emission of 1.96 kg<cit.>. More details on the experimental setup and an analysis of the complexity can be found in Appendix <ref>. §.§ ResultsTable <ref> shows our results on the tabular data, comparingwithand .Figure <ref> shows our results for the image data sets. Due to our Bayesian optimization approach, not for all hyperparameter combinations an experiment is run, hence as the plots only show the pareto front (of privacy cost ϵ and accuracy), the plotted data points are not equidistant.§.§ Discussion MLP. As demonstrated in <ref>,consistently achieves better performance in terms of AUC compared to , with the exception of the Patient Survival dataset where the difference is not statistically significant. This observation holds true across datasets of varying numbers of instances and features, as well as for tasks involving imbalanced datasets such as the Dropout or Default Credit datasets. This table also shows the performance one can obtain by limiting the networks to Lipschitz networks whose norm of weights equals a given constant.The results of this approach are a bit inferior, while still outperforming . For an overall conclusion, we perform a Wilcoxon Signed-rank test, at a confidence level of 5%, on 10 measures of AUC for each dataset between thebased on the gradient clipping and thebased on our method, results are shown in <ref>.CNN. In <ref>,exhibits performance either on par with or superior tofor MNIST, Fashion-MNIST, and CIFAR-10. In summary, we can conclude that we can answer to our experimental questions thatoutperformson both tabular data sets with MLP and image data sets with CNN.Moreover, it is beneficial to not normalize the norm of the weight vector θ to a fixed value but to exploit cases where it becomes smaller. § RELATED WORKDP-SGD. DP-SGD algorithms have been developped to guarantee privacy on the final output <cit.>, on the loss function <cit.> or on the publishing of each gradient used in the descent <cit.>.To keep track of the privacy budget consumption, <cit.> relies on the strong composition theorem <cit.> while <cit.> is based on the moment accountant. 
The moment accountant indirectly leverages the Rényi differential privacy <cit.> and gives much tighter bounds on the privacy loss than <cit.>.This has opened an active field of research that builds upon <cit.> in order to provide better estimation of the hyperparameters e.g., the clipping norm <cit.>, the learning rate <cit.>, or the step size of the privacy budget consumption <cit.>. Gradient clipping however, remains the standard approach to scale the added noise.Lipschitz continuity. Lipschitz continuity is an essential requirement for differential privacy in some private SGD algorithms <cit.>. However, since deep neural networks (DNNs) have an unbounded Lipschitz value <cit.>, it is not possible to use it to scale the added noise. Several techniques have been proposed to enforce Lipschitz continuity to DNNs, especially in the context of generative adversarial networks (GANs) <cit.>. These techniques, which mainly rely on weight spectral normalization, can be applied to build DP-SGD instead of the gradient clipping method, as described in Section <ref>. While weight normalization for private SGD has been suggested as future work in <cit.>, to the best of our knowledge we are the first to derive guarantees, to present an empirical evaluation and to consider local bounds depending on the current position in the search space.§ CONCLUSION AND DISCUSSIONIn this paper we proposed a new differentially private stochastic gradient descent algorithm without gradient clipping.We derived a methodology to estimate the gradient sensitivity to scale the noise. An important advantage of weight clipping over gradient clipping is that it avoids the bias introduced by gradient clipping and the algorithm converges to a local optimum of the objective function.We showed empirically that this yields a significant improvement in practice and we argued that this approach circumvent the bias induced by classical gradient clipping.Several opportunities for future work remain.First, it would be interesting to better integrate and improve ideas such as in<cit.> tofind improved bounds on gradients of Lipschitz-constrained neural networks, as this may allow to further reducethe amount of noise needed.Second, various optimizations of the computational efficiency are possible. Currently one of the most important computational tasks is the computation of the spectral norm.Other approaches to more efficiently compute or upper bound it can be explored. One alternative direction would be to investigate the Frobenius norm which is less costly to compute but may have other disadvantages.Our current work is limited to the application of our proposed method on feed-forward models for classification tasks and regression tasks with Lipschitz loss function. Although our method can be easily applied to some other tasks, the field remains open to extend it to other classes of models.Finally, while our experiments have shown promising results, further theoretical analysis of , especially the interaction between sensitivity, learning rate and number of iterations, remains an interesting area of research, similar to the work of <cit.> on . An analysis on the interactions between hyperparameters would provide valuable insights into the optimal use of our method and its potential combination with other regularization techniques.plain § GRADIENT CLIPPING BASED DP-SGDFor comparison with Algorithm <ref>, Algorithm <ref> shows the classic DP-SGD algorithm based on gradient clipping. 
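Since the algorithm listing referred to above does not survive in this text version, the following Python-style sketch conveys its structure (Poisson sampling, per-example clipping, Gaussian noise, averaged update); parameter names and the handling of the batch average are our own simplifications, and the interaction with the privacy accountant is omitted.

    import numpy as np

    def dp_sgd(data, theta0, T, q, C, sigma, eta, grad_fn, rng):
        # Classic gradient-clipping DP-SGD (sketch): per-example gradients are clipped to norm C,
        # summed, perturbed with Gaussian noise of scale sigma * C, averaged, and used for the update.
        def clip(v):
            n = np.linalg.norm(v)
            return v * min(1.0, C / n) if n > 0 else v
        theta = theta0
        for t in range(T):
            batch = [z for z in data if rng.random() < q]      # Poisson sampling with rate q
            grads = [clip(grad_fn(theta, z)) for z in batch]
            noise = rng.normal(0.0, sigma * C, size=theta.shape)
            g = (sum(grads) + noise) / max(len(batch), 1)
            theta = theta - eta * g
        return theta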
§ ESTIMATING LIPSCHITZ VALUES We summarize the upper bounds of the Lipschitz values, either on the input or on the parameters, for each layer type in <ref>. It's important to mention that for the loss, the Lipschitz value is solely dependent on the output x_K+1.with Softmax(x_i) = exp(x_i)/∑_j=1^cexp(x_j), c the number of classes. For cross-entropy, τ an hyperparameter on the Softmax Cross-entropy loss also known as the temperature.For convolutional layers, h' and w' are the height and width of the filter.For multiclass hinge, m is a hyperparameter known as 'margin'. §.§ Details for the convolutional layerThe convolved feature map (θ∗·): ℝ^n_k × |x_k|→ℝ^n_k+1× n × n, with zero or circular padding, is Lipschitz and∇_θ_k (θ_k ∗ x_k) _2 ≤√(h'w')x_k_2and ∇_x_k (θ_k ∗ x_k) _2 ≤√(h'w')θ_k_2with w' and h' the width and the height of the filter.The output x_k+1∈ℝ^c_o u t× n × n of the convolution operation is given by:x_k+1, c, r, s=∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1 x_k, d, r+i, s+jθ_k, c, d, i, jThere follows:x_k+1^2_2= ∑_c=0^c_o u t-1∑_r=1^n ∑_s=1^n ( ∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1 x_k,d, r+i, s+jθ_k, c, d, i, j)^2≤ ∑_c=0^c_o u t-1∑_r=1^n ∑_s=1^n ( ∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1 x_k,d, r+i, s+j^2 ) ( ∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1θ_k, c, d, i, j^2) = ( ∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1∑_r=1^n ∑_s=1^n x_k,d, r+i, s+j^2 ) (∑_c=0^c_o u t-1∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1θ_k, c, d, i, j^2)≤h'w' ( ∑_d=0^c_i n-1∑_r=1^n ∑_s=1^n x_k,d, r, s^2 ) (∑_c=0^c_o u t-1∑_d=0^c_i n-1∑_i=0^h'-1∑_j=0^w'-1θ_k, c, d, i, j^2)=h'w' x_k _2 θ_k_2Since θ_k∗· is a linear operator:(θ_k ∗ x_k) - (θ_k^'∗ x_k)_2 = (θ_k - θ_k^') ∗ x_k_2 ≤θ_k - θ_k^'_2 √(h'w')x_k_2 Finally, the convolved feature map is differentiable so the spectral norm of its Jacobian is bounded by its Lipschitz value:∇_θ_k (θ_k ∗ x_k) _2 ≤√(h'w')x_k_2Analogously,∇_x_k (θ_k ∗ x_k) _2 ≤√(h'w')θ_k_2 § EXPERIMENTAL SETUPOptimization. For the tabular datasets, we performed a full grid search to optimize the hyperparameters. For the image datasets, as the computational cost of training is much higher, to reduce the number of hyperparameter combinations to try, we employed Bayesian optimization <cit.>. Configured as a multi-objective optimization program <cit.>, our focus was to cover the Pareto front between model utility (accuracy or AUC) and privacy (ϵ values at a constant level of δ, set to 1/n as has become common in this type of experiments). It is worth noting that due to our optimization approach, data points during the exploration are not uniformly distributed across the x-axis, as ϵ itself becomes an optimization target. Implementation details In our implementation we use an algorithm slightly different from Algorithm <ref> which takes as input a fixed number of epochs T rather than iterating until the privace budget is exhausted.Otherwise, our implementation is equivalent to Algorithm <ref>.§.§ Hyperparameters Hyperparameter selection. 
In the literature, there are a wide range of improvements possible over a direct application of SGD to supervised learning, including general strategies such as pre-training, data augmentation and feature engineering, and DP-SGD specific optimizations such as adaptive maximum gradient norm thresholds.All of these can be applied in a similar way to bothandand to keep our comparison sufficiently simple, fair and understandable we didn't consider the optimization of these choices.We did tune hyperparameters inherent to specific model categories, in particular the initial learning rate η(0) (to start the adaptive learning rate strategy η(t)) and (for image datasets) the number of epochs T, and hyperparameters related to the learning algorithm, in particular the (expected) batch size s and the threshold C on the gradient norm respectively weight norm.For the tabular datasets, both the number of epochs T and the privacy budget ϵ are fixed, and the noise multiplier σ is computed from T and ϵ (used in line <ref>).Otherwise, Algorithm <ref> is applied as described. For the image datasets, in order to easily perform Bayesian optimization as described below, we use as hyperparameters the number of epochs T and the noise multiplier σ. The privacy cost ϵ is then computed from T and σas an output to generate a data point for the Bayesian optimization. The initial learning rate η(0) is tuned while the following η(t) are set adaptively. Specifically, we use the strategy of the Adam algorithm <cit.>, which update each parameter using the ratio between the moving average of the gradient (first moment) and the square root of the moving average of its squared value (second moment), ensuring fast convergence.We also investigated varying the maximum norm of input vectors X_0 and the hyperparameter τ of the cross entropy objective function, but the effect of these hyperparameters turned out to be insignificant.Both the clipping threshold C for gradients inand the clipping threshold C for weights incan be tuned for each layer separately. While this offers improved performance, it does come with the cost of consuming more of the privacy budget, and substantially increasing the dimensionality of the hyperparameter search space. In a few experiments we didn't see significant improvements in allowing per-layer varying of C_k, so we didn't further pursue this avenue.<ref> summarizes the search space of hyperparameters. It's important to note that we did not account for potential (small) privacy losses caused by hyperparameter search, a limitation also acknowledged in other recent works such as <cit.>.§.§ Models<ref> shows details of the models we used to train on tabular and image datasets. We consider 10 tabular datasets: adult income <cit.>, android permissions <cit.>, breast cancer <cit.>, default credit <cit.>, dropout <cit.>, German credit <cit.>, nursery <cit.>, patient survival <cit.>, thyroid <cit.>, and yeast <cit.>.See Table <ref> for the number of instances and features for each tabular dataset.§.§ Runtime Our experiments didn't show significant deviations from the normal runtime behavior one can expect for neural network training.As an illustration, we compared on MNISTthe mean epoch runtime ofwith .We measure runtime against the logical batch size, limiting the physical batch size to prevent memory errors as recommended by PyTorch documentation <cit.>. <ref> shows howis efficient in terms of runtime compared to . 
It may be possible to further improveruntime as it currently heavily relies on the data sampler provided by Opacus, which processes data per instance, while applying batch processing techniques inspired on PyTorch would be more efficient.The staircase shape of the plot seems to be a result of PyTorch and Python memory management strategies.§LIBRARY We offer an open-source toolkit for implementing LipDP-SGD on any FNN model structure. This toolkit builds on the Opacus and PyTorch libraries. Drawing inspiration from Opacus, our library introduces the `LipPrivacyEngine` class to facilitate private training. This class is dependent on two main components: the `DataLoader`, which utilizes Poisson sampling to harness the advantages of privacy amplification <cit.>, and the `Optimizer`, responsible for sensitivity calculation, differential privacy noise addition, and parameter normalization during each iteration.`README.md`, provided in the supplementary materials, details how to run the library and how to reproduce the experiments.§ AVOIDING THE BIAS OF GRADIENT CLIPPINGWe show thatconverges to a local minimum inwhilesuffers from bias and may converge to a point which is not a local minimum of .We use the word 'converge' here somewhat informally, as in each iteration independent noise is added the objective function slightly varies between iterations and hence none of the mentioned algorithms converges to an exact point. We here informally mean approximate convergence to a small region, assuming a sufficiently large data set Z and/or larger ϵ such that privacy noise doesn't significantly alter the shape of the objective function.Our argument below hence makes abstraction of the noise for simplicity, but in the presence of small amounts of noise a similar argument holds approximately, i.e., after sufficient iterationswill produce θ values close to a locally optimal θ whilemay produce θ values in a region not containing the relevant local minimum.First, let us consider convergence.Theorem <ref>. We consider the problem of finding a local optimum in :[ minimize F(θ,Z); subject toθ_2 ≤ C ]We introduce a slack variable ζ: [minimizeF(θ,Z);subject to θ_2 + ζ^2 = C ]Using Lagrange multipliers, we should minimizeF(θ,Z) - λ (θ_2 + ζ^2 - C)An optimum in θ, λ and ζ satisfies∇_θ F(θ, Z) - λθ =0 θ_2 + ζ^2 - C=02λζ = 0From Eq <ref>, either λ=0 or ζ=0 If ζ>0, θ is in the interior ofand there follows λ=0 and from Eq <ref> that ∇_θ F(θ, Z) =0.For such θ,does not perform weight clipping.If the learning rate is sufficiently small, and if it converges to a θ with norm θ_2<C it is a local optimum. On the other hand, if ζ=0, there follows from Eq <ref> that θ_2=C, i.e., θ is on the boundary of . If θ is a local optimum in , then ∇_θ F(θ,Z) is perpendicular on the ball of vectors θ with norm C, and for such θwill add the multiple η(t).∇_θ F(θ,Z) to θ and will next scale θ back to norm C, leaving θ unchanged.For a θ which is not a local optimum in , ∇_θ F(θ,Z) will not be perpendicular to the ball of C-norm parameter vectors, and adding the gradient and brining the norm back to C will move θ closer to a local optimum on this boundary of .This is consistent with Eq <ref> which shows the gradient with respect to θ for the constrained problem to be of the form ∇_θ F(θ, Z) - λθ. Second, we argue thatintroduces bias. This was already pointed out in <cit.>'s examples 1 and 2. 
A simple situation where bias occurs and DP-SGD does not converge to an optimum of F is when errors aren't symmetrically distributed, e.g., positive errors are less frequent but larger than negative errors. Consider the scenario of simple linear regression. A common assumption of linear regression is that instances are of the form (x_i,y_i) where x_i is drawn from some distribution P_x and y_i=ax_i+b+e_i where e_i is drawn from some zero-mean distribution P_e. When no other evidence is available, one often assumes P_e to be Gaussian, but this is not necessarily the case. Suppose for our example that P_x is the uniform distribution over [0,1] and P_e only has two possible values, in particular P_e(9)=0.1, P_e(-1)=0.9 and P_e(e)=0 for e∉{9,-1}. So with high probability there is a small negative error e_i while with small probability there is a large positive error, while the average e_i is still 0. Consider a dataset Z={(x_i,y_i)}_i=1^n. Let us consider a model f(x) = θ_1 x + θ_2 and let us use the square loss ℒ(θ,Z)=∑_i=1^n ℓ(θ, x_i,y_i)/n with ℓ(θ, x,y) = (θ_1 x + θ_2 - y)^2. Then, the gradient is ∇_θℓ(θ, x,y) = ( 2(θ_1 x + θ_2 -y) x, 2(θ_1 x + θ_2 - y)). For an instance (x_i,y_i) with y_i = ax_i+b+e_i, this implies ∇_θℓ(θ, x_i,y_i) = ( 2((θ_1-a) x_i + (θ_2-b) - e_i) x_i, 2((θ_1-a) x_i + (θ_2-b) - e_i)). For sufficiently large datasets Z where the empirical loss approximates the population loss, the gradient considered by DP-SGD will approximate ∇_θℒ(θ,Z) ≈ ∑_e∈{9,-1} P_e(e) ∫_0^1 ∇_θℓ(θ, x, ax+b+e) dx = ∑_e∈{9,-1} P_e(e) ∫_0^1 ( 2((θ_1-a) x + (θ_2-b) - e) x, 2((θ_1-a) x + (θ_2-b) - e)) dx = ∫_0^1 ( 2((θ_1-a) x^2 + (θ_2-b)x - x𝔼[e]), 2((θ_1-a) x + (θ_2-b) - 𝔼[e])) dx = ( 2((θ_1-a)/3 + (θ_2-b)/2), 2((θ_1-a)/2 + (θ_2-b))). This gradient becomes zero if θ_1=a and θ_2=b as intended. However, if we use gradient clipping with threshold C=1 as in DP-SGD, we get: g̃ ≈ ∑_e∈{9,-1} P_e(e) ∫_0^1 clip_1(∇_θℓ(θ, x, ax+b+e)) dx = ∑_e∈{9,-1} P_e(e) ∫_0^1 clip_1(( 2((θ_1-a) x + (θ_2-b) - e) x, 2((θ_1-a) x + (θ_2-b) - e)) ) dx. While for a given e the term (θ_1-a)x+(θ_2-b) may be small for part of the population, for a fraction of the instances the gradients are clipped. For the instances with e=9 this effect is stronger. The result is that for θ_1=a and θ_2=b the average clipped gradient g̃ doesn't become zero anymore, in particular ‖g̃‖_2=0.7791. In fact, g̃ becomes zero for θ_1=a+0.01765 and θ_2=b+0.94221. Figure <ref> illustrates this situation. § RELATED WORK This section provides some additional references next to those already in Section <ref>. §.§ Robustness for privacy To address the computational challenges, robustness certification methods can be employed for Lipschitz estimation. Techniques like LipSDP <cit.> and Fast-Lip <cit.> are useful in estimating L_k, eliminating the need to compute matrix norms for both per-layer Lipschitz values and input bounds propagation. Similarly, training approaches for networks with guaranteed robustness, such as the LP-based method <cit.>, ensure the network remains Lipschitz constrained without requiring spectral regularization. Observe that techniques such as LipSDP offer Lipschitz value calculations based on decision variables determined either at the neuron level (as in LipSDP-Neuron) or at the layer level (as in LipSDP-Layer). This allows for the adjustment of the added privacy noise to the specified level. While using LipSDP-Neuron might offer finer granularity in scaling the DP noise compared to our approach, it introduces additional computational overhead.
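As a concrete point of reference for the per-layer Lipschitz values L_k discussed above, the sketch below estimates the spectral norm of a dense layer by power iteration and multiplies the per-layer bounds; this is a generic estimate of our own (valid for 1-Lipschitz activations such as ReLU), not the LipSDP or Fast-Lip procedures cited here.

import torch

def spectral_norm(W, n_iter=50):
    # Power iteration: estimates the largest singular value of W, i.e. the
    # Lipschitz constant of x -> W x with respect to the L2 norm.
    v = torch.randn(W.shape[1])
    v = v / v.norm()
    for _ in range(n_iter):
        u = W @ v
        u = u / (u.norm() + 1e-12)
        v = W.t() @ u
        v = v / (v.norm() + 1e-12)
    return float((W @ v).norm())

def lipschitz_upper_bound(linear_layers):
    # For a feed-forward network with 1-Lipschitz activations, the product of
    # per-layer spectral norms upper-bounds the Lipschitz value of the network.
    bound = 1.0
    for layer in linear_layers:
        bound *= spectral_norm(layer.weight.detach())
    return bound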
§.§ Estimating Lipschitz values of neural networks Three main techniques dominate the field when it comes to estimating the Lipschitz value of neural networks: automatic differentiation-based estimation <cit.>, robustness certification methods <cit.><cit.>, and probabilistic estimation strategies <cit.><cit.>. Our emphasis in this context is on automatic differentiation, primarily because it hinges on the relationship between DP-SGD and LipDP-SGD.
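The non-vanishing average clipped gradient in the linear-regression example from the section on gradient-clipping bias above can be checked numerically. The sketch below uses our own midpoint quadrature over x in [0,1] and the two error values; at θ = (a, b) it returns an unclipped average gradient of essentially zero and a clipped average gradient of norm roughly 0.78, consistent with the value quoted in that section.

import numpy as np

def average_gradients(theta1, theta2, a=0.0, b=0.0, C=1.0, n_grid=200000):
    # Average per-example gradient of (theta1*x + theta2 - y)^2 over
    # x ~ Uniform[0,1] and e in {9 (prob 0.1), -1 (prob 0.9)}, with and
    # without clipping each gradient to L2 norm C (midpoint quadrature in x).
    x = (np.arange(n_grid) + 0.5) / n_grid
    plain = np.zeros(2)
    clipped = np.zeros(2)
    for e, p in [(9.0, 0.1), (-1.0, 0.9)]:
        resid = theta1 * x + theta2 - (a * x + b + e)
        g = np.stack([2.0 * resid * x, 2.0 * resid], axis=1)
        plain += p * g.mean(axis=0)
        scale = np.minimum(1.0, C / np.linalg.norm(g, axis=1))
        clipped += p * (g * scale[:, None]).mean(axis=0)
    return plain, clipped

plain, clipped = average_gradients(theta1=0.0, theta2=0.0)
print(np.linalg.norm(plain))    # ~0: the unclipped average gradient vanishes at (a, b)
print(np.linalg.norm(clipped))  # ~0.78: the clipped average does not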
http://arxiv.org/abs/2310.18001v1
{ "authors": [ "Antoine Barczewski", "Jan Ramon" ], "categories": [ "cs.LG", "cs.CR" ], "primary_category": "cs.LG", "published": "20231027091715", "title": "DP-SGD with weight clipping" }
[email protected] Department of Chemistry, Stanford University, Stanford, CA, USA 94305 The fluctuations of a nonequilibrium bath enable dynamics inaccessible to any equilibrium system.Exploiting the driven dynamics of active matter in order to do useful work has become a topic of significant experimental and theoretical interest. Due to the unique modalities controlling self-assembly, the interplay between passive solutes and the particles in an active bath has been studied as a potential driving force to guide assembly of otherwise non-interacting objects.Here, we investigate and characterize the microscopic origins of the attractive and repulsive interactions between passive solutes in an active bath.We show that, while assembly does not occur dynamically for achiral active baths, chiral active particles can produce stable and robust assembly forces.We both explain the observed oscillatory force profile for active Brownian particles and demonstrate that chiral active motion leads to fluxes consistent with an odd diffusion tensor that, when appropriately tuned, produces long-ranged assembly forces.Microscopic origin of tunable assembly forces in chiral active environments Grant M. Rotskoff January 14, 2024 ===========================================================================§ INTRODUCTIONBecause activity induces dramatic changes in collective motility, many recent works have sought to explore <cit.> and characterize <cit.> the consequences of an active bath for self-assembly. It has been suggested that experimental active matter systems, including Janus particles <cit.> and active dumbbells <cit.>, could provide external environments that stabilize assemblies of passive particles that, in equilibrium conditions, have no propensity to self-assemble. Recent experiments <cit.> suggest that chiral active matter can accelerate assembly of passive solutes that aggregate in equilibrium but on slow timescales, and simulations have suggested that similar modalities may exist for achiral active matter.While there is evidence for activity driving self-assembly <cit.> in some settings, numerical observations of the force induced by activity show a highly oscillatory force profile between two parallel walls <cit.>. Because these systems are far from equilibrium, there is no well-established paradigm to relate this oscillatory force profile to assembly kinetics, if self-assembly even occurs in this setting.There are indeed strong attractive forces emerging from the bath degrees of freedom at some separation distances between fixed passive objects, but it is not clear how one would modulate or control these dramatic fluctuations in the environment to achieve robust assembly.Here, we assess the microscopic origin of the attractive and repulsive forces that arise between passive objects with purely repulsive inter-particle interactions in active environments. We model the solvent as a system of chiral active particles (CAPs), in which the ith solvent particle evolves according to a driven, nonequilibrium dynamics <cit.>,_i ( t ) = ξ^-1 [_i + ν_i ( t )] + √(2D_t)_i,_i ( t )= [cosθ_i ( t ), sinθ_i ( t )]^⊤,θ̇_i ( t ) = ω + √(2D_r)Γ_i ,where _i denotes the position of the ith particle, ξ is the translational drag coefficient, _i is the force,_i denotes the direction of its active velocity, ν is the magnitude of the active force, ω is the active torque, and D_t and D_r are the translational and rotational diffusion constants that are related by the formula D_r = 3σ^-2 D_t. 
Here, Λ and Γ are mean zero Gaussian random variables with δ-correlations in space and time. The particles interact via the Weeks-Chandler-Anderson (WCA) potential <cit.> with ϵ_ij = 40 and σ_ij = 1 unless otherwise specified (cf. Sec. <ref>). The corresponding force is _i = - U(t)_i. All simulations are conducted using HOOMD-blue <cit.>. When ω=0, the motion is achiral, and we refer to this case as active Brownian particles (ABPs).Furthermore, when the Péclet number νσ / D_t is sufficiently large, the local diffusivity depends strongly on density and motility induced phase separation (MIPS) occurs <cit.>.Because work is dissipated to the medium in the steady state for active particles <cit.>, several investigations have sought to exploit this energy for useful work. This requires analyzing the force exerted by an active bath on passive solutes, which has been studied in a variety of contexts <cit.>. Perhaps most relevant to our current work, the induced force between two parallel walls that arises from the nonequilibrium fluctuations of an active bath of ABPs has been dubbed an active Casimir effect <cit.>, though this analogy is perhaps misleading. As we show, the oscillatory force profile for ABPs can be entirely explained by packing effects related to the finite size of the particles (cf. Sec. <ref>).This is not a force that can be explained by density fluctuations in a continuous field at these length scales, as, for example, occurs in classical hydrophobicity <cit.>.In the case of chiral active matter, however, distinct mobilities emerge, creating opportunities for modulating interactions between passive particles. The emergent odd transport properties that arise from actively driven torques can drive stable and large-scale assembly.The presence of passive particles that constitute a boundary in the system breaks global translational invariance and yields non-vanishing fluxes in the vicinity of the passive objects due to odd diffusion.These fluxes, in turn, produce particle currents along the boundaries of passive objects and lead to stable and robust effective assembly forces. With straightforward theoretical arguments, a minimal continuum model of odd diffusivity, and extensive numerical simulations, we assess the microscopic origins of assembly forces between passive objects in two and three-dimensional active baths. We show that for achiral active baths, attractive forces do not arise except in the true “Casimir” regime, the limit of extremely low density.Remarkably, chirality, when appropriately tuned, can manifest long-ranged and stable assembly forces for passive particles, as illustrated in Fig. <ref>. What is more, this attraction appears to be driven not by collective fluctuations, but rather by fluxes induced by odd diffusivity. *Relation to prior workA number of investigations have shed light on active systems as a force for self-organization. While nonequilibrium self-assembly remains a widely studied topic <cit.>, the works mostly closely related to our investigation here include experiments and simulations by Grober et al. <cit.> demonstrate that clustering of sticky passive particles is accelerated by active matter and the emergent structures are strongly modulated by a chiral active bath.However, their work does not examine the microscopic flows of the bath particles in the vicinity of the passive objects, nor does it consider purely repulsive passive particles, both of which are the focus of our present work. Also closely related, a series of works by Mallory et al. 
<cit.> demonstrate that nonequilibrium perturbations arising from an active bath can provide a self-organizing force. They investigate, for example, a setting in which the active particles are designed with an inherent asymmetry that produces directional flows and hence kinetically induced aggregation <cit.>, which has been investigated separately by Baek et al. <cit.>. This mechanism differs from those investigated here, where the passive particles do not have any asymmetric interaction with the active bath.Yang et al. <cit.> reported assembly of passive particles driven by a high density bath of inertial chiral active particles, though the mechanism is not thoroughly characterized. Because inertia can have profound effects on the phase behavior of active systems <cit.>, it is not immediately clear that these effects should arise in the overdamped regime we consider here. Finally, the nature of the long-ranged interaction between two parallel walls that is mediated by an active bath has been studied with thorough numerical simulations <cit.>. The underlying microscopic dynamics are vastly different when the bath particles have chirality; articulating this difference precisely is the primary focus of the present work.§ ASSEMBLY IS NOT KINETICALLY STABLE FOR ACHIRAL ACTIVE MATTER §.§ Packing and microscopic repulsion To assess the underlying molecular fluctuations governing the activity-induced self-assembly depicted in Fig. <ref>, we first consider the minimal model depicted in Fig. <ref> (a). In this geometry, we consider two parallel walls of length l separated by a distance r. We define the total interaction force asF_ wall^( tot)[ρ()] = F_ wall^( int)[ρ()] - F_ wall^( ext)[ρ()]using the sign convention that if the force applied to the walls by particles in the interstitial region exceeds the force applied by particles outside this region, then the force is positive, and the walls will repel. For achiral ABPs, this force oscillates between attractive and repulsive (Fig. <ref> (b)). Moreover, the effective free energy Fig. <ref> (c) further indicates that when the particles are achiral, there is no attractive interaction and assembly is not accessible dynamically.The dominant contribution to forces both internal and external is the local enhancement of density around a passive solute.The large density gradient perpendicular to the walls forms through a mechanism similar to motility-induced phase separation (MIPS): particles orient into the direction of the wall and generate a force in proportion to the average local density a distance σ away from the wall, which we denote ρ_ wall. This force can be approximated as F(ρ_ wall) ≈ν l στρ_0 where τ is the characteristic rotational diffusion time. When the walls are separated by a distance larger than the length over which the density is enhanced, F_ wall^( int) = -F_ wall^( ext) and the interaction vanishes. In fact, any attractive force for achiral (and nearly achiral) active particles arises from density correlations in the interstitial region. This basic picture holds over a variety of conditions, including different points in the MIPS phase diagram (i.e., different choices of total density and active velocity) as shown in Figs. 
<ref>–<ref>.§.§ Effective nonequilibrium free energiesTo quantitatively assess the propensity for passive solutes to self-assemble in given nonequilibrium bath conditions, we computed an effective nonequilibrium free energy profile for the solute degrees of freedom.To do so, we draw inspiration from liquid state theory <cit.>, and quantify the effective interaction by measuring the radial distribution function, or g(r) for fluctuating passive solutes.Our solutes are non-spherical and hence evolve dynamically as rigid bodies composed of particles interacting under the WCA potential with other particles not in the same rigid body <cit.>. Following Ref. <cit.>, a rigid body b composed of N_b internal particles has its internal particles indexed by B_bk = [ B_b1, …, B_bN_b], with the overall rigid body position and quaternion given by _b and _b, respectively. Evaluating the net force and torque on the bth rigid body per_b= ∑_i ∈ B_bk_i,_b= ∑_i ∈ B_bk( _i - _b ) ×_i,the equations of motion for the rigid body are given by_b ( t ) = ξ_b^-1_b + √(2D_t,b)_b ,_b ( t ) = 0.5 (ξ_b,r^-1_b + √(2 D_r,b)_b _b_b^-1) _b,where ξ_b and ξ_b,r are the translational and rotational drag coefficients, respectively, D_t,b and D_r,b are the translational and rotational diffusion constants, respectively, and _b and _b are independent Gaussian white noises with zero mean and unit variance. As the system is two-dimensional, only the z-component of the torque and _b are non-zero. The relations between the drag coefficients and diffusion constants are given by the Stokes-Einstein and Stokes-Einstein-Debye relations in which for the solute one hasD_t,b = ξ_b^-1 , D_r,b = ξ_b,r^-1 = 3σ_H^-2 D_t,b ,where σ_H is the hydrodynamic diameter of the solute, and k_B and T are the Boltzmann constant and temperature, respectively. Analogous relations hold for the solvent. We choose the diffusion constants for the solvent and solutes to be equal with D_t = D_t,b = 1, take = 1, and set ϵ_ij = 40 and σ_ij = 1 unless otherwise specified.The radial distribution function is then computed between the centers of mass of the passive solutes. In Fig. <ref> (c), we show the effective interaction for ABPs in dark blue (ω=0).We plot -ln g(r) because, in equilibrium, this quantity would correspond to the reversible work required (or gained) when bringing two solutes into contact.Away from equilibrium, this thermodynamic interpretation is no longer valid, but a positive value of -ln g(r) as r→ 0 indicates that it is statistically unlikely for the two solutes to come together. For ABPs, this is the case, with repulsion at close distances being observed and holding across densities except for weak attraction at low density (Fig. <ref>). §.§ Minimal model of the repulsive forceA minimal model illustrates that the oscillatory force profile for achiral particles arises entirely due to packing constraints: the separation distances r at which the two walls accommodate a high-density hexatic packing lead to large repulsive forces, while the separation distances r that are not commensurate with a hexatic lattice lead to smaller values of F_ wall^( int), and consequently attractive forces. We verified that hexatic order was correlated with the location of the repulsive peaks in the force profile (Figs. <ref> and <ref>) by computing an average of the hexatic order parameterψ_6(_k) = 1/6∑_l∈𝒩(_k) e^i 6 θ_kl,where 𝒩() denotes the set of nearest neighbors of the particle at positionand θ_kl is the angle between the vector _l-_k and e_x. 
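For reference, ψ_6 can be evaluated directly from particle positions once each particle's six nearest neighbors are known. The short sketch below is our own NumPy version with a brute-force neighbor search and no periodic wrapping; the production analysis uses the implementation in Freud, as described in the Methods.

import numpy as np

def hexatic_psi6(positions):
    # positions: (N, 2) array of particle coordinates, N >= 7.
    # Returns the complex hexatic order parameter psi_6 of every particle,
    # built from the bond angles to its six nearest neighbors.
    n = len(positions)
    psi = np.zeros(n, dtype=complex)
    for k in range(n):
        d = positions - positions[k]
        dist = np.linalg.norm(d, axis=1)
        nearest = np.argsort(dist)[1:7]        # six nearest neighbors, excluding k itself
        angles = np.arctan2(d[nearest, 1], d[nearest, 0])
        psi[k] = np.mean(np.exp(1j * 6.0 * angles))
    return psi

# |psi_6| close to 1 signals local six-fold (hexatic) order.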
We denote by ψ̅_6 the average value of ψ_6 restricted to the region between the two parallel walls. The total force on the walls arises from active particles oriented into the walls aggregating and pushing inward. The forces generated by the active particles balance exactly when the separation between the walls is large: it is inter-particle correlations in the interstitial region that lead to the nontrivial force profile shown in Fig. <ref> (b). To capture the nature of the forces arising from these inter-particle correlations, we first define ρ̅(r) as the average density in the interstitial region as a function of the wall separation distance. At the Péclet numbers we consider here, the density adjacent to the internal and external walls remains close to the hexatic density. A minimal model for the density in the interstitial region is thenρ̅(r) = n_ hex(r) + Δ n_ WCA(r)/r ℓwhere n_ hex(r) is simply the number of particles accommodated by a hexatic packing ⌊ 2r/(√(3)σ) ⌋ℓ andΔ n_ WCA(r) = ℓ e^-r D_r/ν/1+exp[ - α ( δ r - δ_0 r)]with δ r = r - ⌊ 2r/(√(3)σ) ⌋ℓ, an offset δ_0 = c (2^1/6-1)σ, and a parameter α that accounts for the softness of WCA interaction. The exponential decay accounts for the decay in correlations with the boundary, and the rate is chosen to be the persistence length for ABPs. The principal contribution to the average internal force F^( int)_ wall at first order in r is a repulsive inter-particle force due to a strained hexatic packing. That is, by compressing the space available to the hexatic lattice, the particles in the interstitial region are strained.This additional contribution can be calculated easily, Δ F^( int)_ wall(r) ≈ 18 √(4)ϵσ l (ρ̅(r) - ρ_ hex)^2which uses a second-order expansion of the repulsive interaction. This model, despite its strong simplifying assumptions, predicts the location and magnitude of the oscillatory repulsive peaks with surprising accuracy (Fig. <ref>). For separation distances that are just above those that accommodate a hexatic lattice, depletion of achiral particles in the interstitial region leads to lower force generation in F_ wall^( int), and the walls are pushed together by the external active bath.This complex interplay of packing and finite-size effects does not yield a robust assembly force.To test the hypothesis that the repulsive force emerges from hexatic order, we conducted extensive numerical simulations under conditions in which this order could not emerge.First, at very low densities, there is not a sufficiently large relative enhancement of the local density to achieve a fully packed interstitial region, leading to a ψ̅_6 ≈ 0 for all wall separation distances.As shown in Fig. <ref>, there is no oscillatory force and, in fact, there is only a weakly attractive long-ranged force.Additionally, we varied the angle between the plates and offset the distance between the center of the plates in the y-direction, processes which disrupt the ordering between the plates, and found in both cases the force between the plates to be diminished (Fig. <ref>). In contrast, constraining the motion of the plates to be only in the x-direction and hence preserving the hexatic lattice is found to lead to attractive effective free energies between the plates (Fig. <ref>).At higher particle densities, we disrupted hexatic order by examining a system of continuously polydisperse active particles. The particle diameters were drawn with a power law decay P(σ) = A σ^-3. 
This model has been studied in the literature on glassy dynamics due to its resistance to crystallization, even when deeply quenched <cit.>.Without regular order, the system does not admit the high-density interstitial packings achieved when the bath contains only particles of a single diameter. Consistent with the minimal model, this effect eliminates the oscillations in the force profile and the effective nonequilibrium free energy -ln g(r) shows only a short-ranged repulsion as r→ 0 (Fig. <ref>; see Sec. <ref> for further details). § ROBUST ASSEMBLY IN A CHIRAL ACTIVE BATH Remarkably, chiral active matter drives reliable, dynamical assembly of passive solutes, but the microscopic mechanism leading to this phenomenon is utterly distinct.Fig. <ref> illustrates that self-assembly occurs for an appropriate combination of torque ω, activity ν, and active solvent volume fraction ϕ_A. We quantify the assembly by constructing histograms of the local density of passive solutes ρ_P; phase separation corresponds to the coexistence of a low density and high density region (see Sec. <ref> for further details).As shown in Figs. <ref> (c-e), a range of parameters support self-assembly when the torque is sufficiently large, along with larger square size driving assembly as shown in Fig. <ref>.Furthermore, assembly does not depend strongly on the shape or dimensionality of the object.In Fig. <ref>, we quantify the propensity of passive spherical objects to assemble in 3D, which occurs over a range of active torques, activities, solvent densities, and passive sphere radii. Similar trends hold for passive cubes immersed in a bath of chiral active particles in 3D (Fig. <ref>). We similarly studied passive disks (Fig. <ref>) and triangular passive particles (Fig. <ref>) to ensure that observations in 2D were not narrowly tailored to square geometries, where the analysis in terms of the inter-wall forces is most physically transparent. Examining force generation between two parallel walls, as in the case of achiral active particles, provides mechanistic insight into the forces governing assembly. As shown in Fig. <ref> (b), for sufficiently large torques, the oscillatory force profile is not maintained and a long-ranged attractive force sets in, though with a much smaller magnitude.Chiral motion does not lead to a long-lived density enhancement at the boundary of a passive wall, in fact, local microscopic fluctuations are much more subtle. §.§ Local density profile depends strongly on torque As discussed in Sec. <ref>, the oscillatory force profile between two walls results from changes in the typical local density at differing separation distances. Microscopically, the enhancement in local density both between and outside the walls results from the slow orientational relaxation of achiral active particles.In Fig. <ref> (a), we computed the density and orientation fields for ω=0 at a separation of 3.68 σ, a value at which the force is maximal.The average orientations of the particles in the region of enhanced density point towards the boundary of the passive object. This trend is robust across separation distances and force magnitudes, as shown in Figs. <ref>-<ref> (a).Mechanistically, the force generation depends strongly on large local density enhancements. Due to the nonzero torque, chiral active particles do not simply aggregate at the boundary of a passive object, but rather flow parallel to the boundary.This boundary flux is evident in Fig. 
<ref> (b), where ω=5, but also for all values of ω that we tested (cf. Figs. <ref>-<ref>). For sufficiently small torques relative to the Péclet number, the chiral active particles produce small boundary fluxes and have a force profile that is similar in magnitude and shape to that of the ABPs, as shown in <ref> (b). The similarity in the density and orientation fields for small torques (ω=0.3125) is evident in Figs. <ref>-<ref> (b) and Fig. <ref>-<ref> (b).For large torques, the fluxes at the boundary of a passive object are sufficiently large that density does not accumulate proximal to the passive object (Fig. <ref>-<ref> (e)).As a result, force generation is consistently near zero for ω=20, as shown in Fig. <ref> (b). In this regime, no assembly occurs, as shown in Fig. <ref> and quantified by the effective free energy profile in Fig. <ref> (c).In the intermediate regime, the interplay between boundary fluxes and density accumulation can lead to robust assembly forces.Fig. <ref> demonstrates that assembly does indeed occur when ω=5. This particular set of conditions for the chiral active bath leads to a long-ranged attractive interaction, which decays over roughly 15 particle diameters (Fig. <ref> (c)).Microscopically, it is evident from numerical simulations that the density accumulates asymmetrically, with more particles on the outside compared to the region between the two walls (Figs. <ref> (b),  <ref>, and  <ref>). This phenomenon results in higher applied forces on the outside, driving the passive objects together.§.§ Odd diffusivity drives assembly To assess the microscopic origins of the asymmetric density field and hence the attractive assembly forces for this narrow range of torque values, we construct a minimal model of the concentration profile.Chiral active liquids break time-reversal symmetry and parity with their single particle torques and lead to “odd” hydrodynamic response functions <cit.>. Most relevant to our setting, Hargus et al. <cit.> showed that chiral active particles can be modeled by a simple continuum description of “odd diffusivity”.The implications of an odd diffusion tensor manifest only in the presence of a boundary that breaks translational symmetry for the active bath, such as the presence of passive particles. Each passive particle leads to a non-vanishing steady-state mass current in its proximity. Fig. <ref> shows both density and the average orientation for achiral ABPs (a) and chiral ABPs (b).In the chiral case, that is, for all ω > 0, there is a net flux parallel to the walls, oriented in opposite y-directions on the -x and +x sides.Because there are no sources or sinks in our periodic system, the steady state density profile must be consistent with a divergence-free current due to the conservation of mass.In two dimensions, the active particle density ρ(x,y) satisfies a continuity equation∂_t ρ(x,y) = ∇·( 𝖣·∇ρ(x,y))where 𝖣_ij = 𝖣_ sδ_ij - 𝖣_ aϵ_ij, δ_ij is a Kronecker δ-function, and ϵ_ij is the antisymmetric Levi-Civita tensor.Without boundary conditions, the continuity equation results in a steady-state concentration profile that is independent of 𝖣_ a; however, when the density has Neumann boundary conditions, the antisymmetric contribution to the diffusion tensor can affect the steady-state. 
To determine if odd diffusivity plays a significant role in shaping the density field of the chiral active particles, we numerically solve (<ref>) in conditions where assembly does and does not occur.To do so, we first estimated the flux around the boundary of a passive object by computing(x,y) = 1/ξ T∑_i=1^N∫_0^T (_i + ν_i ( t )) k((x,y), _i) dt,where the integral is over a simulation of duration T in the steady state. We use a Gaussian kernel (described in detail in Sec. <ref>) to obtain a smooth flux field. The diffusion coefficients can be directly related to the velocities via a Green-Kubo relation <cit.>, 𝖣_ s = 1/2∫_0^∞ v_i(t) v_j(0) δ_ij dt,𝖣_ a = -1/2∫_0^∞ v_i(t) v_j(0) ϵ_ij dt.We compute these integrals with numerical quadrature after obtaining the time autocorrelation function using the Wiener-Khinchin theorem. These autocorrelation functions are shown in Fig. <ref>, with diffusion coefficients across ρ and ω shown in Fig. <ref>.With the boundary fluxes and the diffusion tensor determined, we then solve for the steady-state density profile using finite differences. We describe the numerical details in Sec. <ref>. As shown in Fig. <ref>, we compute the stationary density profile as a function of the ratio of the flux in the interstitial region J_ inside to the flux on the outside boundary J_ outside and also the ratio of the symmetric to antisymmetric part of the diffusion tensor, 𝖣_ s/𝖣_ a. We then computed the difference between the average concentration near the walls in the interstitial and a region of the same area on the outside; this is a proxy for the magnitude of the induced attractive force because the total force exerted on the walls is proportional to the local concentration.At short separation distances, we see (Fig. <ref> (a-b)) that this concentration difference is negative over a large range of J_ inside/J_ outside, provided that 𝖣_ a is appreciable relative to 𝖣_ s. As the separation between the walls grows, this effect persists but becomes considerably weaker, as correlations between the adjacent walls in the interstitial region decay, as shown in Fig. <ref> (c-d). The inset concentration profiles in Fig. <ref> are in good qualitative agreement with the density profiles obtained from direct numerical simulation in Figs. <ref>-<ref>.§.§ Minimal model of odd diffusivity captures effective attraction While the difference in concentration that we plot in Fig. <ref> is highly suggestive of an attractive force, to quantify the resulting force, we develop a simple mean-field model of the force profile using the computed concentration profiles. The excess force into the wall can be computed, F_ wall[ρ_ ss()] = ∫_wall [F() + v ()] ·_1 ρ_ ss() d,which is simply the projection of the total force onto the wall and ≡ (, θ); the unit vector _1 is aligned perpendicular to the wall. 
This force accounts for particle-particle interactions in addition to the wall-particle interactions.Making a mean-field assumption that the excess force into the wall arises not from inter-particle interactions, but instead from the larger scale particle flows and the active velocity, we obtainF_ wall^(int)≈∫_intf̅() ·_1 ρ_ ss() dwith f̅() = √(ν^2 - σ^2 J_y^2()).This expression allows us to estimate the force on the walls using the steady-state density obtained by solving (<ref>) with Neumann flux boundary conditions, imposed using the numerically measured fluxes.Integrating this expression over the regions “int” and “ext”, defined to be a rectangle of area ℓ×σ immediately abutting the wall on the interior and exterior, respectively,we obtain numerical values for F_wall^(int) and F_wall^(ext). As shown in Fig. <ref>, this minimal model captures the correct magnitude of the force and also is in good agreement with its spatial range. § ACKNOWLEDGEMENTSThe authors thank Huiting Liu for helpful discussions.This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award Number DE-SC0022917. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231, and the Sherlock cluster, operated by Stanford University and the Stanford Research Computing Center. *Data and Code Availability: Code and input scripts for all simulations are available on GitHub .apsrev4-2§ MINIMAL MODEL OF THE OSCILLATORY REPULSIVE FORCE § ODD DIFFUSION RESULTS§.§ Odd Diffusion Scheme DetailsWe solve the steady state form of (<ref>) using a finite difference scheme for a system with two walls. Far from the walls, the density is assumed to be the bulk density ρ_0. Around the walls, Neumann boundary conditions are imposed by the fluxes, which take the formJ_x = - 𝖣_s∂ρ/∂ x + 𝖣_a∂ρ/∂ y , J_y = - 𝖣_a∂ρ/∂ x - 𝖣_s∂ρ/∂ y ,where the flux perpendicular to the wall is zero by no-flux boundary conditions, and the flux parallel to the wall will be an input based on simulation data. Under steady-state conditions,  (<ref>) becomes𝖣_s∇^2ρ = 0,which we solve by discretizing ρ and the differential operators on a grid with equal spacing h in the x and y directions, yielding ρ_i,j in the bulk by a central difference scheme𝖣_sρ_i+1,j + ρ_i-1,j + ρ_i,j+1 + ρ_i,j-1 - 4 ρ_i,j/h^2 = 0.The Neumann boundary conditions are enforced by introducing fictitious grid points at the surface of the walls. We discretize the derivatives in either J_x and J_y to fourth and second order in the directions perpendicular and parallel to the wall, which yield the expressions-𝖣_s-ρ_i+2,j+8ρ_i+1,j - 8ρ_i-1,j + ρ_i-2,j/12 h + 𝖣_aρ_i,j+1 - ρ_i,j-1/2h = 0,-𝖣_a-ρ_i+2,j+8ρ_i+1,j - 8ρ_i-1,j + ρ_i-2,j/12 h - 𝖣_sρ_i,j+1 - ρ_i,j-1/2h = J_y ,for the areas to the left and right of the walls, with analogous expressions for the areas above and below the walls. It is found that expanding the perpendicular direction derivative to fourth order for both J_x and J_y leads to a singular matrix, and hence we expand to fourth order for only one of the fluxes. 
The fictitious nodes are eliminated by solving for them in terms of the equations given by fluxes and inputting the subsequent values into the governing equation (<ref>), in which the derivative in the direction to the wall is also expanded to fourth order, to yield a linear system of equations that can be solved for ρ_i,j. In the case of nodes on the left side of the walls in which we expand the fluxes parallel to the walls, J_y, to fourth order and the fluxes perpendicular to the walls, J_x, to second order, the fictitious nodes are given by-𝖣_sρ_i+1,j - ρ_i-1,j/2 h + 𝖣_aρ_i,j+1 - ρ_i,j-1/2h = 0,-𝖣_a-ρ_i+2,j+8ρ_i+1,j - 8ρ_i-1,j + ρ_i-2,j/12 h - 𝖣_sρ_i,j+1 - ρ_i,j-1/2h = J_y ,which yields for the fictitious nodes ρ_i+1,j and ρ_i+2,jρ_i+1,j = ρ_i-1,j + 𝖣_a/𝖣_s( ρ_i,j+1 - ρ_i,j-1),ρ_i+2,j = 12 h/𝖣_a J_y + ρ_i-2,j + (6 𝖣_s/𝖣_a + 8 𝖣_a/𝖣_s) ( ρ_i,j+1 - ρ_i,j-1),and upon input to the governing equation, we obtain𝖣_s h^-2 [ ( ρ_i,j-1 - 2 ρ_i,j + ρ_i,j+1) + ( -1/12ρ_i-2,j + 4/3ρ_i-1,j - 5/2ρ_i,j + 4/3ρ_i+1,j - 1/12ρ_i+2,j) ] = -J_y/h 𝖣_a - 1/6 h^2ρ_i-2,j + 8/3 h^2ρ_i-1,j + 6+3 𝖣_s/𝖣_a-4 𝖣_a/𝖣_s/6 h^2ρ_i,j-1 - 9/2 h^2ρ_i,j + 6-3 𝖣_s/𝖣_a+4 𝖣_a/𝖣_s/6 h^2ρ_i,j+1 = 0.A similar procedure is followed for the nodes on the right, top, and bottom sides of the walls. Rewriting (<ref>) slightly, we obtain for the governing equations around the wallsLeft: 𝖣_s 𝖣_aρ_i-2,j-16 𝖣_s𝖣_aρ_i-1,j+(-6 𝖣_s𝖣_a-3 𝖣_s^2+4 𝖣_a^2) ρ_i,j-1+27 𝖣_s𝖣_aρ_i,j+(-6 𝖣_s𝖣_a+3 𝖣_s^2-4 𝖣_a^2) ρ_i,j+1+6 h 𝖣_s J_y = 0, Right:-(6 𝖣_s𝖣_a-3 𝖣_s^2+4 𝖣_a^2) ρ_i,j-1 + 27 𝖣_s𝖣_aρ_i,j-16 𝖣_s𝖣_aρ_i+1,j+𝖣_s𝖣_aρ_i+2,j-(6 𝖣_s𝖣_a+3 𝖣_s^2-4 𝖣_a^2) ρ_i,j+1-6 h 𝖣_s J_y=0, Bottom: -(6 𝖣_s𝖣_a-3 𝖣_s^2+4 𝖣_a^2) ρ_i-1,j+𝖣_s𝖣_aρ_i,j-2-16 𝖣_s𝖣_aρ_i,j-1+27 𝖣_s𝖣_aρ_i,j-(6 𝖣_s𝖣_a+3 𝖣_s^2-4 𝖣_a^2) ρ_i+1,j-6 h 𝖣_s J_x=0, Top: (-6 𝖣_s𝖣_a-3 𝖣_s^2+4 𝖣_a^2) ρ_i-1,j + 27 𝖣_s𝖣_aρ_i,j +(-6 𝖣_s𝖣_a+3 𝖣_s^2-4 𝖣_a^2) ρ_i+1,j-16 𝖣_s𝖣_aρ_i,j+1+𝖣_s𝖣_aρ_i,j+2+6 h 𝖣_s J_x=0. Alternatively, one can expand the fluxes parallel to the walls to second order and the fluxes perpendicular to the walls to fourth order. 
In this case the fictitious nodes on the left side of the walls are given by-𝖣_s-ρ_i+2,j+8ρ_i+1,j - 8ρ_i-1,j + ρ_i-2,j/12 h + 𝖣_aρ_i,j+1 - ρ_i,j-1/2h = 0,-𝖣_aρ_i+1,j - ρ_i-1,j/2 h - 𝖣_sρ_i,j+1 - ρ_i,j-1/2h = J_y ,which yields for the fictitious nodes ρ_i+1,j and ρ_i+2,jρ_i+1,j = -2 h/𝖣_a J_y + ρ_i-1,j + 𝖣_s/𝖣_a( ρ_i,j+1 - ρ_i,j-1),ρ_i+2,j = -16 h/𝖣_a J_y + ρ_i-2,j - (8 𝖣_s/𝖣_a + 6 𝖣_a/𝖣_s) ( ρ_i,j+1 - ρ_i,j-1),and we obtain for the governing equation𝖣_s h^-2 [ ( ρ_i,j-1 - 2 ρ_i,j + ρ_i,j+1) + ( -1/12ρ_i-2,j + 4/3ρ_i-1,j - 5/2ρ_i,j + 4/3ρ_i+1,j - 1/12ρ_i+2,j) ] = -4 J_y/3 h 𝖣_a - 1/6 h^2ρ_i-2,j + 8/3 h^2ρ_i-1,j + 6+4 𝖣_s/𝖣_a-3 𝖣_a/𝖣_s/6 h^2ρ_i,j-1 - 9/2 h^2ρ_i,j + 6-4 𝖣_s/𝖣_a+3 𝖣_a/𝖣_s/6 h^2ρ_i,j+1 = 0.A similar procedure can be followed for the nodes on the right, top, and bottom sides of the walls, and upon rewriting (<ref>) we obtain for the governing equations around the wallsLeft:𝖣_s𝖣_aρ_i-2,j -16 𝖣_s𝖣_aρ_i-1,j +(-6 𝖣_s𝖣_a-4 𝖣_s^2+3 𝖣_a^2) ρ_i,j-1 +27 𝖣_s𝖣_aρ_i,j +(-6 𝖣_s𝖣_a+4 𝖣_s^2-3 𝖣_a^2) ρ_i,j+1 + 8 h 𝖣_s J_y = 0, Right:-(6 𝖣_s𝖣_a-4 𝖣_s^2+3 𝖣_a^2) ρ_i,j-1 + 27 𝖣_s𝖣_aρ_i,j - 16 𝖣_s𝖣_aρ_i+1,j + 𝖣_s𝖣_aρ_i+2,j- (6 𝖣_s𝖣_a+4 𝖣_s^2-3 𝖣_a^2) ρ_i,j+1 - 8 h 𝖣_s J_y = 0, Bottom:-(6 𝖣_s𝖣_a-4 𝖣_s^2+3 𝖣_a^2) ρ_i-1,j + 𝖣_s𝖣_aρ_i,j-2 - 16 𝖣_s𝖣_aρ_i,j-1 + 27 𝖣_s𝖣_aρ_i,j - (6 𝖣_s𝖣_a+4 𝖣_s^2-3 𝖣_a^2) ρ_i+1,j - 8 h 𝖣_s J_x = 0, Top:(-6 𝖣_s𝖣_a-4 𝖣_s^2+3 𝖣_a^2) ρ_i-1,j + 27 𝖣_s𝖣_aρ_i,j + (-6 𝖣_s𝖣_a+4 𝖣_s^2-3 𝖣_a^2) ρ_i+1,j - 16 𝖣_s𝖣_aρ_i,j+1 + 𝖣_s𝖣_aρ_i,j+2 + 8 h 𝖣_s J_x = 0.(<ref>)–(<ref>) differ from (<ref>)–(<ref>) by the coefficients of the terms perpendicular to the walls and the flux terms, leading to slight differences in the numerical results. The equivalent of the results shown in the main text, with Fig. <ref> demonstrating the density profiles and concentration differences and Fig. <ref> demonstrating the forces, are shown for the fourth order and second order in the parallel and perpendicular directions scheme in Figs. <ref>–<ref>. Figs. <ref> and <ref> have similar concentration differences, with the values in Fig. <ref> shifted to lower values of 𝖣_ s / 𝖣_ a relative to Fig. <ref>. The measured diffusion constants from simulation, shown in Fig. <ref>, indicate that the second and fourth order in the parallel and perpendicular directions scheme is more representative of the simulated CAPs system. Similarly, the force profile computed using the second and fourth order in the parallel and perpendicular directions scheme has a higher agreement with the simulated data than the other scheme. We also compute concentration differences between the internal and external regions of the walls for these parameters for both schemes in Fig. <ref>.As mentioned, the fluxes perpendicular to the walls are zero by no-flux boundary conditions. The fluxes parallel to the walls are measured in simulation. To do so, the flux field with respect to the solvent is evaluated using Gaussian kernels per Sec. <ref>. To yield the flux parallel to the walls, we average over the flux values at distances b/2 ( 1 + sin ( π/3 ) ) ± 0.4 perpendicularly away from the walls, where b is the hexatic lattice spacing for a given activity <cit.>. The difference between the external and internal densities and the ratio of the internal and external fluxes computed using this scheme are shown in Fig. <ref>. These measured profiles follow a sawtooth pattern, resulting in the patterns observed in the concentration and force profiles obtained from the finite difference schemes. 
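The coefficients 𝖣_s and 𝖣_a that enter these schemes are obtained from the Green-Kubo relations quoted in the main text. A minimal sketch of that estimate from a sampled velocity time series is given below; the FFT-based autocorrelation, the averaging over components, and the cutoff t_max are our own discretisation choices (prefactor conventions differ between references), with t_max meant to exceed the velocity decorrelation time.

import numpy as np

def time_correlation(a, b):
    # <a(t) b(0)> for a stationary series, via the Wiener-Khinchin theorem.
    n = len(a)
    fa = np.fft.rfft(a - a.mean(), n=2 * n)
    fb = np.fft.rfft(b - b.mean(), n=2 * n)
    corr = np.fft.irfft(fa * np.conj(fb))[:n]
    return corr / np.arange(n, 0, -1)          # divide by the number of overlapping samples

def odd_diffusion_coefficients(vx, vy, dt, t_max):
    # Green-Kubo estimates: D_s from the symmetric velocity autocorrelations and
    # D_a from the antisymmetric cross-correlations, integrated up to t_max.
    n_max = int(t_max / dt)
    t = dt * np.arange(n_max)
    cxx = time_correlation(vx, vx)[:n_max]
    cyy = time_correlation(vy, vy)[:n_max]
    cxy = time_correlation(vx, vy)[:n_max]
    cyx = time_correlation(vy, vx)[:n_max]
    D_s = 0.5 * np.trapz(0.5 * (cxx + cyy), t)
    D_a = -0.5 * np.trapz(0.5 * (cxy - cyx), t)
    return D_s, D_a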
§ METHODS §.§ System and integration scheme For completeness, we repeat and combine the simulation methodology presented in Secs. <ref> and <ref>. The solvent and solutes evolve under overdamped Langevin dynamics using HOOMD-blue for a two-dimensional system <cit.>. The solvent is modeled as chiral active particles (CAP), in which the ith solvent particle evolves as <cit.>_i ( t ) = ξ^-1 [_i + ν_i ( t )] + √(2D_t)_i,_i ( t )= [cosθ_i ( t ), sinθ_i ( t )]^⊤,θ̇_i ( t ) = ω + √(2D_r)Γ_i ,where _i denotes the position of the ith particle, ξ is the translational drag coefficient, _i is the force,_i denotes the direction of its active velocity, ν is the magnitude of the active force, ω is the active torque, and D_t and D_r are the translational and rotational diffusion constants that are related by the formula D_r = 3σ^-2 D_t. Here, _i and Γ_i are independent Gaussian white noises with zero mean and unit variance. The particles interact with the Weeks-Chandler-Anderson (WCA) potential <cit.>, given by the sum U = ∑_i≠ j u(l_ij), where l_ij = | _i - _j | is the distance between particles i and j. The form of u(l_ij) is given byu ( l_ij ) = 4 ϵ_ij[ ( σ_ij/l_ij)^12 - ( σ_ij/l_ij)^6 + 1/4] θ( 2^1/6 - l_ij/σ_ij),where ϵ_ij and σ_ij are the energy and length scales set by the particle types, respectively, and θ is the Heaviside function. The corresponding force is _i = - U(t)_i. The solutes are represented as rigid bodies composed of particles interacting under the WCA potential with other particles not in the same rigid body <cit.>. Following Ref. <cit.>, a rigid body b composed of N_b internal particles has its internal particles indexed by B_bk = [ B_b1, …, B_bN_b], with the overall rigid body position and quaternion given by _b and _b, respectively. Evaluating the net force and torque on the bth rigid body per_b= ∑_i ∈ B_bk_i,_b= ∑_i ∈ B_bk( _i - _b ) ×_i,the equations of motion for the rigid body are given by_b ( t ) = ξ_b^-1_b + √(2D_t,b)_b ,_b ( t ) = 0.5 (ξ_b,r^-1_b + √(2 D_r,b)_b _b_b^-1) _b,where ξ_b and ξ_b,r are the translational and rotational drag coefficients, respectively, D_t,b and D_r,b are the translational and rotational diffusion constants, respectively, and _b and _b are independent Gaussian white noises with zero mean and unit variance. As the system is two-dimensional, only the z-component of the torque and _b are non-zero. The relations between the drag coefficients and diffusion constants are given by the Stokes-Einstein and Stokes-Einstein-Debye relations in which for the solvent one hasD_t= ξ^-1 , D_r= ξ_r^-1 = 3 σ_H^-2 D_t,where r is the hydrodynamic radius of the particles, and k_B and T are the Boltzmann constant and temperature, respectively. Analogous relations hold for the rigid body solutes. We choose the diffusion constants for the solvent and solutes to be equal with D_t = D_t,b = 1, = 1, and ϵ_ij = 40 and σ_ij = 1 unless otherwise specified. §.§ Local densitiesTo obtain the local densities of passive particles, ρ_P (Figs. <ref>, <ref>, and  <ref>–<ref>), a Voronoi tesselation is obtained on the positions of the passive particles via Freud <cit.>. The local density of a passive rigid body particle is then given by ρ_P = N_P / A_V, where N_P is the number of particles in the rigid body and A_V is the area of the associated Voronoi cells. For results involving passive circle and sphere particles, the local density is instead computed per ρ_P = V_P / A_V, where V_P is the volume of the particle. 
In both cases this rescaling of ρ_P is done to keep the axes comparable between different systems and sizes of passive particles. §.§ Force and effective free energyThe force on a fixed wall is obtained by summing over the forces from the solvent interacting with the wall. Denoting the force on a wall as _bp, the force on the wall is given by_bp = ∑_i ∈ B_bpk∑_j ∈ N_A - u ( l_ij )_ij .Denoting the force on the left and right walls by _L and _R, the effective force in the x direction is given byF = 1/2( F_R,x - F_L,x),which has the convention per Sec. <ref> that F > 0 corresponds repulsion and F < 0 corresponds to attraction.The effective free energy, taken as the logarithm of the radial distribution function - ln g ( r ), is obtained through simulating two passive walls. A tether potential of the form <cit.>U_tether ( r ) = k_tetherexp ( 1 / (l_0-r))/l_max-r, ifr > l_00, otherwiseis used to constrain the walls below a cutoff distance l_max.The radial distribution function is then obtained in the standard manner by binning the distances between the walls obtained in simulation and normalizing relative to the ideal gas distribution <cit.>. §.§ Gaussian kernelA Gaussian kernel is used to convert per-particle quantities into field quantities using Freud <cit.>. The Gaussian kernel is given by, for a quantity k,k () = C^-1∑_i ∈Nb k_iexp( - (-_i)^2/2 σ^2),where the summation denotes the particles aboutwithin a cutoff distance r_max. The normalization constant C in 2D is evaluated asC = ∫_0^r_max 2 π r exp( - r^2/2 σ^2) = 2 πσ^2( 1 - exp ( -r_max^2/2 σ^2 ) ).We take r_max = 0.5 and σ = 1 to yield statistics corresponding to local regions corresponding to single particles. The kernel is evaluated on a grid cell with σ/15 spacing.This procedure is used to evaluate the density, flux, and orientation fields. The density field ρ () is evaluated by setting k_i = 1. The flux field is evaluated per _i = _i + ν_i. For the orientation field, as the angle θ is a periodic variable, θ () is evaluated through a circular mean. This is done by evaluating cosθ () and sinθ () through k_i = cosθ_i and k_i = sinθ_i, respectively, and then obtain the orientation field asθ () = ( sinθ (), cosθ () ).§.§ Density and hexatic order parameter between wallsThe local density is obtained by enumerating the number of particles between the walls and dividing by the free area between the walls. The hexatic order parameter of a particle is obtained through Freud <cit.> by evaluating (<ref>). In the case of there being no particles between the walls, the hexatic order parameter is taken to be zero. §.§ Constraining the systemTo perform simulations of moving walls with fixed orientation and y-positions, the orientation is fixed by setting the rotational drag coefficient to be numerically infinite, and the y-position is reset to the initial y-position after each time step. §.§ Glass modelWe modify a model used in the simulation of glass formers due to the model's resistance to crystallization <cit.>. In this model, values of σ_i are drawn from a discrete probability distribution P ( σ ) = A σ^-3 with 51 bins between σ_min = 0.6 and σ_max = 2.29, with the value of σ_min set to a predetermined value and σ_max set to allow the particles to have the same volume fraction as a monodisperse system with σ = 1 and ρ = 0.4. The value of A is set to ensure that the distribution is normalized. The value of ϵ_ij is set to 40, and the value of σ_ij is set to the arithmetic mean of σ_i and σ_j. 
The WCA potential of (<ref>) is used to model the interaction between particles. This model differs from the original glass model of Refs. <cit.> in the form of the potential and in the mixing rule for σ_ij, and in that the polydispersity is drawn from a discrete distribution rather than a continuous one. § OTHER SUPPLEMENTARY DATA
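For readers who want a minimal reference implementation of the overdamped chiral-active-particle dynamics described in the Methods, the sketch below integrates the position and orientation equations with an Euler-Maruyama step and an O(N^2) WCA pair force. It is a schematic stand-in for the HOOMD-blue simulations (no cell lists, no rigid passive solutes, minimum-image periodic boundaries only), and the parameter values are placeholders.

import numpy as np

def wca_forces(pos, box, eps=40.0, sigma=1.0):
    # Pairwise WCA forces with minimum-image periodic boundaries (O(N^2), for clarity only).
    n = len(pos)
    forces = np.zeros_like(pos)
    cutoff2 = (2.0 ** (1.0 / 6.0) * sigma) ** 2
    for i in range(n - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)
        r2 = np.sum(dr * dr, axis=1)
        mask = r2 < cutoff2
        inv6 = (sigma ** 2 / r2[mask]) ** 3
        fmag = 24.0 * eps * (2.0 * inv6 ** 2 - inv6) / r2[mask]   # repulsive within the cutoff
        fij = fmag[:, None] * dr[mask]                            # force on particle j, away from i
        forces[i] -= fij.sum(axis=0)
        forces[i + 1:][mask] += fij
    return forces

def step_caps(pos, theta, box, dt, nu=40.0, omega=5.0, D_t=1.0, sigma=1.0, rng=np.random):
    # One Euler-Maruyama step of the overdamped chiral active particle dynamics,
    # with xi = 1/D_t (k_B T = 1) and D_r = 3 D_t / sigma^2 as in the Methods.
    D_r = 3.0 * D_t / sigma ** 2
    xi = 1.0 / D_t
    e = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    F = wca_forces(pos, box, sigma=sigma)
    pos = pos + dt * (F + nu * e) / xi + np.sqrt(2.0 * D_t * dt) * rng.standard_normal(pos.shape)
    theta = theta + dt * omega + np.sqrt(2.0 * D_r * dt) * rng.standard_normal(theta.shape)
    return pos % box, theta

A production run would add neighbor lists and the rigid passive solutes, but the update above follows the single-particle equations of motion given in the Methods.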
http://arxiv.org/abs/2310.17763v1
{ "authors": [ "Clay H. Batton", "Grant M. Rotskoff" ], "categories": [ "cond-mat.soft", "cond-mat.stat-mech" ], "primary_category": "cond-mat.soft", "published": "20231026201233", "title": "Microscopic origin of tunable assembly forces in chiral active environments" }
http://arxiv.org/abs/2311.16124v2
{ "authors": [ "Mintong Kang", "Dawn Song", "Bo Li" ], "categories": [ "cs.CR", "cs.AI" ], "primary_category": "cs.CR", "published": "20231027151750", "title": "DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification" }
Ancillary Services in Power System Transition Toward a 100% Non-Fossil Future: Market Design Challenges in the United States and Europe January 14, 2024 ============================================================================================================================================= Recently, data augmentation (DA) has emerged as a method for leveraging domain knowledge to inexpensively generate additional data in reinforcement learning (RL) tasks, often yielding substantial improvements in data efficiency. While prior work has demonstrated the utility of incorporating augmented data directly into model-free RL updates, it is not well-understood when a particular DA strategy will improve data efficiency. In this paper, we seek to identify general aspects of DA responsible for observed learning improvements. Our study focuses on sparse-reward tasks with dynamics-invariant data augmentation functions, serving as an initial step towards a more general understanding of DA and its integration into RL training. Experimentally, we isolate three relevant aspects of DA: state-action coverage, reward density, and the number of augmented transitions generated per update (the augmented replay ratio). From our experiments, we draw two conclusions: (1) increasing state-action coverage often has a much greater impact on data efficiency than increasing reward density, and (2) decreasing the augmented replay ratio substantially improves data efficiency. In fact, certain tasks in our empirical study are solvable only when the replay ratio is sufficiently low. § INTRODUCTION [Figure with panels "Observed", "Translate", "Rotate", and "Training curves".] Visualizations of two augmentations – translation (<ref>) and rotation (<ref>) – for a 2D navigation task in which an agent (black dot) must reach a goal (gold star). In <ref>, “xN policy data” corresponds to collecting N times as many transitions with the agent's current policy between updates, and “x2 via rotate/translate” corresponds to generating one augmented transition per observed transition. We increase the batch size and replay buffer sizes proportionally to the amount of extra data to keep the replay ratio and replay age fixed across all experiments. We plot the interquartile mean success rate over 50 seeds with 95% bootstrapped confidence belts.
Data augmentation (DA) is a technique where agents generate additional synthetic experience by transforming their observed experience.Since augmented data can be generated without the expense of additional interactions with the environment, it is an attractive technique for improving the data efficiency of RL algorithms (i.e., the number of environment interactions needed to solve a task).Much of the prior work in DA for RL <cit.> builds off of DA techniques used in computer vision (e.g., <cit.>).Other works have used domain-dependent DA strategies for non-visual tasks <cit.>, including DeepMind's AlphaTensor <cit.> which uses RL to discover more efficient matrix multiplications.These works introduce methods for generating augmented data and frameworks for integrating it into RL that demonstrably improve training performance.To the best of our knowledge, most prior work on DA has focused on introducing new types of data augmentation functions and demonstrating that they can boost the data efficiency of RL.What is missing from the literature is a clear understanding of which aspects of DA yield improvements. Rather than adding to existing work by introducing new DA strategies, our main contribution is an investigation into the following question: When and why does data augmentation improve data efficiency in reinforcement learning?DA has taken many forms in the RL literature <cit.>, and a comprehensive analysis of different DA frameworks, tasks, and data augmentation functions is beyond the scope of a single study. Thus, in this work, we instead aim to better understand the benefits of integrating dynamics-invariant augmented transitions directly into model-free off-policy RL updates.With this focus in mind, we must leave studies on DA frameworks with auxiliary tasks <cit.>, and studies on data augmentation functions that generate unrealistic data – such as visual data augmentations <cit.> – for future work.As a motivating example, consider a 2D navigation task (Fig. <ref>) in which an agent must reach a random goal position. In this task, transitions observed by the agent can be augmented through either random translations of the agent (Fig. <ref>) or random rotations of the agent and goal (Fig. <ref>).As shown in Fig. <ref>, if we double the agent's learning data via DA and double the batch size used for updates, we achieve significant improvements in data efficiency compared to learning without DA. Furthermore, agents that learn from extra augmented data even surpass the performance of agents that learn from an equal amount of extra real data collected through additional environment interactions.More concretely, we double the amount of policy data collected between updates and double the batch size used for updates so that non-augmented agents learn from the same amount of data and perform the same number of updates as the augmented agents.As shown in Fig. 
<ref>, additional augmented data leads to faster learning than simply collecting an equal amount of additional data from the agent's policy. In fact, doubling the learning data via the translation augmentation is nearly as good as learning from 8 times as much policy-generated data. Thus, these augmentations must offer benefits beyond what additional policy-generated data can offer. An understanding of which aspects of DA yield these benefits will serve as an initial step towards guiding practitioners on how to more effectively incorporate DA into RL. Our investigation focuses on three aspects of DA that we hypothesize influence learning in sparse-reward tasks: the amount of state-action coverage generated by DA, the amount of additional reward signal generated by DA (reward density), and the number of augmented transitions generated per update (augmented replay ratio). State-action coverage and reward density relate to how DA affects the distribution of the agent's learning data, whereas the augmented replay ratio relates to how augmented data is incorporated into RL training. We empirically ablate the effects of these factors using a simple and controllable DA framework similar to frameworks found in existing work <cit.>. In summary, our contributions are: * We introduce a framework for studying DA in RL that is amenable to analysis. * While it is widely understood that high state-action coverage and discovery of reward signal are critical to data efficient RL, our experiments show that increasing state-action coverage via DA often has a much greater impact on data efficiency than increasing reward density. * We show that the success of a DA function depends strongly on the augmented replay ratio. In fact, certain tasks in our empirical study are solvable only when the augmented replay ratio is sufficiently low. § RELATED WORK In this section, we provide an overview of data augmentation techniques and applications in RL. Dynamics-based Augmentation: Several prior works use data augmentation functions that affect the agent's current state, action, and next state. <cit.> stitch together locally independent features of different transitions to generate additional data and provide a method for identifying local independence. Many model-based algorithms learn from synthetic data generated by a learned dynamics model and can be viewed as DA methods <cit.>. Hindsight Experience Replay: In goal-conditioned RL, Hindsight Experience Replay (HER) <cit.> counter-factually relabels the goal of a trajectory to generate additional data. This technique can be applied when transition dynamics are independent of the agent's goal, as is often the case. Follow-up work on HER has demonstrated that hindsight bias caused by changing the distribution of observed goals may hinder learning <cit.>. Applications of Domain-Specific Data Augmentation: Several recent works have leveraged domain knowledge to create new data augmentation functions. <cit.> and <cit.> apply DA to locomotion problems in which an optimal policy has a symmetric gait, and <cit.> focus on augmenting trajectories of poses and movable objects relevant in robot manipulation. <cit.> consider DA in the context of differentiable simulation to generate additional approximately correct transitions (a method they refer to as sample enhancement). DeepMind's AlphaTensor <cit.> exploits two invariances: tensor decompositions are commutative, and tensor rank is invariant to the ordering of rows and columns.
They exploit commutativity by generating additional augmented transitions and rank invariance using a network that disregards the row and column ordering of input tensors.
State-based Augmentation: Much of the prior work in DA for RL focuses on augmenting visual observations <cit.>, leveraging the successes of DA in computer vision. <cit.> train RL agents on multiple views of visual states (crops, recolorations, rotations, etc.). <cit.> introduce regularizers to ensure an agent's policy and value function are both invariant under augmentation. <cit.> ensure small perturbations of non-visual states have similar state-action values. This line of work relates to domain randomization <cit.>, as agents are trained to be robust to randomized augmentations of observations. <cit.> learn a state representation that is invariant under augmentation rather than directly using the augmented data for policy optimization. <cit.> identify sources of instability when performing visual DA. Our work focuses on augmentations that respect the environment's dynamics; these vision-based augmentations typically generate unrealistic data and are thus beyond the scope of our study.
Invariant Model Architectures: DA often – though not always – exploits known invariances within the environment's state space and/or dynamics. In this case, an alternative to DA is to simply hard-code these invariances into the agent's policy model <cit.>. Residual Pathway Priors (RPPs) <cit.> capture invariances using a soft prior, biasing agents toward invariant policies without constraining them. While these prior works focus on developing augmentation functions or methods for incorporating augmented data into RL training, our work introduces a framework to investigate when and why DA improves learning.
§ PRELIMINARIES
In this section, we formalize the RL setting and the class of data augmentation functions we use.
§.§ Reinforcement Learning
We consider finite-horizon Markov decision processes (MDPs) <cit.> defined by (S, A, p, r, d_0, γ), where S and A denote the state and action space, respectively, p(s' | s, a) denotes the probability density of the next state s' after taking action a in state s, and r(s, a) denotes the reward for taking action a in state s. We write d_0 as the initial state distribution, γ ∈ [0, 1) as the discount factor, and H as the length of an episode. We consider stochastic policies π_θ : S × A → [0,1] parameterized by θ. The RL objective is to find a policy that maximizes the expected sum of discounted rewards J(θ) = 𝔼_π_θ, s_0 ∼ d_0[∑_t=0^H γ^t r(s_t, a_t)].
§.§ Data Augmentation Functions
In the literature, data augmentation functions (DAFs) have taken different forms and served different purposes. We introduce a few important definitions to help classify the DAFs we focus on. A transition (s, a, r, s') is valid if it is possible under the transition dynamics and reward function, i.e., p(s' | s, a) > 0 and r = r(s, a). Let T ⊂ S × A × ℝ × S denote the set of possible transitions, and let Δ(T) denote the set of distributions over T. A data augmentation function (DAF) is a stochastic function f: T → Δ(T) mapping a transition (s, a, r, s') to an augmented transition (s̃, ã, r̃, s̃'). A DAF is dynamics-invariant if it is closed under valid transitions and if f respects the stochasticity of p, i.e., if (s̃, ã, s̃') is obtained from f applied to (s, a, s'), then p(s̃' | s̃, ã) = p(s' | s, a).
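To make the definition concrete, the following sketch shows what a dynamics-invariant DAF can look like for a simple 2D goal-reaching task of the kind used in our experiments. The state layout (agent position followed by goal position), the action parameterization, and the function signature are illustrative assumptions, not part of any released implementation.

import numpy as np

def rotate_daf(s, a, r, s_next, rng):
    # Illustrative dynamics-invariant DAF: rotate a 2D goal-reaching
    # transition by a random multiple of 90 degrees about the origin.
    # Assumed layout: s = (x, y, x_g, y_g); a = (magnitude, heading).
    phi = rng.choice([np.pi / 2, np.pi, 3 * np.pi / 2])
    c, s_phi = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s_phi], [s_phi, c]])

    def rotate_state(state):
        agent, goal = np.asarray(state[:2]), np.asarray(state[2:])
        return np.concatenate([R @ agent, R @ goal])

    s_aug = rotate_state(s)
    s_next_aug = rotate_state(s_next)
    a_aug = np.array([a[0], (a[1] + phi) % (2 * np.pi)])  # heading shifts with the rotated frame

    # The dynamics are rotation-equivariant and the reward depends only on
    # the agent-goal distance, so the augmented transition is valid and has
    # the same likelihood under p as the original transition.
    return s_aug, a_aug, r, s_next_aug

Because both the transition function and the reward are preserved under the rotation, the augmented transition satisfies the validity and dynamics-invariance conditions above.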
We focus on dynamics-invariant DAFs so that augmentation preserves the underlying MDP. This focus includes methods such as HER <cit.> and CoDA <cit.>, which provide domain-independent DAFs. However, we do not restrict ourselves to domain-independent DAFs as, in practice, domain experts may be able to produce dynamics-invariant DAFs even though they cannot identify an optimal domain policy <cit.>. Our focus does exclude some recent works on DA – especially those focusing on visual augmentations <cit.> – that produce augmented states that would never be observed in simulation, such that p(s' | s, a) = 0, since these augmentations do not satisfy our definition of valid. We elaborate on the widespread applicability of dynamics-invariant DAFs in Appendix <ref>.
§ A FRAMEWORK FOR STUDYING DATA AUGMENTATION IN RL
Prior work on DA in RL not only considers different DAFs but also various methods for integrating the augmented data into RL algorithms. To focus our study, we introduce a specific framework (Algorithm <ref>) for incorporating augmented data into the training loop of any off-policy RL algorithm. An off-policy algorithm is essential, as augmented data may not be distributed according to the state-action distribution of the current policy. We focus on the effects of integrating augmented data directly into policy and/or value function updates. Within our framework, the agent observes a transition, applies a given DAF f to that transition to generate some number of augmented transitions, and then stores the observed and augmented transitions in separate replay buffers – the observed and augmented replay buffers, respectively. When performing policy and/or value function updates, the agent samples data from both buffers and combines the data for the updates. To control how much augmented data we generate and use in each update, we introduce a few parameters into the framework.
The augmentation ratio, m, specifies the number of augmented transitions generated per observed transition. Some DAFs, such as the translation augmentation in Fig. <ref>, can produce multiple unique augmentations from the same input transition. Each time the agent observes one real transition, m augmented transitions are sampled from the DAF. We use this parameter to study whether it is beneficial to produce multiple augmentations to diversify the augmented replay buffer. When the augmentation ratio is increased, we increase the augmented replay buffer size proportionally such that the age of the oldest augmented transitions remains fixed.
The update ratio, α, denotes the ratio of augmented to observed data used for updates, e.g., α = 1 denotes that half of the data used for each update is augmented data. With access to large amounts of augmented data, it may be beneficial to increase the amount of augmented data used in updates. However, a large update ratio can exacerbate the tandem effect <cit.>, a decrease in performance when learning predominantly from data not collected by the agent.
The augmentation ratio modulates a third relevant quantity: the number of updates per augmented transition generated, or augmented replay ratio. [Prior works <cit.> define the replay ratio as the number of updates per environment interaction, characterizing how much the agent learns from existing data versus new experience. However, augmented data is generated by a DAF, which can produce multiple augmentations per observed transition. Thus, for our analysis, the number of updates per augmented transition generated is a more appropriate metric.
] In the absence of DA, <cit.> found that decreasing the replay ratio of observed data (the observed replay ratio) can improve data efficiency, though other works have improved data efficiency by developing techniques that enable learning with large replay ratios <cit.>. One can decrease the augmented replay ratio by increasing the augmentation ratio (m) while keeping the frequency of policy and/or value function updates fixed.
Though a variety of methods exist for incorporating augmented data into RL, our framework offers several core benefits for our study:
* Control: One can easily control the replay ratio and update ratio. Having control over the update ratio is especially important to ensure the agent uses a sufficient amount of observed data in each update to reduce the tandem effect. Moreover, since augmented data is stored in a replay buffer rather than being sampled online, it is possible to keep the replay ratio of the augmented data equal to that of the observed data.
* Computational Efficiency: By storing and reusing augmented data, we improve computational efficiency. While some DAFs can produce multiple unique augmentations of the same input transition, many can only produce a single augmentation – such as a reflection – in which case it is more efficient to reuse augmented data rather than generate new samples online every update.
* Lower Systematic Variance: Each update uses the same ratio of augmented to observed data (i.e., the update ratio), and the same number of augmented transitions are generated for every observed transition, eliminating a possible source of training variation.
Our framework is similar to those used in CoDA <cit.>, RAD <cit.>, and HER <cit.>, as all three incorporate augmented data directly into updates without auxiliary tasks. We note that RAD as well as popular implementations <cit.> of HER generate augmented samples during updates and discard them after use, whereas we save augmented data for reuse. Since we focus on using augmented data for model-free updates, Algorithm <ref> is not intended to capture methods that use augmented data for auxiliary tasks but easily extends to include such methods <cit.>.
§ DISENTANGLING PROPERTIES OF DATA AUGMENTATION
In this section, we identify aspects of DA that we hypothesize may impact its effectiveness within our framework.
State-Action Coverage: DAFs can generate data that the current policy otherwise might not observe, increasing state-action coverage. Greater state-action coverage via DA may aid exploration. However, it may also generate data that is very off-policy with respect to the current policy and hence increase the variance of learning.
Reward Density: Long-horizon, sparse-reward tasks are notoriously difficult since an RL agent is unlikely to discover reward signal through random exploration. A DAF that can produce transitions with additional reward signal could improve data efficiency. However, it is also known that reward-generating DA strategies such as HER <cit.> can bias learning and lead to overestimation of state-action values <cit.>. For sparse-reward tasks, we define reward density as the fraction of transition data in both observed and augmented replay buffers that successfully solves the task and thus contains reward signal.[With dense rewards, one may need to consider the full distribution of rewards in the replay buffer instead.]
Augmented Replay Ratio: Some DAFs (such as the translation DAF in Fig.
<ref>) can generate multiple augmented transitions given a single input transition, substantially increasing the amount of data available to the agent. We hypothesize that it may be beneficial to generate as many augmented transitions as possible to lower the augmented replay ratio <cit.>.
We note that while it is widely understood that high coverage and discovery of reward signal are critical to solving sparse-reward RL tasks <cit.>, the degree to which increasing state-action coverage and reward density via DA individually contribute to data efficient RL is less clear. These factors are difficult to completely isolate; since the reward function r(s, a) depends on s and a, altering reward density necessarily changes state-action coverage. In our experiments, we attempt to isolate all three aspects of DA to determine how much each affects the benefit of DA.
§.§ Experiments
[Figure: PandaPush-v3 <cit.>. A robotic arm must push a block to a goal location.]
We focus our experiments on four sparse-reward, continuous action tasks <cit.>: PandaPush-v3, PandaSlide-v3, PandaPickAndPlace-v3, and PandaFlip-v3 (Fig. <ref>), which we henceforth refer to as the Push, Slide, PickAndPlace, and Flip tasks, respectively. We consider two DAFs:
* TranslateGoal: Relabel the goal with a new goal sampled uniformly at random from the goal distribution.
* TranslateGoalProximal(p): Relabel the goal with a new goal sampled from the goal distribution. With probability p, the new goal is sufficiently close to the object to generate reward signal.
We also consider a toy 2D navigation task, Goal2D-v0 (Fig. <ref>), in which an agent must reach a random goal. The agent receives reward +1 when it is sufficiently close to the goal and reward -0.1 otherwise. Agent and goal positions are initialized uniformly at random. We consider three DAFs:
* Translate: Translate the agent to a random position.
* Rotate: Rotate the agent and goal by θ ∈ {π/2, π, 3π/2}.
* TranslateProximal(p): Translate the agent to a random position. With probability p, the agent's new position is sufficiently close to the goal to generate reward signal.
These DAFs offer an avenue to investigate the role of reward density and state-action coverage in DA. For instance, one can modify reward density through p in TranslateGoalProximal(p). We include full descriptions of each environment and DAF in Appendices <ref> and <ref>, respectively. We use DDPG <cit.> for Panda tasks and TD3 <cit.> for Goal2D. Further training details are in Appendix <ref>. We include experiments studying the effect of all three factors on an agent's generalization ability in Appendix <ref> and experiments studying the augmented replay ratio for dense-reward MuJoCo tasks <cit.> in Appendix <ref>.
§.§.§ Benchmarking Data Augmentation
We first benchmark the performance of DA against simply collecting more policy data to establish how much our chosen DAFs help data efficiency. Prior work has demonstrated that learning with augmented data is often more data efficient than learning without it, though it is unclear how learning from augmented data compares to simply learning from additional policy-generated data, as policy-generated data and augmented data are distributed differently in general. In these experiments, we increase the available learning data using DA or by having agents collect more data with their current policy between updates. We label these agents according to how many additional environment interactions they perform, e.g.
“x2 policy data” corresponds to one extra environment interaction, and “x2 data via TranslateGoal” corresponds to generating one augmented transition per observed transition (m = 1). When collecting or generating extra data, we increase the batch size and replay buffer size proportionally so that all agents learn with the same amount of data, the same observed and augmented replay ratios, and the same replay age. Thus, augmented data and extra policy-generated data are treated equally in training.
From Fig. <ref>, we see that TranslateGoal offers significant improvement over no additional data. For instance, in Push, doubling the learning data via TranslateGoal doubles data efficiency. However, additional policy-generated data generally yields equal or better performance. Having established that TranslateGoal improves data efficiency, in the following sections we study the degree to which reward density and state-action coverage are responsible for these improvements.
§.§.§ State-Action Coverage
In this section, we consider how increasing state-action coverage via DA affects data efficiency. Since reward is a function of the agent's state and action, it is difficult to completely isolate the effects of increased coverage; a change in coverage affects reward density. Nevertheless, we can better understand the effect of increasing state-action coverage by comparing agents trained using Translate and TranslateGoal with agents trained using TranslateProximal(0) and TranslateGoalProximal(0) – DAFs that increase state-action coverage without providing additional reward signal to the agent. Early in training, when there is little to no reward signal present in the observed replay buffer, TranslateProximal(0) and TranslateGoalProximal(0) have little effect on reward density. As the policy learns and more reward signal is added to the observed replay buffer, the lack of reward signal in the augmented replay buffer reduces overall reward density. Thus, we can attribute any performance boost provided by TranslateProximal(0) and TranslateGoalProximal(0) to an increase in state-action coverage and/or a decrease in reward density.
To better separate the effects of increased state-action coverage and decreased reward density, we double the agent's training data using different ratios of augmented data to policy-generated data (1:5, 1:2, and 1:1). Agents trained with a smaller split of TranslateProximal(0)/TranslateGoalProximal(0) data have less coverage but also experience a smaller decrease in reward density. We report results for the ratios yielding the largest improvements to data efficiency (i.e., the ratio that best balances the increase in coverage with the decrease in reward density). As shown in Fig.
<ref>, TranslateGoalProximal(0) in Slide, PickAndPlace, and Flip is as data efficient as TranslateGoal; increased coverage alone explains the benefits of TranslateGoal in these tasks. In Goal2D and Push, TranslateProximal(0) and TranslateGoalProximal(0) are more data efficient than no DA but less data efficient than Translate and TranslateGoal. Thus, although increased state-action coverage is the primary benefit in most tasks, we see that increased reward density can also play a role. In the following section, we further disentangle state-action coverage and reward density to assess how critical high reward density is in these tasks.
§.§.§ Reward Density
We now strive to further disentangle state-action coverage from reward density by studying how changes to reward density affect learning. In the following experiment, we modify reward density by varying the probability p of generating reward signal with TranslateProximal(p) and TranslateGoalProximal(p) while keeping the update ratio and augmentation ratio fixed at α = 1 and m = 1, respectively. This setup does not completely isolate reward density from state-action coverage; changing reward density (increasing p) also affects state-action coverage, as it increases the amount of learning data in which the goal is near the object. Nevertheless, this experiment enables us to answer the following question: how critical is it that DA generates data with high reward density?
As shown in Fig. <ref>, changing p has little effect on data efficiency in Slide and Flip, further supporting that increased coverage is the primary benefit of TranslateGoal in these tasks. In Push and PickAndPlace, p = 0.05 is most data efficient[In Fig. <ref>, TranslateGoalProximal(0) achieves an IQM success rate of 0.3 (red dashed line), outperforming all agents for PickAndPlace in Fig. <ref>; high reward density augmented data is not critical in this task.], while in Goal2D, data efficiency increases significantly with p = 0.01, and increasing to p = 0.1 offers marginal additional improvement. In these three tasks, the largest p values decrease data efficiency, since changes to the distribution of reward signal can bias updates – similar to hindsight bias <cit.>. Since the most data efficient learning occurs when DA contributes no reward signal or a relatively small amount, we conclude that high reward density is not critical to successful DA. Collectively, our state-action coverage and reward density experiments suggest the following: DA should focus on increasing state-action coverage, not reward density. Thus, our results suggest that RL practitioners choosing among candidate DAFs in a given domain should focus on increasing state-action coverage more so than increasing reward density.
§.§.§ Augmented Replay Ratio
Our previous experiments study how two properties of DAFs affect RL, though performance may be sensitive to how augmented data is incorporated into RL training. In this section, we study how the number of updates per augmented transition generated affects data efficiency. Existing DA strategies often incorporate multiple augmentations of the same observed transition into policy optimization.
For instance, HER <cit.> generates 4 hindsight transitions per observed transition, and CoDA <cit.> generates up to 16 augmentations per observed transition.Generating additional augmentations decreases the augmented replay ratio, the number of updates per augmented transition generated.We hypothesize that decreasing the augmented replay ratio can provide a similar benefit to decreasing the replay ratio of observed data noted by <cit.>.Decreasing the replay ratio of observed data can be expensive – requiring more environment interactions between updates – while decreasing the augmented replay ratio is comparatively cheap. In this experiment, we decrease the augmented replay ratio β by generating more augmentations per observed transition (i.e., increasing m). We scale the augmented replay buffer size proportionally and keep the ratio of augmented to observed data used in updates fixed at α = 2.As shown in Fig. <ref>, a lower augmented replay ratio alone substantially improves data efficiency and overall performance across all panda tasks.Moreover, a low replay ratio is necessary to solve PickAndPlace within our training budget; 100% success rate can be achieved with β = 0.0625, while no learning occurs with β = 0.5. A lower replay ratio also increases data efficiency in Goal2D as well for both Translate and Rotate DAFs; due to space constraints, we include these figures in Appendix <ref>. Decreasing the replay ratio is a preferable alternative to increasing the amount of augmented data used in updates, as the latter may exacerbate the tandem effect <cit.> in which RL from passive data fails. We support this claim with additional experiments provided in Appendix <ref>. Empirically, we have demonstrated that a DAF's success may depend strongly on how we integrate its augmented data into training. To effectively apply DA, one must understand desirable properties of DAFs and relevant implementation details.§ CONCLUSIONS, LIMITATIONS, AND FUTURE WORKWhile prior work has demonstrated that incorporating augmented data in model-free off-policy reinforcement learning (RL) updates can substantially improve the data efficiency of RL algorithms,we lack a clear understanding of which aspects of data augmentation (DA) yield such improvements. In this paper, we isolated three aspects of DA in sparse reward RL tasks with dynamics-invariant data augmentation functions (DAFs) – state-action coverage,reward density, and the replay ratio of augmented data – to understand how each affects performance.Empirically, we showed how increasing state-action coverage often has a much greater impact on data efficiency than increasing reward density.Moreover, we demonstrated that the augmented replay ratio plays a significant role: certain tasks are unsolvable unless the replay ratio is sufficiently small.Our work has provided an initial study analyzing the benefits of DA.To better leverage DA, further work is needed in understanding (1) how other properties of DAFs influence RL training, such as relevancy  <cit.>, as well as (2) how relevant hyperparameters within an DA framework affect performance. Our analysis focused on relatively low-dimensional sparse reward tasks with continuous actions, and findings may be different for tasks with dense rewards, discrete actions, or high-dimensional visual observations.Moreover, we use an DA framework that treats augmented data as if it were observed data without auxiliary tasks. 
It would be beneficial to extend this analysis to frameworks that use augmented data for auxiliary tasks such as representation learning rather than – or in addition to – policy optimization <cit.>.
§ DYNAMICS-INVARIANT DATA AUGMENTATION FUNCTIONS
In this section, we further motivate our focus on dynamics-invariant data augmentation functions. Specifying a dynamics-invariant data augmentation function requires knowledge of domain-specific invariances or symmetries. While domain knowledge may seem like a limitation, we observe in the literature and real-world RL applications that such invariances and symmetries are incredibly common and often require very little prior knowledge to specify. We provide a few examples:
* Transition dynamics are often independent of the agent's goal state <cit.>.
* Objects often have independent dynamics if they are physically separated <cit.>, which implies that objects exhibit translational invariance conditioned on physical separation.
* Several works focus on rotational symmetry of 3D scenes in robotics tasks <cit.>, and many real-world robots are symmetric in design and thus have symmetries in their transition dynamics <cit.>.
We include real-world tasks that exhibit one or more of these invariances in Fig. <ref>. We choose to focus on dynamics-invariant data augmentations because they have already appeared so widely in the literature. As RL becomes an increasingly widely used tool, we anticipate that domain experts will be able to identify new domain-specific augmentations and use them to further lower the data requirements of RL. These observations underscore the importance of identifying when and why different general properties of data augmentation will benefit RL.
§ PRIMARY ENVIRONMENTS FOR OUR EMPIRICAL ANALYSIS
We use four tasks from <cit.> as the core environments in the main paper. Fig. <ref> shows renderings for each task.
* PandaPush-v3 (Push): The robot must push an object to a goal location on the table. The goal and initial object positions are sampled uniformly at random from (x, y) ∈ [-0.15, 0.15]^2, z = 0.02.
* PandaSlide-v3 (Slide): The robot must slide a puck to a goal location on the table. The initial object position is sampled uniformly at random from [-0.15, 0.15]^2, while the goal (x, y, z) is sampled from x ∈ [0.25, 0.55], y ∈ [-0.15, 0.15], z = 0.015.
* PandaPickAndPlace-v3 (PickAndPlace): The robot must pick up an object and move it to a goal location. With probability 0.3, the goal is on the table (z = 0.02), and with probability 0.7, the goal is in the air (z ∈ (0.02, 0.2]).
* PandaFlip-v3 (Flip): The robot must pick up an object and rotate it to a goal orientation. The initial object position is sampled uniformly at random from [-0.15, 0.15]^2, while the goal is a uniformly sampled orientation expressed as a quaternion.
In Push, PickAndPlace, and Flip, the object is a cube with side length 0.04. In Slide, the object is a cylindrical puck with height 0.03 and radius 0.03. The object's z coordinate measures the distance between the center of the object and the table (e.g., in Push, Slide, and PickAndPlace, z = 0.02 means the object is on the table). Push, Slide, and PickAndPlace share a similar sparse reward structure.
The agent receives a reward of 0 if the object is within 0.05 units of the goal and a reward of -1 otherwise. In Flip, the agent receives a reward of 0 if the object's orientation q is within 0.2 units of the goal orientation q_g under the following angle distance metric:
d(q, q_g) = 1 - (q · q_g)^2 = (1 - cos(θ))/2
where θ is the angle of rotation required to rotate q to q_g. Otherwise, the agent receives a reward of -1.
In the toy 2D navigation task Goal2D, an agent must reach a fixed goal within 100 timesteps. The agent's state (x, y, x_g, y_g) contains the coordinates of the agent's position (x, y) and the goal's position (x_g, y_g). At each timestep, the agent chooses an action (r, θ) and transitions to a new position:
x_t+1 = x_t + 0.05 r cos(θ), y_t+1 = y_t + 0.05 r sin(θ)
Thus, the agent moves at most 0.05 units in any direction. The goal position is fixed throughout an episode. The agent receives reward +1 when it is within 0.05 units of the goal and reward -0.1 otherwise. Agent and goal positions are initialized uniformly at random in [-1,+1]^2.
§ AUGMENTATION FUNCTIONS
In this section, we provide further details on the data augmentation functions introduced in Section <ref>.
* TranslateGoal: Goals are relabeled using a new goal sampled uniformly at random from the goal distribution. Reward signal is generated when the new goal is sufficiently close to the object's current position. To approximate the probability of this augmentation generating reward signal in each task, we sample 10M object and goal positions uniformly at random and report the empirical probability of the goal being sufficiently close to the object to generate reward signal.
* PandaPush-v3 (Push): Reward signal is generated with probability approximately 0.075.
* PandaSlide-v3 (Slide): The probability of generating reward signal depends on the current policy. The initial object and goal distributions are disjoint, so this augmentation can only generate reward signal if the agent pushes the object into the region x ∈ [0.25, 0.55], y ∈ [-0.15, 0.15]. If the object is in this region, this augmentation will generate reward signal with probability approximately 0.075.
* PandaPickAndPlace-v3 (PickAndPlace): Reward signal is generated with probability approximately 0.04.
* PandaFlip-v3 (Flip): Reward signal is generated with probability approximately 0.04.
* TranslateGoalProximal(p): Goals are relabeled using a new goal sampled from the goal distribution. With probability p, the new goal generates a reward signal, and with probability 1-p, no reward signal is generated. When generating an augmented sample with reward signal, the goal is set equal to the object's position plus a small amount of noise; with probability 1-p, the relabeled goal is instead placed at a random location sufficiently far from the object that no reward signal is generated.
All Panda augmentation functions relabel the goal and reward. For Goal2D, we consider three data augmentation functions:
* Translate: Translate the agent to a random position in [-1,+1]^2. This augmentation generates reward signal with probability approximately 0.019. We obtained this approximation by sampling 10M agent and goal positions uniformly at random and then computing the empirical probability of the goal being within 0.05 units of the agent.
* Rotate: Rotate both the agent and goal by θ ∈ {π/2, π, 3π/2}. When sampling multiple augmentations of the same observed transition, it is possible to sample duplicate augmentations.
* TranslateProximal(p): Translate the agent to a random position in [-1, +1]^2.
With probability p, the agent's new position is within 0.05 units of the goal and generates reward signal, and with probability 1-p, the agent's new position is more than 0.05 units from the goal and generates no reward signal.
All Goal2D augmentations modify the agent's state. Translate and TranslateProximal(p) modify the agent's position and reward but do not modify the goal. Rotate affects the agent's position, the goal position, and the agent's action, but does not change the reward.
§ ADDITIONAL EXPERIMENTS
§.§ Increasing the Update Ratio
In Section <ref>, we demonstrate that generating more augmented data to decrease the augmented replay ratio can drastically improve data efficiency. If we generated more augmented data by increasing the augmentation ratio, we could alternatively incorporate the additional augmented data by using more augmented data in each update (i.e., increasing the update ratio). Fig. <ref> shows agent performance as the augmentation ratio and update ratio increase proportionally. We additionally keep the replay age fixed by increasing the augmented buffer size proportionally. Learning is generally more data efficient with a larger update ratio, though it may harm performance, as seen in Slide. Notably, the improvements in data efficiency from decreasing the replay ratio (Fig. <ref>) are similar or better than those produced from an increased update ratio and can be achieved at a much lower computational cost per update.
§.§ Increasing the Batch Size
In Fig. <ref>, agents with more training data available to them use larger batch sizes for updates, giving these agents a seemingly unfair advantage over agents that learn from less data. However, Fig. <ref> shows that increasing the batch size without increasing the amount of data available to the agent harms performance, due to an increase in the expected number of times a transition is sampled for a gradient update <cit.>. By scaling the batch size with the amount of available learning data in Fig. <ref>, we keep the expected number of gradient updates per observed/augmented transition fixed across all experiments, providing a fairer comparison.
§.§ Goal2D Augmented Replay Ratio
As in Section <ref>, we decrease the augmented replay ratio β by generating more augmentations per observed transition. We scale the augmented replay buffer size proportionally and keep the ratio of augmented to observed data used in updates fixed at α = 1. As shown in Fig. <ref>, a lower augmented replay ratio increases data efficiency.
§ GENERALIZATION EXPERIMENTS
In this section, we investigate how state-action coverage, reward density, and the augmented replay ratio affect an agent's generalization ability. For Push, Slide, and PickAndPlace, we train agents over one quadrant of the goal distribution and evaluate agents over the full distribution. For Flip, an agent that generalizes well will achieve a high success rate over the full goal distribution. In general, our observations regarding data efficiency in the main body of this work also apply to generalization.
§.§ State-Action Coverage
State-action coverage results are shown in Fig. <ref>. An increase in state-action coverage via augmentation increases generalization. In the Panda tasks, using 50% TranslateGoalProximal(0) data yields similar performance to using 50% TranslateGoal data, indicating that coverage alone can largely explain the generalization improvements with TranslateGoal.
In Goal2D, increased coverage yields better generalization, though a considerable gap nevertheless exists between TranslateProximal(0) and Translate. Thus, reward density must play a larger role in Goal2D. We further investigate this point in the following section.
§.§ Reward Density
Reward density results are shown in Fig. <ref>. In Goal2D, a relatively small increase in reward density dramatically improves generalization; TranslateProximal(0) is roughly on par with using twice as much policy-generated data, while TranslateProximal(0.05) outperforms agents with 8 times as much policy-generated data. In the Panda tasks, increasing reward density has little effect on generalization.
§.§ Augmented Replay Ratio
Augmented replay ratio results are shown in Fig. <ref>. In Goal2D with Translate and both Panda tasks with TranslateGoal, reducing the augmented replay ratio β improves generalization performance at convergence. Rotate achieves 100% success for all values of β.
§ MUJOCO EXPERIMENTS
In this appendix, we include additional experiments on the following dense reward, continuous state and action MuJoCo environments: Swimmer-v4, Walker2d-v4, Ant-v4, and Humanoid-v4. These experiments focus on state-action coverage and the augmented replay ratio, since reward density is more relevant to sparse reward tasks. With dense reward tasks, we may need to consider the full distribution of rewards in the replay buffer, not just the average.
§.§ Environment Modifications
Some of the common MuJoCo environments do not exhibit symmetries that should exist intuitively (e.g., reflection symmetry, gait symmetry, etc.). We found two causes:
* Asymmetric physics are explicitly hard-coded in the robot descriptor files. We believe these to be typos, and simply update values such that intuitive symmetries exist.
* Symmetry-breaking numerical optimization algorithms are sometimes used to compute constraint forces and constrained accelerations. In particular, some environments use the Projected Gauss-Seidel (PGS) algorithm, which performs sequential updates and therefore breaks symmetries where physics should be symmetric. To address this issue, we use Newton's method, which performs parallel updates and therefore preserves symmetric physics. We note that while Newton's method is MuJoCo's default algorithm, some robot descriptor files originally specify the use of PGS.
Table <ref> describes all modifications made to environments to ensure intuitive symmetry. To simplify the creation of augmentation functions, we additionally exclude constraint forces and center-of-mass quantities from the agent's observations, since their interpretations are not well-documented and difficult to ascertain.
§.§ Augmentation Functions
We consider the following dynamics-invariant augmentations:
* Swimmer-v4
* Reflect: Reflect joint angles and velocities about the agent's central axis. The reward is unchanged.
* Walker2d-v4
* Reflect: Swap the observation dimensions of the left and right legs. The reward is unchanged.
* Ant-v4
* Reflect: Swap the observation dimensions of the front and back legs. Unlike the other reflections, this augmentation affects the reward. In particular, if the observed transition moves forward with velocity v, the reflected transition will move backwards with velocity -v. Thus, this reflection flips the sign of the “forward progress” reward term in the reward function.
* Rotate: Rotate the agent's orientation by θ sampled uniformly at random from [-π/6, π/6].
* Humanoid-v4 * Reflect: Swap the observations dimensions of the left and right arms/legs, and reflect torso joint angles, velocities, and orientation about the agent's central axis. The reward is unchanged.* Rotate: Rotate the agent's orientation by θ sampled uniformly at random from [-π/3, π/3]. §.§ Augmented Replay Ratio We repeat the same augmented replay ratio experiments detailed in Section <ref> for MuJoCo tasks. We decrease the replay ratio β by generating more augmentations per observed transition while keeping the amount of augmented data used in policy/value function updates fixed, i.e., we increase the augmentation ratio m while fixing the update ratio α. We consider all augmentations listed in Appendix <ref>.For the Rotate augmentation, we fix the update ratio at α=1 and generate m=1,2,4 augmented transitions per observed transition, corresponding to augmented replay ratios β = 1, 0.5, 0.25, respectively. Even though Reflect can only generate a single augmented transition per observed transition, we can nevertheless study the augmented replay ratio as follows. Rather than generating an augmenting every observed transition, we instead augment transitions with some probability p such that the augmented replay ratio is 1/p in expectation. We can think of p as a fractional augmentation ratio. We fix the update ratio at α=0.25 and consider p=0.25, 0.5, 1, again corresponding to augmented replay ratios β = 1, 0.5, 0.25, respectively. Results are shown in Fig. <ref>. A lower augmented replay ratio with Reflect yields slight improvements in data efficiency for all environments considered. Rotate improves data efficiency in Ant-v4. We observe much larger improvements with a lower augmented replay ratio in our core sparse reward tasks (Section <ref>). § TRAINING DETAILS We use the Stable Baselines3 <cit.> implementation of DDPG <cit.> and TD3 <cit.> with modifications to incorporate augmentation into the RL training loop.In Panda tasks, we use DDPG since we found that it performs substantially better than TD3. This observation was also made by  <cit.>.All Panda experiments use the default hyperparameters presented in Table <ref>. These parameters are nearly identical to the those used by <cit.> and <cit.>.The augmented replay ratio experiments in Fig. <ref> and update ratio experiments in Fig. <ref> use different values for two hyperparameters specified below: * Random action probability: 0* Update frequency: Every timestep (observed replay ratio of 1) We ran all experiments on a compute cluster using a mix of CPU-only and GPU jobs.This cluster contains a mix of Tesla P100-PCIE, GeForce RTX 2080 Ti, and A100-SXM4 GPUs.Due to limited GPU access, we only used GPUs for the augmented replay ratio experiments in Section <ref>, since these were our most computationally demanding experiments.We ran state-coverage, reward density, and generalization experiments on CPU only. CPU jobs took 12-36 hours each depending on the training budget, and GPU jobs took up to 16 hours each.
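For completeness, the following is a minimal sketch of how the pieces described above fit together in the training loop of Algorithm <ref>. The agent interface (select_action, update) and the environment API are assumptions made for illustration and do not correspond to the exact code used in our experiments.

import numpy as np
from collections import deque

def train_with_augmentation(env, agent, daf, total_steps, m=1, alpha=1.0,
                            batch_size=256, buffer_size=1_000_000, seed=0):
    # Sketch of the augmented off-policy training loop: observed and
    # augmented transitions live in separate buffers, m is the augmentation
    # ratio, and alpha is the update ratio (augmented : observed samples).
    rng = np.random.default_rng(seed)
    obs_buffer = deque(maxlen=buffer_size)
    aug_buffer = deque(maxlen=buffer_size * m)  # keeps the augmented replay age fixed as m grows

    s, _ = env.reset(seed=seed)
    for t in range(total_steps):
        a = agent.select_action(s)
        s2, r, terminated, truncated, _ = env.step(a)
        obs_buffer.append((s, a, r, s2))
        for _ in range(m):  # generate m augmentations per observed transition
            aug_buffer.append(daf(s, a, r, s2, rng))
        s = env.reset()[0] if (terminated or truncated) else s2

        if len(obs_buffer) >= batch_size:  # one update per environment step
            idx = rng.integers(len(obs_buffer), size=batch_size)
            batch = [obs_buffer[i] for i in idx]
            n_aug = int(alpha * batch_size)
            if len(aug_buffer) >= n_aug:
                idx_a = rng.integers(len(aug_buffer), size=n_aug)
                batch += [aug_buffer[i] for i in idx_a]
            agent.update(batch)  # any off-policy learner (e.g., DDPG or TD3)

Increasing m while leaving the update frequency unchanged lowers the augmented replay ratio in this sketch, and alpha controls how much of each batch is augmented data.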
http://arxiv.org/abs/2310.17786v1
{ "authors": [ "Nicholas E. Corrado", "Josiah P. Hanna" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231026212850", "title": "Understanding when Dynamics-Invariant Data Augmentations Benefit Model-Free Reinforcement Learning Updates" }
[NO \title GIVEN] [NO \author GIVEN] January 14, 2024 ====================== Text-to-SQL benchmarks play a crucial role in evaluating the progress made in the field and the ranking of different models. However, accurately matching a model-generated SQL query to a reference SQL query in a benchmark fails for various reasons, such as underspecified natural language queries, inherent assumptions in both model-generated and reference queries, and the non-deterministic nature of SQL output under certain conditions. In this paper, we conduct an extensive study of several prominent cross-domain text-to-SQL benchmarks and re-evaluate some of the top-performing models within these benchmarks, by both manually evaluating the SQL queries and rewriting them in equivalent expressions. Our evaluation reveals that attaining a perfect performance on these benchmarks is unfeasible due to the multiple interpretations that can be derived from the provided samples. Furthermore, we find that the true performance of the models is underestimated and their relative performance changes after a re-evaluation. Most notably, our evaluation reveals a surprising discovery: a recent GPT4-based model surpasses the gold standard reference queries in the Spider benchmark in our human evaluation. This finding highlights the importance of interpreting benchmark evaluations cautiously, while also acknowledging the critical role of additional independent evaluations in driving advancements in the field. § INTRODUCTION Significant progress has been made in translating natural language text to SQL statements over the past few years. The execution accuracy on the hold-out test of Spider <cit.>–a large-scale cross-domain text-to-SQL benchmark– has improved from 53.5 in May, 2020 <cit.> to 85.3 in March, 2023 <cit.>. The exact set match accuracy, without considering database cell values, on the same benchmark and over the same period has improved from 65.6 <cit.> to 74.0 <cit.>. Measuring such progress is hinged on reliable benchmarks and evaluation metrics. Two standard metrics for evaluating the performance in this domain have been exact set match accuracy and execution accuracy. The former measures if a model-generated SQL query lexically matches a reference SQL query, whereas the latter measures if a model-generated SQL query produces the same output as a reference query ( <ref>).Consider the example in Figure <ref>, which consists of a model-generated query (shown on the left) and a reference query (shown on the right). Both SQL queries return the id and name of makers that have more than 3 models. However, the model-generated query returns the column FullName, which gives the full name of a maker (e.g., “Ford Motor Company”), whereas the reference query given in the benchmark returns the column Maker, which gives the short common name of a maker (e.g., “Ford”). The model-generated query fails an exact set match since the column names in the select clause are different. The query outputs are also different and the model-generated query fails the execution accuracy as well. The natural language utterance is not specific about the type of name to be returned, and a human evaluator tags both queries correct. As the models improve, these types of failures make up most of the errors, and the performance metrics become less relevant, as shown in our evaluation. 
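To see how an automatic metric judges the example above, consider the following sketch, which runs both queries on a hypothetical miniature of the database; the table contents and exact column names are illustrative and not taken from the benchmark.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE car_makers (Id INTEGER, Maker TEXT, FullName TEXT);
CREATE TABLE model_list (ModelId INTEGER, Maker INTEGER, Model TEXT);
INSERT INTO car_makers VALUES (1, 'ford', 'Ford Motor Company');
INSERT INTO model_list VALUES (1, 1, 'focus'), (2, 1, 'fiesta'),
                              (3, 1, 'mustang'), (4, 1, 'f150');
""")

predicted = ("SELECT T1.Id, T1.FullName FROM car_makers AS T1 "
             "JOIN model_list AS T2 ON T1.Id = T2.Maker "
             "GROUP BY T1.Id HAVING COUNT(*) > 3")
reference = ("SELECT T1.Id, T1.Maker FROM car_makers AS T1 "
             "JOIN model_list AS T2 ON T1.Id = T2.Maker "
             "GROUP BY T1.Id HAVING COUNT(*) > 3")

# Execution accuracy compares result sets, so the two queries disagree even
# though a human reader would accept either interpretation of "name".
print(set(con.execute(predicted)))  # {(1, 'Ford Motor Company')}
print(set(con.execute(reference)))  # {(1, 'ford')}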
In particular, we re-evaluated all development set queries of Spider on which two top-performing models, one using a fine-tuned model <cit.> and another using a large language model <cit.>, failed. We found out that 25% of the queries generated by one model and 87% of the queries generated by the other model were indeed correct but were wrongly evaluated by the benchmark. For the same set of queries, our re-evaluation of the ground truth queries found 33% of the SQL queries incorrect, which was more than the number of incorrect queries generated by one of the models. This evaluation places one of the models above the ground truth queries in this re-evaluation.We further re-evaluated two well-knownbenchmarks, Spider <cit.> and Spider-DK <cit.>, and a newly released benchmark, BIRD <cit.>, and found similar problems in all three benchmarks that affect the evaluation. Our evaluation reveals that 18% of the queries in the train sets and 20%-23% of the queries in the dev sets of these benchmarks are subject to ties in the dataset and which one of the tied rows are returned. This means a model-generated query will be deemed incorrect if it does not return the same row, among tied rows, as the ground truth query. This can severely impact the evaluation, especially when there is a tight race among models. Considering these observations, it is crucial to emphasize the significance of additional independent evaluations when utilizing these benchmarks. To enhance the evaluation process further, a potential solution is to incorporate multiple SQL queries as the ground truth, each representing a different interpretation that may be valid.Our objective in this paper is to provide a comprehensive evaluation of existing Text-to-SQL benchmarks, underscoring the inherent issues they possess. We refrain from introducing a new dataset due to several considerations. First, addressing the identified issues by updating these benchmarks requires considerable human effort. Additionally, benchmarks in the Text-to-SQL domain, like Spider and BIRD, have holdout test sets used for official leaderboards and comparisons of text-to-SQL methodologies. We only have access to the development and training sets of these benchmarks, which limits our capability to alter the test sets. As a result, making changes only to the development and training sets would not completely address the benchmark's inherent problems, given that final performance is gauged using the problematic test sets.§ RELATED WORK Limited research has been dedicated to assessing the reliability and effectiveness of Text-to-SQL benchmarks. The authors of SQL-PaLM <cit.> note in their qualitative analysis of their model that some queries, labelled as incorrect by execution accuracy, were considered correct by human annotators.Similarly, <cit.> conduct an analysis highlighting the discrepancy between automatic evaluations and human annotations. They emphasize that certain queries produced by the models were labeled as incorrect SQL queries but human annotators labelled them as correct queries. Generally, a query that is equivalent (but not identical) to ground truthmay be mistakenly classified as incorrect by automated evaluation metrics. Another study by <cit.> identifies limitations within the Spider benchmark, such as issues with ties and certain syntactic problems. 
Their analysis is primarily focused on a subset of Spider, without quantifying the extent or impact of these limitations or conducting an assessment of other benchmarks.
§ TEXT-TO-SQL BENCHMARKS
Benchmarks have played a crucial role in advancing the field and providing a platform for evaluation. WikiSQL <cit.> consists of over 24,000 tables from Wikipedia with SQL queries generated based on some predefined rules and templates. The queries in this dataset are considered easy since they are all single-table queries. Spider, introduced by <cit.>, consists of 200 database schemas, of which 160 schemas are published as train and dev sets and 40 schemas are kept hidden for testing. The queries are written on those schemas by Computer Science students without using templates. This is considered a challenging dataset. Some other benchmarks are developed based on Spider, including Spider-Syn <cit.>, which replaces schema-related words with synonyms and eliminates explicit mentions of schema items in the natural language question (NLQ), and Spider-DK <cit.>, which introduces rarely observed domain knowledge into the Spider development set. Other benchmarks include FIBEN <cit.>, created for the financial domain, and BIRD <cit.>, which comprises 12,751 queries over 95 databases spanning 37 professional domains. Our study in this paper focuses on the cross-domain large-scale benchmark Spider, its variants Spider-DK and Spider-SYN, and a more recent cross-domain large-scale benchmark, BIRD. The selection of these benchmarks stems from their resemblance to real-world datasets, which is a crucial factor in conducting comprehensive research and analysis. One notable advantage of these benchmarks is the availability of a large training set, which plays a pivotal role in training and fine-tuning large-scale models. The inclusion of a substantial amount of training data enables the development of more robust and powerful models that can better handle the complexities and nuances present in real-world databases.
§ EVALUATION METRICS
The performance evaluation of text-to-SQL systems involves comparing them to a reference system, typically a gold standard set of known correct SQL queries. Generating a reference can be challenging due to multiple interpretations of natural language questions, while SQL queries are based on logic and tend to cover only one interpretation. Even if an interpretation is fixed, detecting whether a model-generated query is equivalent to a reference query is challenging, as the problem is undecidable in general <cit.>. Nonetheless, to assess progress, proxy measures of performance have been developed in the literature. As two such metrics, we review exact set match accuracy and execution accuracy in this paper. Under exact set match accuracy, SQL queries are evaluated by matching the query clauses and components independently, such as the select, where, having, group by, and order by clauses. The matching is based on comparing columns and predicates, disregarding the ordering of columns and predicates. An exact matching of literals can be challenging, since predicates that differ only in how their literal values are expressed will not match. However, accurately generating those literals without accessing database content may not be possible. Under exact set matching without values, which is used in Spider <cit.>, a matching of literals is not required. Two equivalent SQL queries can have different expressions and may not match under an exact set match. An alternative metric that can reduce the number of false negatives is the execution accuracy.
Under execution accuracy, the equivalence between a model-generated query and a reference query is established if they both produce the same results on all possible databases instances <cit.>. While testing all instances is impractical, running queries on a subset of instances can help identify candidates that are not equivalent to the reference query. Although execution accuracy can detect queries that are equivalent but not identical, it may mistakenly identify queries as equivalent if they produce the same result on tested instances. Therefore, an effective execution-based evaluation requires finding instances that cover various edge cases and can detect queries that are not equivalent to the reference. Test suite accuracy <cit.>, which is simply referred to as execution accuracy in Spider benchmark and in our work, aims to minimize false positives by evaluating queries on a carefully selected collection of database instances, known as a test suite. Nevertheless, an execution-based accuracy cannot capture all correct SQL queries, highlighting the limitations and the continued importance of human evaluation for reliable assessment.§ EXECUTION ACCURACY FAILURES A model-generated query can be correct but still fail the execution accuracy. We classify these failures into three categories:(1) failures due to ties in output, (2) ambiguity in schema matching, (3) wrong assumptions made about database content.§.§ Failures Due to Ties in OutputSQL queries can lead to ties and a subset of the tied rows may be returned. The selection of tied rows can vary between queries and this can affect the execution accuracy.We identify a few sources for such ties, as discussed next, and study their impact on benchmark evaluations in Section <ref>.Table <ref> provides a detailed breakdown of the number of queries that can potentially yield tied rows in both train and development set of Spider, Spider-DK, and BIRD benchmarks.§.§.§ Top with TiesSometimes the query asks for top rows that satisfy some conditions (e.g., the student with the highest GPA, or the youngest student). When there is a tie for the top position, and the query in natural language is not specific on how the ties should be handled, the corresponding SQL query may return all ties or only one. This becomes a problem in evaluation if a model-generated query and the reference query treat the ties differently.Figure <ref> provides a concrete example from the Spider dataset, illustrating this issue, where the reference SQL query in the benchmark fails to account for ties and returns only one of them using the LIMIT keyword.§.§.§ LIMIT NThe problems associated with using the LIMIT n clause in SQL queries is not limited to the top position, as discussed above. The use of this clause is problematic for evaluation in general.Firstly, without an explicit ordering, the result of a SQL query is expected to be a set. Two equivalent (but not identical) queries can return the same set of results, each listed in different orders, but selecting the first n rows from one ordering will not necessarily match the same selection from a different ordering. Secondly, with query results sorted, there can be a tie on row n with multiple rows having the same values. The ordering among tied rows can vary between two queries, and so is the first n rows that are returned. All benchmarks studied in this paper (Spider, Spider-DK, Spider-SYN, BIRD) use the limit keyword and suffer from the aforementioned problems associated with ties. 
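The following small sketch, using made-up rows, illustrates how a reference query that breaks a tie with LIMIT 1 can disagree with an equivalent tie-aware formulation, even though both answer the question "who has the highest GPA?".

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE student (name TEXT, gpa REAL);
INSERT INTO student VALUES ('Ava', 4.0), ('Ben', 4.0), ('Cal', 3.2);
""")

q_limit = "SELECT name FROM student ORDER BY gpa DESC LIMIT 1"
q_all_ties = "SELECT name FROM student WHERE gpa = (SELECT MAX(gpa) FROM student)"

print(con.execute(q_limit).fetchall())     # one arbitrary row among the tied pair
print(con.execute(q_all_ties).fetchall())  # [('Ava',), ('Ben',)] -- every tied row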
§.§.§ GROUP BY
Many text-to-SQL benchmarks encounter a different type of issue associated with ties, arising from the incorrect usage of non-aggregated columns with respect to the SELECT and GROUP BY clauses. Within the benchmarks, these ties manifest in two scenarios: 1) a column appears in the SELECT clause without being inside an aggregation function and without being included in the GROUP BY clause; 2) the SELECT clause contains a mix of aggregated and non-aggregated columns without utilizing a GROUP BY clause. In both cases, multiple records can be associated with the same grouping column or aggregation value, whereas each group can only return one record. Some database systems, including Oracle and DB2, prevent these cases by treating them as syntax errors. However, other database systems such as SQLite and MySQL take a lazier approach (sometimes for efficiency reasons) and allow these cases to happen. Many text-to-SQL benchmarks follow SQLite syntax and suffer from this issue. The affected queries in our benchmarks were identified after migrating from SQLite to PostgreSQL, as detailed in Section <ref>, and checking for queries that failed during PostgreSQL execution. Figure <ref> illustrates one example of such a problem from the Spider dataset.
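A toy example of the first scenario is sketched below with an invented schema: SQLite happily returns one arbitrary row per group, whereas PostgreSQL rejects the same statement because the non-aggregated column is not listed in the GROUP BY clause.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer TEXT, item TEXT, price REAL);
INSERT INTO orders VALUES ('dana', 'pen', 2.0), ('dana', 'book', 15.0);
""")

# 'item' is neither aggregated nor grouped: SQLite picks an arbitrary item
# for the 'dana' group, so the output is not uniquely determined.
ambiguous = "SELECT customer, item, COUNT(*) FROM orders GROUP BY customer"
print(con.execute(ambiguous).fetchall())  # e.g., [('dana', 'pen', 2)]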
Consequently, the assumptions made by these models may not align with the actual ground truth, resulting in SQL queries that are correct under the assumption made but do not match the reference query in the benchmark. One observed case is when certain conditions (e.g., PetType=`dog') are omitted from SQL queries due to the erroneous assumption that the condition holds for all rows in the database.Figure <ref> exemplifies this issue using an example from the Spider dataset, where both queries yield the same answer on a specific database instance. However, changing the database values could result in failure, especially when evaluating performance using test-suite accuracy, which involves querying different database instances. Another case observed in the benchmarks is when the ground truth SQL queries assume a specific column has unique values, but in reality, that column does not possess that unique constraint. Figure <ref> depicts an example of this problem from the Spider dataset. § EXPERIMENTSTo understand the extent at which the aforementioned problems affect the benchmarks, our evaluation and the ranking of the models, we conducted three types of evaluations on three benchmarks: Spider, Spider-DK, BIRD. Our findings here apply to the Spider-SYN dataset as well, which employs the same SQL queries as in the Spider dataset. For the same reason, we did not conduct a separate analysis of that benchmark. §.§ Evaluation Through Query Rewriting In this experiment, our focus is on ties and how a tie breaking strategy affects the benchmarks and our evaluation. This is done through query rewriting. Automating query rewriting faces inherent challenges, particularly when dealing with failures stemming from schema ambiguity, erroneous assumptions about the database content, and the ambiguity of natural language utterances. These challenges arise because there is no specific structure to address the failures systematically. Successful query rewriting in these cases necessitates a deeper understanding of table and column semantics to identify ambiguities and erroneous assumptions. In cases of ambiguity, human expertise is essential to disambiguate the context, as these situations often lack clear guidelines. Detecting erroneous assumptions often involves introducing new data to the database and meticulously reviewing and correcting failed queries on a case-by-case basis. Therefore, our efforts have been channeled towards rewriting queries concerning tied values, which adhere to a specific syntax structure, and the problems associated with the ambiguity in schema matching and wrong assumptions on database content are studied in the next section. Many benchmark queries use “LIMIT 1” to find top rows that satisfy some conditions. If there are ties on top, one arbitrary row among ties is returned. An alternative is to return all ties. We rewrote all queries that used “LIMIT 1” to return all ties. This was done by introducing min() and max() aggregation functions within nested queries to accurately identify extreme values. An example of such rewriting is shown in Figure <ref>. Breaking ties for queries that used “LIMIT n” for n>1 was not straightforward, and those queries were left unchanged.For resolving ties introduced by an incorrect usage of GROUP BY in benchmark queries, we included all non-aggregated columns from the SELECT clause in the GROUP BY clause. For example, if the SELECT clauses included id and name, but the GROUP BY clause only included name, we added id to the GROUP BY clause. 
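Before discussing the effect of these rewrites, we illustrate the first of the two rules. The sketch below is a rough, regex-based toy that handles only the simplest single-table pattern; it is not the exact procedure applied to the benchmarks, which also had to deal with joins, table aliases, and WHERE clauses on a case-by-case basis.

```python
import re

# Matches only the simplest single-table pattern with an explicit ASC/DESC;
# real benchmark queries would need a proper SQL parser.
PATTERN = re.compile(
    r"SELECT\s+(?P<cols>.+?)\s+FROM\s+(?P<table>\w+)\s+"
    r"ORDER\s+BY\s+(?P<key>[\w.]+)\s+(?P<dir>ASC|DESC)\s+LIMIT\s+1\s*$",
    re.IGNORECASE,
)

def rewrite_limit1(query: str) -> str:
    """Rewrite 'ORDER BY x LIMIT 1' into a tie-preserving nested query."""
    m = PATTERN.match(query.strip())
    if not m:
        return query  # leave anything we cannot handle untouched
    agg = "MAX" if m.group("dir").upper() == "DESC" else "MIN"
    return (
        f"SELECT {m.group('cols')} FROM {m.group('table')} "
        f"WHERE {m.group('key')} = (SELECT {agg}({m.group('key')}) "
        f"FROM {m.group('table')})"
    )

print(rewrite_limit1("SELECT name FROM student ORDER BY gpa DESC LIMIT 1"))
# SELECT name FROM student WHERE gpa = (SELECT MAX(gpa) FROM student)
```

The GROUP BY repair described above was applied in a similarly mechanical way, by appending the missing non-aggregated SELECT columns to the GROUP BY list.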
Adding the missing columns to the GROUP BY clause will not affect queries where there is a one-to-one mapping between id and name, but it resolves the ambiguity when such a mapping does not hold. With these two changes, 16% to 20% of the reference queries in our benchmarks were affected. Under a perfect evaluation scheme, the accuracy should not be affected by these changes, which simply resolve the uncertainty. Table <ref> displays both the execution accuracy and the exact set match accuracy for the reference queries from the BIRD, Spider, and Spider-DK benchmarks after our modifications. It is important to highlight that the performance metrics provided in this table encompass the entire development set of these benchmarks, combining both modified and unaltered queries. For clarity, in the Spider dataset, 206 out of 1034 queries were modified. The performance assessment took into account a mixed set of predicted queries: 206 that were adjusted and 828 that remained as originally presented. This culminated in an execution accuracy of 92.3 percent. It can be noted that the execution accuracy is not as adversely affected as the exact set match accuracy. We hypothesize that this could be attributed to the absence of ties in the test data used for these benchmarks. Evidence of this is the following two queries, and , which are labelled as a correct match by the test scripts of Spider.§.§ Human Evaluation To gain a deeper understanding of the limitations within the benchmarks, we conducted an experiment focused on the widely used text-to-SQL benchmark, the Spider dataset. Specifically, we evaluated two top-performing methods from the Spider leaderboard: DIN-SQL <cit.> and T5-large + PICARD <cit.>. This experiment involved running these methods on the development set of Spider, which comprised 1034 question-query pairs. From the results obtained, we extracted the questions for which both methods failed to produce a correct answer based on execution accuracy, resulting in 102 pairs. We then presented these questions, along with the SQL queries generated by the methods as well as the ground truth SQL queries (treating them the same as model-generated queries), to two annotators [The human annotators are the authors of this paper.] for labelling. The annotators had access to the database schemas and were tasked with identifying the queries they deemed correct for each question, without knowing which model generated which query or whether a query came from the ground truth. Annotators could also create databases and validate queries, ensuring a thorough evaluation. Following our initial labelling process, we wanted to minimize the potential impact of human errors in our evaluation. For this, we identified queries with inconsistent labels and presented them back to the annotators. Each annotator was asked to provide an explanation for their assigned labels. In the final stage of evaluation, each annotator was presented with the inconsistent queries and the explanations provided by the other annotator. They were then asked whether they would revise their labels based on this additional information. The results of this experiment are presented in Table <ref>, which reports the outcome of the human evaluation on the sample of 102 queries for which both DIN-SQL and T5+PICARD were deemed incorrect in terms of execution accuracy. SQL experts conducted this evaluation, with 81.6% of these queries judged as correct for DIN-SQL, and only 25.5% for T5+PICARD.
Notably, among the reference queries, only 67.3% were deemed correct.Even after the second round of annotation, a few queries (more specifically, four question-query pairs) still exhibit inconsistent labeling by the annotators. The main challenge with these particular pairs is the inherent ambiguity in the questions or the subjectivity of interpretations, which leads to a lack of a definitive answer. Figure <ref> demonstrates one example of such a question with two possible SQL query as answers.An intriguing observation emerged from this experiment: the DIN-SQL method, powered by GPT-4, produced the highest number of correct answers, surpassing even the ground truth SQL queries. This finding sheds light on the limitations of the current benchmarks and raises doubts about the reliability of current leaderboards and performance metrics.§.§ Error Analysis of Human EvaluationWe performed an error analysis of the SQL queries that were labelled as incorrect in our human evaluation to better understand the error types and causes and to provide insights into areas for improving the ground truth SQL queries. Additionally, we compared the errors in ground truth queries with those of fine-tuning and prompting approaches. The identified errors, categorized into five groups, are briefly discussed next. The distribution of SQL queries across these groups is depicted in Figure <ref>. SchemaThe primary issue responsible for the majority of errors, affecting both the reference SQL queries and the two methods, is the incorrect usage of schemas, which arises when the SQL query utilizes incorrect tables or columns to answer the given question. These errors indicate ambiguities in the database schema and/or questions, as discussed in Section <ref>. Notably, the reference set shows the least number of errors, which is closely followed by DIN-SQL. ConditionThe second-largest group of errors observed pertains to the usage of incorrect conditions within the SQL queries. Unlike the schema group, where the tables and columns were incorrect, in this group, the correct tables and columns are used, but the conditions in the WHERE clause are erroneous. This error primarily manifested in the queries generated by the T5-PICARD method, but was also present in the reference set. The T5 model's tendency to introduce additional columns or omit necessary conditions could be attributed to its smaller size relative to larger models like GPT-4, limiting its grasp of intricate SQL syntax. Nested The source of this problem is using a non-unique column for the nested SQL query, as also discussed in Section <ref>. Figure <ref> shows an example of such an error in a SQL query. This error was more common in the SQL queries provided in the reference set as well as those of T5-PICARD. GROUP BYThis category includes queries that incorrectly used GROUP BY, resulting in ambiguity or uncertainty in the result as discussed in Section <ref>. Notably, the reference set showed the largest number of errors, closely followed by the fine-tuned T5-PICARD. DIN-SQL exhibited the least number of errors. LIMITAs highlighted in Section <ref>, one of the error scenarios involves not properly handling ties when using the LIMIT keyword. The DIN-SQL method demonstrates a lower incidence of this type of error, attributed to its prompting nature. Conversely, T5-PICARD exhibits identical performance to the ground truth in this particular case. 
§.§ Standard SQL validation We undertook an extensive review of the development sets of the Spider, BIRD, and Spider-DK benchmarks through the lens of standard SQL validation. The objective was to identify some of the problematic queries discussed in Section <ref> and assess the portability of the benchmarks. As part of this analysis, we migrated the databases and queries of these three benchmarks from SQLite to PostgreSQL. Our decision to use PostgreSQL, a widely recognized RDBMS, stemmed from its rigorous adherence to SQL standards. Following the migration, we executed every query from the development set on these PostgreSQL databases, with a keen focus on identifying queries that failed during PostgreSQL execution. Table <ref> provides a breakdown of queries by error type across all three benchmarks. Notably, errors such as UndefinedColumn, SyntaxError, and UndefinedFunction emerge due to the different SQL dialects supported by SQLite and PostgreSQL. These variances necessitate adjustments to make the queries compatible with PostgreSQL standards. For instance, the Spider dataset frequently showcases errors stemming from PostgreSQL's strict typing conventions. While SQLite allows for comparisons of int with text, PostgreSQL does not. Also, some queries run into problems because of SQLite-exclusive functions, such as strftime and iif, or because PostgreSQL interprets literals in double quotations as column names. The two other types of failures, GROUP BY and ORDER BY, included queries that introduced ambiguities to the benchmarks, as discussed in Section <ref>. It should be noted that these benchmarks present a range of issues that are not solely confined to syntax. Challenges related to wrong assumptions on DB content and ambiguities in schema matching are notably pervasive.§ DISCUSSION Our analysis (<ref>) reveals the limitations of major text-to-SQL benchmarks, highlighting the fact that even with a perfect model, achieving perfect accuracy on these benchmarks is not possible. The accuracies presented in Table <ref> serve as a loose upper bound for the accuracy achievable by models. It is loose because our rewritings were unable to address cases that required manual intervention to reconstruct a correct query. Thus, the upper bound is expected to be lower once other issues, such as wrong assumptions on the database content and ambiguity in schema matching, are taken into account. Our human evaluation (<ref>) further supports our claim and provides more insight into the limitations within one of the benchmarks studied. The results in Table <ref> demonstrate that prompting methods, such as DIN-SQL, are less affected by the inherent limitations of the training set in the benchmarks. However, they are not fully immune, because the few-shot input-output demonstrations are taken from the train set. On the other hand, fine-tuned approaches, such as T5+PICARD, perfectly mirror the distribution of errors seen in the ground truth queries for the nested, LIMIT, and GROUP BY types. The largest number of wrong queries in the schema and condition classes belongs to our fine-tuned model, due to the inability of the model to generate correct SQL queries.
Our re-evaluation of well-known benchmarks (Spider, Spider-DK, and BIRD) uncovers common systematic issues that affect the evaluation process and performance estimates, revealing that a significant portion of queries in the train and dev sets are impacted by these issues. Incorporating multiple SQL queries as the ground truth and representing different interpretations of queries offer a promising solution to enhance the evaluation process and achieve a more comprehensive and accurate assessment of Text-to-SQL models.§ LIMITATIONSIn this study, our focus was primarily on cross-domain text-to-SQL benchmarks and models. The failure cases identified in this domain are likely to be present in other domain-specific text-to-SQL benchmarks and models as well. It is essential to conduct further analysis to identify specific failure cases within domain-specific benchmarks and models.Furthermore, it is worth mentioning that our work has a limitation regarding the analysis of failure cases that lack a specific structure and require manual effort for detection. Identifying and addressing such problems necessitates extensive work. The purpose of our study was to highlight these failure cases; a more in-depth analysis of their prevalence can provide a clearer understanding of their impact on the overall performance of text-to-SQL systems.§ ETHICS STATEMENTIn this paper, we acknowledge the importance of ethical considerations in conducting and presenting our research. We affirm our commitment to comply with the ACL Ethics Policy and adhere to ethical guidelines and principles throughout the entire research process. We have taken necessary measures to ensure the privacy, confidentiality, and consent of individuals or entities involved in our data collection, experimentation, and analysis. Any personal or sensitive information used in this study has been appropriately anonymized and safeguarded.Furthermore, we have made efforts to minimize any potential biases and discrimination in our research design, data selection, and interpretation of results. We have strived for transparency, accuracy, and fairness in reporting our findings, and we have provided appropriate citations and acknowledgments to give credit to the work of others.By including this ethics statement, we aim to demonstrate our dedication to conducting research with integrity, respecting ethical principles, and contributing to the responsible advancement of knowledge in our field. acl_natbib
http://arxiv.org/abs/2310.18538v1
{ "authors": [ "Mohammadreza Pourreza", "Davood Rafiei" ], "categories": [ "cs.CL", "cs.DB", "cs.LG" ], "primary_category": "cs.CL", "published": "20231027233614", "title": "Evaluating Cross-Domain Text-to-SQL Models and Benchmarks" }
School of Physics and Astronomy and William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, MN 55455, USA School of Physics and Astronomy and William I. Fine Theoretical Physics Institute, University of Minnesota, Minneapolis, MN 55455, USA We analyze superconductivity in a multi-orbital fermionic system near the onset of a nematic order,using doped FeSe as an example. We associate the nematic order with spontaneous polarization between d_xz and d_yz orbitals.Wederive the pairing interaction, mediated by soft nematic fluctuations, and show that it is attractive, and that its strength depends on the position on the Fermi surfaceAs the consequence, right at the nematic quantum-critical point (QCP), superconducting gap opens up at T_c only at special points and extends into finite arcs at T < T_c.In between the arcs the Fermi surface remains intact. This gives rise to highly unconventional behavior of the specific heat, with no jump at T_c and an apparent finite offset at T=0, when extrapolated from a finite T. We argue that this behavior is consistent with the specific heat data for FeSe_1-xS_x near critical x for the onset of a nematic order.We discuss the behavior of the gap away from a QCP and the pairing symmetry, and apply the results to FeSe_1-xS_xand FeSe_1-xTe_x, which both show superconducting behavior near the QCP distinct from that in a pure FeSe. Unconventional Superconductivity near aNematic Instability in a Multi-Orbital system Andrey Chubukov January 14, 2024 ======================================================================================§ INTRODUCTION.It is widely believed that superconductivity in the cuprates, Fe-pnictides, heavy fermion, and other correlated electron systems is of electronic origin and at least in some portion of the phase diagram can be understood as mediated by soft fluctuations of a particle-hole order parameter, which is about to condense. The most studied scenario of this kind is pairing mediated by spin fluctuations. For the cuprates, it naturally leads to d_x^2-y^2 pairing. For Fe-pnictides,spin-mediated pairing interaction is attractive in both s-wave (s^+-) and d_x^2-y^2 channels. The argument, why pairing holds despite that the electron-electron interaction is repulsive, is the same in the two cases - antiferromagnetic spin fluctuations, peaked at momentum Q, increase the magnitude of a repulsive pairing interaction at the momentum transfer Q (the pair hopping from (k, -k) to k+Q, -k-Q). A repulsive pair hoppingallows for a solution for a gap function, which changes sign between Fermi points at k_F and k_F + Q. There is still a repulsion at small momentum transfer, which is detrimental to any superconductivity,and the bare Coulomb interaction is indeed larger at small momenta than at Q.However, when spin fluctuations are strong, a repulsion at Q gets stronger than at small momentum, andsign-changing superconducting gap does develop.This scenario has been verified by e.g., observation of a spin resonance peak below T_c<cit.>. Spin fluctuations were also identified as the source for spontaneous breaking of lattice rotational symmetry (nematicity)in Fe-pnictides, as nematicity there develops in the immediate vicinity of the stripe magnetic order with momentaQ =(π,0) or (0,π).It has been argued multiple times <cit.> that spin fluctuations create an intermediate phase with a composite spin order,which breaks symmetry between (π,0) and(0,π), but reserves O(3) spin-rotational symmetry. 
Situation is different, however, in bulk Fe-chalcogenide FeSe, which has been extensively studied in the last few years using various techniques.A pure FeSe develops a nematic order at T_p ∼ 85K, and becomes superconducting at T_c ∼ 9K. A nematic order decreases upon isovalent doping by either S or Te (FeSe_1-xS_x and FeSe_1-xTe_x) and in both cases disappears at critical x_c (0.17 for S doping and 0.53 for Te doping).There is no magnetic order below T_p for any x. The absence of magnetism lead to two conjectures: (i)that nematicityin FeSe isa d-wave Pomeranchuk order, with order parameter bilinear in fermions, rather than a composite spin order, for which an order parameter in a 4-fermion operator, and (ii) that the origin of superconductivity may be different from the one in Fe-pnictides.On (i), there is a consistency between the Pomeranchuk scenario for nematicity and the data already in pureFeSe: a Pomeranchuk order parameter necessary changes sign between hole and electron pockets, consistent with the data<cit.>, and the temperature dependence of nematic susceptibility, measured by Raman, is in line with the Pomeranchuk scenario<cit.>. On (ii), superconductivity in pure FeSe islikelystill mediated by spin fluctuations<cit.>, as evidenced by the correlation between NMR 1/T_1 and superconducting T_c, the consistencybetween ARPES data on the gap anisotropy and calculations within spin fluctuation scenario,and the fact that a magnetic order does develop under pressure<cit.>. However near and above critical x_c, magnetic fluctuations are far weaker <cit.>, e.g., a magnetic order does not develop until high enough pressure. It has been argued<cit.> based on a variety of data (see below) that superconductivity for such x is qualitatively different from the one in pure FeSe. One argument here is that the gap anisotropy changes sign,another is that T_c in FeSe_1-xTe_x shows a clear dome-like behavior around x_c.In this communication we address the issue whether superconductivity in doped FeSe near x_c can be mediated by nematic fluctuations.It seems natural at a first glance to replace spin fluctuations by soft nematic fluctuations as a pairing glue. However, there are two obstacles, both related to the fact that soft nematic fluctuations are at small momentum transfer. First,they do not affect the pair hopping term between hole and electron pockets, which is the key element for spin-mediated superconductivity. Second, the bare pairing interaction at small momentum transfer is repulsive, and dressing it bynematic fluctuations only makes the repulsion stronger.We show that the pairing interaction V_eff (k,-k;p,-p), mediated by nematic fluctuations (first two momenta are incoming, last two are outgoing), does become attractive near x_c, however for a rather special reason, related to the very origin of the Pomeranchuk order.Namely, the driving force for a d-wave Pomeranchuk order is density-density interaction between hole and electron pockets. 
It does have a d-wave component U^d_he because the low-energy excitations in the band basis are constructed out of the d_xz and d_yz orbitals. A sign-changing nematic order (a spontaneous splitting of the densities of d_xz and d_yz orbitals) develops <cit.> when U^d_he exceeds the d-wave intra-pocket repulsion, much like a sign-changing s^+- order develops when pair hopping exceeds the intra-pocket repulsion in the particle-particle channel. By itself, U^d_he does not contribute to pairing; however, taken at second order, it produces an effective attractive interaction between fermions on the same pocket. We go beyond second order and collect all ladder and bubble diagrams which contain d-wave polarization bubbles at a small momentum transfer. We show that this induced attraction is proportional to the susceptibility for a d-wave Pomeranchuk order. Because the nematic susceptibility diverges at x_c, the induced attraction necessarily exceeds the bare intra-pocket repulsion in some range around x_c, i.e., the full intra-pocket pairing interaction becomes attractive. This attractive interaction V_eff (k,-k;p,-p) ∝ A_k,p χ_nem (|k-p|) is rather peculiar because it inherits from U^d_he the d-wave form-factor A_k,p = cos(2θ_k)cos(2θ_p), where θ_k and θ_p specify the location of the fermions (in our case, this holds on the hole pocket, which is made equally out of d_xz and d_yz orbitals). A similar pairing interaction has been suggested earlier for one-band models on phenomenological grounds <cit.>, assuming that the d-wave nematic coupling is attractive. We show that such an interaction emerges in a model with purely repulsive interactions, once we add the pairing component induced by the inter-pocket density-density interaction U^d_he. Because χ_nem (k-p) diverges at k = p, the presence of the form-factor A_k,p in V_eff (k,-k;p,-p) implies that the strength of the attraction depends on the position of a fermion on the Fermi surface. As a consequence, the gap function on the hole pocket is the largest around hot points, specified by θ_h = n π/2, n=0-3, and rapidly decreases in cold regions centered at θ_c = n π/2 + π/4, n=0-3. This has already been emphasized in the phenomenological study <cit.>. This behavior shows up most spectacularly right at a nematic QCP, where the gap emerges at T_c only at the hot spots and extends at smaller T into finite-size arcs. The arc length grows as T decreases, but as long as T is finite, there exist cold regions where the gap vanishes, i.e., the system preserves pieces of the original Fermi surface. At T=0, the gap opens everywhere except at the cold spots θ_c, where the nematic form factor cos2θ vanishes, but it is still exponentially small near them, Δ(θ) ∝ exp[-1/(θ-θ_c)^2]. This, we argue, leads to a highly unconventional behavior of the specific heat coefficient C_v/T, which does not display a jump at T_c and instead increases as (T_c-T)^1/2, passes through a maximum at T ∼ 0.8 T_c, and behaves at smaller T as if there were a non-zero residual C_v/T at T → 0 (see Fig.<ref>).
In reality, C_v/T vanishes at T=0, but nearly discontinuously, as 1/(log(T_c/T))^1/2. Also, because the regions where the gap is non-zero are disconnected, the gap phases are uncorrelated, and s-wave, d-wave, and two-component p-wave (k_x + e^iα k_y) states are degenerate. At a finite distance from a QCP and/or in the presence of a non-singular pair-hopping between hole and electron pockets, the gap function becomes continuous, but the maxima at θ = n π/2 remain. The specific heat coefficient C(T)/T acquires a finite jump at T_c, but holds the same behavior at intermediate T as in Fig.<ref>, within some distance to a QCP. The condensation energies for s-wave, d-wave and p-wave states split. Which order develops depends on the interplay between the attractive pairing interaction, mediated by nematic fluctuations, and the non-singular repulsion. The latter is far stronger in the s-wave and d-wave channels, which favors p-wave symmetry. In this case, the most likely outcome is the k_x ± i k_y state, which breaks time-reversal symmetry. § RESULTS §.§ Model.    The electronic structure of pure/doped FeSe in the tetragonal phase consists of two non-equal hole pockets, centered at Γ, and two electron pockets centered at X = (π, 0) and Y = (0, π) in the 1FeBZ. The hole pockets are composed of d_xz and d_yz fermions, the X pocket is composed of d_yz and d_xy fermions, and the Y pocket is composed of d_xz and d_xy fermions. The inner hole pocket is quite small and likely does not play much role for nematic order and superconductivity. We assume that the heavy d_xy fermions also do not play much role and consider an effective two-orbital model with a single d_xz/d_yz circular hole pocket and mono-orbital electron pockets (a d_yz X-pocket and a d_xz Y-pocket). We define the fermionic operators for the mono-orbital Y and X pockets as f_1 and f_2, respectively (f_1,k,σ = d_xz, k+Y, σ, f_2,k,σ = d_yz, k+X, σ). The band operator for the hole pocket is h_k,σ = cosθ_k d_yz, k, σ + sinθ_k d_xz, k, σ. The kinetic energy is quadratic in the fermionic operators, and there are 11 distinct C_4-symmetric interactions <cit.> involving low-energy fermions near the hole and the two electron pockets (see Ref. <cit.> for details). We take the absence of strong magnetic fluctuations in doped FeSe as evidence that interactions at momentum transfer between Γ and X (Y) are far smaller than the interactions at small momentum transfer and neglect them. This leaves 6 interactions with small momentum transfer: 3 within hole or electron pockets and 3 between densities of fermions near different pockets. The single interaction between hole fermions contains an angle-independent term and terms proportional to cos2θ_k cos2θ_p and sin2θ_k sin2θ_p, the two interactions between hole and electron pockets contain an angle-independent and a cos2θ_k term, where k belongs to the hole pocket, and the three interactions between fermions on electron pockets contain only angle-independent terms. §.§ Nematic susceptibility As discussed above, we associate the nematic order with a d-wave Pomeranchuk order. In the orbital basis, this order is an orbital polarization (the densities of d_xz and d_yz fermions split). In the band basis, we introduce two d-wave order parameters on the hole and electron pockets: ϕ_h = ∑_k,σ ⟨ h_k,σ^† h_k,σ ⟩ cos 2θ_k and ϕ_e = ∑_k,σ [⟨ f_2,k,σ^† f_2,k,σ ⟩ - ⟨ f_1,k,σ^† f_1,k,σ ⟩]. The set of two coupled self-consistent equations for ϕ_h and ϕ_e is obtained by summing ladder and bubble diagrams (see Ref. <cit.>) and is ϕ_h = -ϕ_h U_h^d Π_h^d - ϕ_e U^d_he Π_e , ϕ_e = -ϕ_e U_e^d Π_e - 2 ϕ_h U^d_he Π_h^d.
Here, Π_h^d = -∫_p G_p^h G_p^h cos^2 2θ_p and Π_e = -(1/2) ∫_p (G_p^X G_p^X + G_p^Y G_p^Y) are the polarization bubbles for the hole and the electron pockets (G^i_p = G^i (p, ω_m) are the corresponding Green's functions, and ∫_p stands for T ∑_ω_n ∫ d^2 p/(2π)^2). As defined, Π_h^d and Π_e are positive. The couplings U_h^d, U_e^d and U^d_he are the d-wave components of the intra-pocket and inter-pocket density-density interactions. All interactions are positive (repulsive). The analysis of (<ref>) shows that the nematic order with different signs of ϕ_h and ϕ_e develops when U^d_he is strong enough, with the condition 2(U_he^d)^2 ≥ U_h^d U_e^d. The nematic susceptibility is inversely proportional to the determinant of (<ref>). Evaluating it at a small but finite momentum q, we obtain χ_nem (q) ∝ 1/Z, where Z = (1+U_h^d Π_h^d (q))(1+U_e^d Π_e (q)) - 2(U^d_he)^2 Π_h^d (q) Π_e(q). §.§ Pairing interaction Our goal is to verify whether the pairing interaction near the onset of a nematic order is (i) attractive, (ii) scales with the nematic susceptibility, and (iii) contains the d-wave form-factor cos^2 (2θ_k). To do this, we use the fact that χ_nem (q) contains the polarization bubbles Π^d_h (q) and Π_e(q), and obtain the fully dressed pairing interaction by collecting the infinite series of renormalizations that contain Π^d_h (q) and Π_e (q) with small momentum q. This can be done analytically (see Refs. <cit.> for details). Because q is small, the dressed pairing interactions are between fermions on only the hole pocket or only the electron pockets: V_eff^h(k, q) = V_eff^h (k + q/2, -(k + q/2); k - q/2, -(k - q/2)), V_eff^e(k, q) = V_eff^e (k + q/2, -(k + q/2); k - q/2, -(k - q/2)). We find V_eff^h(k, q) = U^d_h/[1 + U^d_h Π^d_h (q)] - A_h (U^d_he)^2 cos^2 2θ_k χ_nem (q) + ..., V_eff^e(k, q) = U^d_e/[1 + U^d_e Π_e (q)] - A_e (U^d_he)^2 χ_nem(q) + ..., where A_h = Π_e/[1 + U_h^d Π_h^d (q)] and A_e = (1/2) Π_h^d/[1 + U_e^d Π_e (q)]. The dots stand for other terms which do not contain Π_h^d (q) and Π_e (q) and are therefore not sensitive to the nematic instability. We see that each interaction contains two terms. The first is the dressed intra-pocket pairing interaction. It does get renormalized, but remains repulsive and non-singular at the nematic instability. The second term is a distinct interaction, induced by U^d_he. It is (i) attractive, (ii) scales with the nematic susceptibility, and (iii) contains the d-wave nematic form-factor cos^2 2θ_k. We emphasize that the attraction is induced by the inter-pocket density-density interaction, despite the fact that the relevant nematic fluctuations are with small momenta and the pairing interactions involve fermions from the same pocket. §.§ Gap equation Near a nematic QCP, χ_nem (q) is enhanced and the interaction induced by U^d_he is the dominant one. In the absence of pair-hopping, the gap equation decouples between the hole and electron pockets. The most interesting case is when the gap develops first on the hole pocket (the case A_h > A_e). We use the Ornstein-Zernike form χ_nem(q) = χ_0/(δ^2 + q^2), where δ is the distance to the nematic QCP in units of momentum. At small δ, the relevant q are of order δ. To first approximation, the non-linear equation for Δ_h (k) then becomes local, with an angle-dependent coupling: 1 = g cos^2 2θ_k ∫_0^Λ dx tanh(√(x^2+|Δ_h(k)|^2)/2T)/√(x^2+|Δ_h(k)|^2), where g = m(U^d_he)^2 χ_0/(4π k_F δ).
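For completeness, the angular structure of the onset temperature follows from the linearized form of this local equation. Setting Δ_h → 0 and using the standard BCS logarithm, 1 = g cos^2 2θ_k ∫_0^Λ (dx/x) tanh(x/2T_c(θ_k)) ≈ g cos^2 2θ_k log(1.13 Λ/T_c(θ_k)), so that T_c(θ_k) = 1.13 Λ exp(-1/(g cos^2 2θ_k)). The onset temperature is therefore maximal at the hot spots θ_k = nπ/2, where it reduces to T_c = 1.13 Λ exp(-1/g), and is exponentially suppressed away from them.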
Because the coupling is larger at θ_k = n π/2, n=0-3, the gap appears at T_c = 1.13 Λ exp(-1/g) only at these points. As T decreases, the range where the gap is non-zero extends to four finite arcs with the width θ_0 (T) = 0.5 arctan√(g log(T_c/T)) (see Fig.<ref>b). In the areas between the arcs, the original Fermi surface survives. We emphasize that this is the original Fermi surface, not the Bogoliubov one, which could potentially develop inside the superconducting state <cit.>. We plot |Δ_h (k)| along the Fermi surface at T=0 and at a finite T in Fig.<ref>a,b, and plot θ_0(T) as a function of T/T_c in Fig.<ref>c. The phases of the gap function in the four arcs are not correlated, hence s-wave, d-wave (d_x^2-y^2) and two-component p-wave (k_x + e^iα k_y with arbitrary α) states are all degenerate. At T=0, the arc ends merge at θ_k = n π/2 + π/4, n=0-3, and the gap becomes non-zero everywhere except these cold spots (red dots in Fig.<ref>a). In explicit form, |Δ_h (k)| = 1.76 T_c exp(-tan^2 2θ_k/g). The gap near the cold spots becomes a bit smoother if we keep the Landau damping in χ_nem and solve the dynamical pairing problem, but Δ_h (k) still remains highly anisotropic. §.§ Specific heat We split the specific heat coefficient γ_c = C_v (T)/T into contributions from the gapped and ungapped regions of the Fermi surface: γ_c(T) = γ_c^n(T) + γ_c^s(T). The first term is γ_c^n(T) = (8 N_0 π/3)[π/4 - θ_0(T)], which at small T becomes γ^n_c (T) ≈ 4 N_0 π/(3√(g log(T_c/T))). It evolves almost discontinuously: it vanishes at T=0, but reaches 1/3 of the normal state value already at T = 0.01 T_c. We obtained γ^s_c (T) at higher T numerically and show the result for the full γ_c (T) in Fig.<ref>. We see that γ_c (T) does not jump at T_c. Instead, it increases from its normal state value as √(T_c-T), passes through a maximum at T ≈ 0.8 T_c, and nearly linearly decreases at smaller T, apparently with a finite offset at T=0. It eventually drops to zero at T=0, but only at extremely small T, as 1/(log(T_c/T))^1/2. We emphasize that γ_c (T) is a function of the single parameter T/T_c, i.e., the smallness of the range where γ_c (T) drops is purely numerical. §.§ Away from a nematic QCP    At a finite δ, the s-wave, d-wave, and p-wave solutions for the gap function are no longer degenerate. If we keep only the interaction induced by U^d_he (the second term in <ref>), we find that the s-wave solution has the lowest condensation energy. We show the eigenvalues λ_s,p,d and the gap functions in Fig.<ref>a,b. The gap function is smooth and finite for all angles, but remains strongly anisotropic up to sizable δ/k_F. We define the gap anisotropy α as the ratio of the gap function on the hole Fermi surface at θ=π/4 (the k_x-k_y axis) to that at θ=0 (the k_x axis): α=Δ_h(π/4)/Δ_h(0), and show its variation with the nematic mass parameter δ in Fig.<ref>c. The specific heat coefficient γ_c (T) has a finite jump at T_c, whose magnitude increases with δ, yet the low temperature behavior remains nearly the same as at a QCP up to sizable δ/k_F (Fig.<ref>). If we consider the full pairing interaction in (<ref>), the situation may change, as the first term in (<ref>) has comparable repulsive s-wave and d-wave harmonics, but a much smaller p-wave harmonic. As a consequence, p-wave may become the leading instability. The condensation energy for a p-wave state is the lowest for the k_x + i k_y and k_x - i k_y gap functions. A selection of one of these states breaks time-reversal symmetry.
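Before comparing with experiments, we note that the arc structure at the QCP described above is easy to reproduce numerically from the local gap equation. The short sketch below is illustrative only: g and Λ are toy parameters (not fitted to FeSe), and no Landau damping is included. It solves the static local gap equation by bisection on a grid of angles and compares the resulting arc half-width with θ_0(T) = 0.5 arctan√(g log(T_c/T)); the same table of Δ_h(θ,T) is what enters a numerical evaluation of γ^s_c(T).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative, dimensionless parameters; not fitted to FeSe.
g, Lam = 0.4, 1.0
Tc = 1.13 * Lam * np.exp(-1.0 / g)

def rhs(delta, theta, T):
    """Right-hand side of the local gap equation at angle theta."""
    geff = g * np.cos(2 * theta) ** 2
    integrand = lambda x: np.tanh(np.sqrt(x**2 + delta**2) / (2 * T)) / np.sqrt(x**2 + delta**2)
    val, _ = quad(integrand, 0.0, Lam)
    return geff * val

def gap(theta, T):
    """Solve 1 = rhs(Delta) by bisection; return 0 if no solution exists."""
    if rhs(1e-12, theta, T) < 1.0:   # even an infinitesimal gap is not supported
        return 0.0
    return brentq(lambda d: rhs(d, theta, T) - 1.0, 1e-12, 10 * Lam)

T = 0.5 * Tc
thetas = np.linspace(0.0, np.pi / 4, 200)
gaps = np.array([gap(th, T) for th in thetas])

# Numerical arc half-width vs. the analytic estimate quoted in the text.
theta0_num = thetas[gaps > 0].max() if np.any(gaps > 0) else 0.0
theta0_analytic = 0.5 * np.arctan(np.sqrt(g * np.log(Tc / T)))
print(theta0_num, theta0_analytic)   # the two agree to grid resolution
```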
§.§ Comparison with experimentsWe argued in this work is thatpairing indoped FeSenear a nematic QCP is mediated by nematic fluctuations rather than by spin fluctuations.This is generally consistent with the observations in Ref. <cit.> of two distinct pairing states in pure FeSe and in doped FeSe_1-xS_x and FeSe_1-xTe_x at x ≥ x_c.More specifically, one can distinguish between magnetic and nematic pairing scenarios by measuring the angular dependence of the gap along the hole d_xz/d_yz pocket.We argued that a nematic-mediated pairing gives rise to an anisotropic gap, with maxima along k_x and k_y directions.Within spin-fluctuation scenario,the gapΔ_h (k) = a + b cos4 θis the largest along the diagonaldirectionsk_x ± k_y (b <0, see e.g., Ref. <cit.>). The angular dependence of the gap in pure and doped FeSe has been extracted from ARPES and STM data inRef. <cit.>.For pure and weakly doped FeSe, an extraction of cos4θ dependence is complicated because superconductivity co-exists with long-range nematic order, in which case the gap additionally has cos2 θ term due to nematicity-induced mixing of s-wave and d-wave components <cit.>.Still, the fits of the ARPES data in Refs. <cit.> yielded a negative b, consistent with spin-fluctuation scenario. A negative bis also consistent with the flattening of the gap on thehole pocket near θ =π, observed in the STM study <cit.>. A negative prefactor for cos4 θ term was also reported for Fe-pnictides, e.g.,Ba_0.24K_0.76Fe_2As_2, Ref. <cit.>. In contrast, STM data for tetragonal FeSe_0.45Te_0.55 (Ref.<cit.>) found the maximal gap along k_x and k_y directions, consistent with the pairing by nematic fluctuations. The gap maximum along k_y has also been reported in a recent laser ARPES study ofFeSe_0.78S_0.22 (Ref. <cit.>). Further, recentSTM data on FeSe_1-xS_x(Ref. <cit.> detected a shift of the gap maxima from k_x = ± k_y for x <0.17 to k_x and k_y for x > 0.17, and STM data for FeSe_0.81S_0.19 (Ref. <cit.>) showed clear gap maxima alongk_x and k_y. Taken together, these data strongly support the idea about different pairing mechanisms in pure FeSe and in doped ones at x ≥ x_c, and are consistent with the change of the pairing glue from spin fluctuations at x <x_cto nematic fluctuations at x ≥ x_c.Next, we argued that right at a nematic QCP,the gap vanishes in the cold regions on the Fermi surface, and this leads to highly unconventional behavior of the specific heat coefficient γ_c (T)[This holds when we neglect pair hopping between hole and electron pockets.In the presence of pair hopping, the gap becomes non-zero everywhere except, possibly, special symmetry-related points. Still,in the absence of magnetism nearby, pair-hopping is a weak perturbation, and the gap in cold regions is small.]The specific heat of FeSe_1-xS_x has been measured in Refs. <cit.>. The data clearly indicate that the jump of γ_c (T) at T_c decreases with increasing x and vanishes at around x_c.At smaller T,γ_c (T)passes through a maximum at around 0.8 T_c and then decreases nearly linearly towardsapparentlya finite value at T=0.Theauthors of Ref.<cit.> argued that this behavioris notcaused by fluctuations,because residual resistivity does not exhibit a noticeable increase around x_c (Ref. <cit.>). 
Other experiments <cit.> also indicated that correlations only get weaker with increasing x.The behavior of γ_c (T) around x_c was first interpreted first as potential BCS-BEC crossover <cit.> and later as a potential evidence of an exotic pairing that creates a Bogolubov Fermi surface in the superconducting state <cit.>. We argue that the specific heat data are consistent with the nematic-mediated pairing, in which near x_cthe gap develops in the arcs near k_x and k_yandnearlyvanishes in between the arcs. This explanation is also consistent with recent observation <cit.> that superfluid density in FeSe_1-xS_x drops at x ≥ x_c, indicating that some fermions remain unpaired.Finally, recent μSR experiments <cit.> presented evidence for time-reversal symmetry breaking in FeSe.The μSR signal is present below T_cfor all x, however in FeSe_1-xTe_x it clearly increases above x_c.This raises a possibility that the superconducting state at x > x_c breaks time-reversal symmetry, at least in FeSe_1-xTe_x. Within our nematic scenario, this would indicate a p-wave pairing with k_x ± i k_y gap structure. We argued that p-wave pairing, mediated by nematic fluctuations, isa strong competitor to s^+- pairing. There is one recent data set, which we cannot explain at the moment.Laser ARPES study of FeSe_0.78S_0.22 (Ref. <cit.>) detected superconducting gap in the polarizarion of light, which covers momenta nearthe X direction, but no gap in polarization selecting momenta near Y.Taken at a face value, this dataimplies that superconducting order strongly breaks C_4 symmetry.In our nematic scenario, pure k_x (or k_y) order is possible, but has smaller condensation energy than k_x ± i k_y.More analysisis needed to resolve this issue. § DISCUSSIONIn this paper we derived an effective pairing interaction near the onset of a nematic order in a 2D two-orbital/three band system of fermions nd applied the results to doped FeSe. The model consists of a hole band, centered at Γ and made equally of d_xz and d_yzfermions, and two electron bands, centered atX and Y and made out d_yz and d_xz fermions, respectively. The nematic order is a spontaneous polarization between d_xz and d_yz orbitals, which changes sign between hole and electron pockets. We found the pairing interaction as the sum of two terms: a dressed bare interaction, which remains non-singular and repulsive, and the term, induced by intra-pocket density-density interaction U^d_he. This last term contains the square of the nematic form-factor and scales withthe nematic susceptibility,and is the dominant pairing interaction near the onset of a nematic order. We obtained the gap function and found that it is highly anisotropic with gap maxima along k_x and k_y directions.This is in variance with pairing by spin fluctuations, for which the gap has maxima along diagonal directions k_x ± k_y.Right at the nematic QCP, thegap develops in four finite arcs around k_x and k_y, while in between the arcs the original Fermi surface survives.Such a gap function, degenerate between s-wave, d-wave, and p-wave,gives rise to highly unconventional behavior of the specific heat coefficient with no jump at T_c and seemingly finite value at T=0 (the actual C_v (T)/T vanishes at T=0, but drops only at extremely low T ∼ 10^-2 T_c).In the tetragonal phase away from a QCP, the degeneracy is lifted, and there is a competition between s-wave and k_x ± i k_y, the latter breaks time-reversal symmetry.In both cases, the gap remains strongly anisotropic, with maxima along X and Y directions. 
We compared our theory with existing experiments in some details. § ACKNOWLEDGMENTSWe acknowledge with thanks useful conversations with D. Agterberg, E. Berg, P. Canfield, P. Coleman, Z. Dong, R. Fernandes, Y. Gallais, E. Gati, T. Hanaguri, P. Hirschfeld, B. Keimer,A. Klein, H. Kontani, L. Levitov, A. Pasupathi, I. Paul, A. Sacuto, J. Schmalian, T. Shibauchi, and R. Valenti. This workwas supported byU.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0014402. naturemag SUPPLEMENTAL MATERIALS §MODEL HAMILTONIANWe consider an effective two dimensional 2-orbital model Hamiltonian with two hole pockets, centered at the Γ point of the Brillouin zone(BZ) and two electron pockets, centered at X = (0, π) and Y = (π, 0) points of the BZ, respectively. The hole pockets and the corresponding hole bands are composed of d_xz and d_yz orbitals. The X pocket/band is composed of d_yz and the Y pocket/band is composed of d_xz orbitals respectively. For simplicity, we neglect the d_xy orbital contribution to the X and Y electron pockets. We also neglect the spin-orbit coupling on the band dispersion. The kinetic energy of the model Hamiltonian near the Γ, Y and X points are captured by H_Kin=H_Γ+H_Y+H_X,withH_Γ =∑_,σψ_Γ,,σ^†[(μ_h-^22m_h )τ_0-b2^2 cos2θ_ τ_3+ b2^2 sin2θ_ τ_1]ψ_Γ,,σ,H_Y =∑_,σξ_Y, f_1,,σ^† f_1,,σ,H_X =∑_,σξ_X, f_2,,σ^† f_2,,σ.Here, ψ_Γ,,σ^†=(d^†_xz,,σ,d^†_yz,,σ) where d^†_i,,σ creates a fermion with the orbital index i(xz,yz), momentum (measured from Γ) and spin index σ(± 1). θ_ is the polar angle for momentum , measured from the Γ-X-direction in the anti-clockwise direction. τ_i's are the conventional Pauli matrices with τ_0 being the 2× 2 identity matrix. The parameters μ_h,m_h and b>0 are taken from the ARPES data for FeSe at k_z=π <cit.>. Diagonalizing Eq. (<ref>), we find two bands called outer(h) and inner(H) hole pockets with the dispersion relationξ_h,H=μ_h- ^2/2 m_h± b ^2. The fermionic band operators h,H are the linear combination of the orbital operators,h_,σ =cosθ_d_yz,,σ+sinθ_d_xz,,σH_,σ =sinθ_d_yz,,σ-cosθ_d_xz,,σ.On the other hand,f^†_1, and f^†_2, creates fermion with orbital d_xz and d_yz and momentumnear Y- and X- points respectively. The corresponding energy dispersions for the electron pockets are taken of the formξ_Y()=k_x^22 m_x+k_y^22 m_y-μ_e ξ_X()=k_x^22 m_y+k_y^22 m_x-μ_e,where m_x and m_y are fitting parameters from the ARPES data. 
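As a quick cross-check of the band content used throughout, the following numpy sketch diagonalizes the 2×2 Γ-point block written above and verifies that the upper (outer) hole band is the combination cosθ_k d_yz + sinθ_k d_xz quoted in the text. The numerical values of μ_h, m_h and b below are placeholders, not the ARPES-fitted parameters.

```python
import numpy as np

# Placeholder parameters (not the ARPES-fitted values).
mu_h, m_h, b = 0.02, 1.0, 0.5

def h_gamma(kx, ky):
    """2x2 Gamma-point block in the (d_xz, d_yz) basis, as written above."""
    k2, theta = kx**2 + ky**2, np.arctan2(ky, kx)
    t0, t1, t3 = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])
    return ((mu_h - k2 / (2 * m_h)) * t0
            - 0.5 * b * k2 * np.cos(2 * theta) * t3
            + 0.5 * b * k2 * np.sin(2 * theta) * t1)

kx, ky = 0.3, 0.2
theta = np.arctan2(ky, kx)
evals, evecs = np.linalg.eigh(h_gamma(kx, ky))

# The upper band should be cos(theta) d_yz + sin(theta) d_xz, i.e. the
# eigenvector (sin(theta), cos(theta)) in the (d_xz, d_yz) basis.
v = evecs[:, np.argmax(evals)]
v = v * np.sign(v[1])                     # fix the overall sign for comparison
print(np.allclose(v, [np.sin(theta), np.cos(theta)]))   # True
print(evals)                              # the two hole-band energies at this k
```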
In the orbital basis, the interaction part of the Hamiltonian involves 14 distinct C_4 symmetric interactions between the low energy fermions near the Γ, X and Y pockets,H_int =U_42∑[d_xz,σ^† d_xz,σd_xz,σ'^† d_xz,σ'+d_yz,σ^† d_yz,σd_yz,σ'^† d_yz,σ']+Ũ_4 ∑d_xz,σ^† d_xz,σd_yz,σ'^† d_yz,σ' +Ũ̃_4 ∑d_xz,σ^† d_yz,σd_yz,σ'^† d_xz,σ'+U̅_42∑[d_xz,σ^† d_yz,σd_xz,σ'^† d_yz,σ'+d_yz,σ^† d_xz,σd_yz,σ'^† d_xz,σ']+U_1 ∑[f_1,σ^† f_1,σ d_xz,σ'^† d_xz,σ'+f_2,σ^† f_2,σ d_yz,σ'^† d_yz,σ']+U̅_1 ∑[f_1,σ^† f_1,σ d_yz,σ'^† d_yz,σ'+f_2,σ^† f_2,σ d_xz,σ'^† d_xz,σ']+ U_2 ∑[ f_1,σ^† d_xz,σ d_xz,σ'^† f_1,σ'+ f_2,σ^† d_yz,σ d_yz,σ'^† f_2,σ'] + U̅_2 ∑[ f_1,σ^† d_yz,σ d_yz,σ'^† f_1,σ'+ f_2,σ^† d_xz,σ d_xz,σ'^† f_2,σ']+U_32∑[ f_1,σ^† d_xz,σ f_1,σ'^† d_xz,σ'+f_2,σ^† d_yz,σ f_2,σ'^† d_yz,σ']+U̅_32∑[ f_1,σ^† d_yz,σ f_1,σ'^† d_yz,σ'+ f_2,σ^† d_xz,σ f_2,σ'^† d_xz,σ']+U_52∑[f_1,σ^† f_1,σf_1,σ'^† f_1,σ'+f_2,σ^† f_2,σf_2,σ'^† f_2,σ'] + Ũ_5 ∑f_1,σ^† f_1,σf_2,σ'^† f_2,σ'++Ũ̃_5 ∑f_1,σ^† f_2,σf_2,σ'^† f_1,σ' +U̅_52∑[f_1,σ^† f_2,σf_1,σ'^† f_2,σ'+f_2,σ^† f_1,σf_2,σ'^† f_1,σ'].Here, we omit the momentum index in the fermionic operators for the simplicity of the notation and "∑" represents summation over the momenta with momentum conservation. In terms of bare Hubbard-Hund interactions, these interactions are U_1 =U_2=U_3=U_4=U_5=U, U̅_1 =Ũ_4=Ũ_5=U', U̅_2 =Ũ̃_4=Ũ̃_5=J, U̅_3 =U̅_̅4̅=U̅_5=J',with U,U',J,J' are intra-orbital Hubbard repulsion, inter-orbital Hubbard repulsion, inter-orbital Hund exchange and inter-orbita Hund exchange term known as pair hopping term respectively.We convert these interactions in the band basis(for X-, Y- pockets Eq. (<ref>) is already in the band basis) using the Eqs. (<ref>-<ref>) and write the interaction Hamiltonian as H_int=∑_i,j∈{h,X,Y}H_i,j+H_otherHere, H_i,j is the interaction between the pockets i and jϵ{h,Y,X} and takes the formH_h,h =12∑_,, h_,σ^† h_+,σh_,σ'^† h_-,σ' V_h,h^den(,+;,-), H_h,Y = ∑_,, f_1,,σ^† f_1,+,σ h_,σ'^† h_-,σ' V_h,Y^den(,-)+∑_,,f_1,,σ^† h_+,σ f_1,,σ'^† h_-,σ' V_h,Y^ph(+,-)+∑_,,f_1,,σ^† h_+,σ h_,σ'^† f_1,-,σ' V_h,Y^ex(+,), H_h,X = ∑_,, f_2,,σ^† f_2,+,σ h_,σ'^† h_-,σ' V_h,X^den(,-)+∑_,,f_2,,σ^† h_+,σ f_2,,σ'^† h_-,σ' V_h,X^ph(+,-)+∑_,,f_2,,σ^† h_+,σ h_,σ'^† f_2,-,σ' V_h,X^ex(+,).The interaction within the electron pockets labeled as H_X,X, H_Y,Y and H_X,Y are already in the band basis and captured by thelast 4 terms of Eq.(<ref>). We group interactions into density-density, exchange and pair-hopping by labeling them "den", "ex" and "ph" respectively. Since the electron pockets are assumed to be mono-orbital, in Eqs.(<ref>-<ref>), interaction strengths are momentum dependent only for the hole pockets in the follwoing wayV_h,h^den=U_4{sinθ_sinθ_+sinθ_sinθ_-+cosθ_cosθ_+cosθ_cosθ_-}+Ũ_4{sinθ_sinθ_+cosθ_cosθ_-+cosθ_cosθ_+sinθ_sinθ_-}+Ũ̃_4{sinθ_cosθ_+cosθ_sinθ_-+cosθ_sinθ_+sinθ_cosθ_-}+ U̅_4{sinθ_cosθ_+sinθ_cosθ_-+cosθ_sinθ_+cosθ_sinθ_-} ,V_h,Y^den = U_1sinθ_sinθ_-+U̅_1 cosθ_cosθ_-,V^h,X_den = U_1cosθ_cosθ_-+U̅_1 sinθ_sinθ_-,V_h,Y^ph = U_32 sinθ_+sinθ_-+U̅_32 cosθ_+cosθ_-, V^h,X_ph = U_32 cosθ_+cosθ_-+U̅_32 sinθ_+sinθ_-, V_h,Y^ex =U_2sinθ_+sinθ_+U̅_2 cosθ_+cosθ_,V_h,X^ex =U_2cosθ_+cosθ_+U̅_2 sinθ_+sinθ_.On the other hand, all other interactions which involve the inner hole band fermions are contained in H_other term. 
We ignore the effects of this part on the nematic criticality and the superconductivity in the rest of the paper because of the two reasons: (a) unlike the outer hole pocket, inner hole pocket is quite small in size and can even disappear below the Fermi energy in the presence of a small spin-orbit coupling or external strain, and (b) the presence of the inner hole pocket does not change the main results of our paper qualitatively, but complicates the calculation unnecessarily. § ORBITAL ORDER INSTABILITY We define the orbital/nematic order as the density difference between the d_xz and d_yz fermions. A non-zero value for this order parameter breaks the Z_2 symmetry of the system. In the band language, the order parameter translates to two zero momentum d-wave components: one for the hole pocket ϕ_h= ∑_,σ⟨ d_yz,,σ^† d_yz,,σ⟩-⟨ d_xz,σ^† d_xz,,σ⟩=∑_,σ⟨ h_,σ^† h_,σ⟩cos 2 θ_ and another for the electron pockets ϕ_e= ∑_⟨ f_2,,σ^† f_2,,σ⟩-⟨ f_1,σ^† f_1,,σ⟩ = ϕ_x-ϕ_y. Here, ϕ_x and ϕ_y are the fermionic density in the X- and Y- pockets respectively.To discuss the nematic instability, we have introduced an infinitesimally small Z_2 symmetry breaking external perturbation to the interaction Hamiltonian in the form of ϕ_0cos2θ_h^†_ h_ and define nematic susceptibility as ϕ_h=χ_nem ϕ_0. The onset of the nematic order is signaled by the divergence of its susceptibility, χ_nem. We represent the set of self-consistent equations for ϕ_h, ϕ_x and ϕ_y in Fig. <ref>.They are obtained by adding up the Hartree and Fock diagrams for different bands and turn out to be ϕ_h cos 2θ_=ϕ_0 cos 2θ_+ ϕ_h_p G_p^h G_p^h cos 2θ_ V_h,h^den(,;,) -2 ϕ_h _p G_p^h G_p^hcos2θ_ V_h,h^den(,;,) -2 ϕ_y _p G_p^Y G_p^YV_h,Y^den(,;,)-2 ϕ_x _p G_p^X G_p^XV_h,X^den(,;,),ϕ_y =ϕ_y _p G^Y_p G^Y_pU_5-2 ϕ_y_p G^Y_p G^Y_p U_5-2 ϕ_h _p G^h_p G^h_pcos2θV_h,Y^den(,;,)-2 ϕ_x _p G^X_p G^X_pŨ_5, ϕ_x =ϕ_x _p G^X_p G^X_pU_5-2 ϕ_x_p G^Y_p G^Y_pU_5 -2 ϕ_h _p G^h_p G^h_pcos2θV_h,X^den(,;,)-2 ϕ_y _p G^Y_p G^Y_pŨ_5.Here G^i(,ω)=1/i ω-ξ_i() is the Green's function for the pocket i with 4- momentum p=(,ω). _p stands for T ∑_Ω_nd^2(2π)^2, where Ω_n is the fermionic Matsubara frequency(Ω_n=(2n+1)π T),is the lattice momentum and T is the temperature. Because orbital order is a zero momentum order, we only keep the low momentum transfer interaction like density interaction and ignore large momentum transfer interaction like exchange interaction in Eqs.(<ref>-<ref>). Since nematic orders ϕ_h and ϕ_e are d-wave order, only the d-wave component of the interactions( proportional to cos2θ terms) will contribute in Eqs. (<ref>-<ref>).Using Eqs. (<ref>-<ref>), one finds the s- and d- wave components of the interactions V_h,h^den(,;,) =U_4+Ũ̃_42+U_4-Ũ̃_42cos2θ_cos2θ_+Ũ_4+U̅_42sin2θ_ sin2θ_,V_h,h^den(,;,) =U_4+Ũ_42 +U_4-Ũ_42cos2θ_cos2θ_+Ũ̃_4+U̅_42sin2θ_ sin2θ_,V_h,Y^den(,;,) =U_1+U̅_12-U_1-U̅_12cos2θ_,V_h,X^den(,;,) =U_1+U̅_12+U_1-U̅_12cos2θ_. Combining all these terms for Eqs. (<ref>-<ref>), we get ϕ_h = ϕ_0-ϕ_hU_h^dΠ_h^d-ϕ_eU_he Π_e, ϕ_e =-ϕ_eU_e^d Π_e-2 ϕ_h U_heΠ_h^d.Here,U_h^d=U_4-2Ũ_4+Ũ̃_42 and U_e^d =U_5-2 Ũ_5 are the effective d-wave intra-pockets interactions for hole and electron pockets respectively. U_he= (U_1-U̅_1) is the effective d-wave inter-pocket interaction for the nematic order. We define the effective polarization bubbles with vertex factor cos2θ for hole pocket and 1 for electron pockets asΠ_h^d =-_p G^h_p G^h_p cos^22θ_, Π_e =-_p G^e_p G^e_prespectively. Within our convention, the polarization bubbles are positive Π_h^d,Π_e>0. Eq. 
(<ref>) is found from two other equations, ϕ_y=-ϕ_y U_5Π_e+ϕ_h U_he Π_h^d-2 ϕ_x Ũ_5Π_e andϕ_x=-ϕ_x U_5Π_e-ϕ_h U_he Π_h^d-2 ϕ_y Ũ_5Π_e. Combining Eqs. (<ref>) and (<ref>)), we find the expression for χ_nem,χ_nem=1(1+U_h^d Π_h^d)(1+U_e^d Π_e)-2U_he^2 Π_h^d Π_e=1Z. As stated before the onset of the orbital order is set when Z=0. To see what kind of orbital order (measured by the relative sign between ϕ_h and ϕ_e) is produced at the transition point, we need to solve Eqs. (<ref>) and (<ref>)) without the source term ϕ_0. At the bare interaction level(<ref>-<ref>), we find no solution exists for J>U/5 while only sign preserving solution(called d^++ with sgn(ϕ_h)=sgn(ϕ_e)) exists when J<U/5. To see this we need to include the effect of the large momentum transfer interaction(U_2,U̅_2,Ũ̃_5) as at the bare level they are equal in magnitude to the low momentum transfer density interaction and can't be ignored. The calculation is straightforward <cit.> and makes both the effective intra-pocket and inter-pocket interactions comparable: U_h^d=U_he=U_e^d/2=(5 J-U)/2. A possibility of another solution called d^± orbital order with sgn(ϕ_h)=-sgn(ϕ_e) can exist if we consider U_h^d, U_e^d and U_he differ from the bare values such that U_h^d U_e^d≠ 2 U^2_he and U_he>0. This can happen as a consequence of the renormalization coming from the high energy fermions. In experiments, d^± orbital order is found <cit.>. § BARE PAIRING INTERACTIONIn the superconducting channel, the bare pairing interaction from Eq. (<ref>) readsH_pair= V_0^h2 h_,↑^† h_-,↓^† h_-,↓ h_,↑+V_0^e2∑_i=1^2 f_i,,↑^† f_i,-,↓^† f_i,-,↓ f_i,↑+V^h,e_s h_,↑^† h_-,↓^†(f_1,-,↓ f_1,↑+f_2,-,↓ f_2,↑) +V^h,e_d h_,↑^† h_-,↓^†(f_2,-,↓ f_2,↑-f_1,-,↓ f_1,↑) cos 2θ_.Here,V^h_0 and V_0^e are the intra-pocket pairing interaction for the hole and electron pockets(same for X- and Y-) respectively with the expression V^h_0=U_4+U̅_42+U_4-U̅_42cos 2θ_cos 2 θ_+Ũ_4+Ũ̃_42sin 2θ_sin 2 θ_ and V^e_0=U_5. On the other hand, V^h,e_s and V^h,e_d are the s- and d- wave components of the inter-pocket pairing interactions respectively with the expression V^h,e_s=U_3+U̅_32 and V^h,e_d=U_3-U̅_32. We neglect the pairing interaction between the electron pockets. Using Eqs. (<ref>-<ref>), one finds the pairing interactions are repulsive and can only lead to a superconducting instability if the inter-pocket repulsion overcomes intra-pocket repulsion. We define the superconducting gap on the hole pocket as Δ_h, on X- pocket as Δ_x and on Y-pocket as Δ_y. In the tetragonal phase, s- and d- wave components of the superconducting gap equations decouple. For the s- wave component(labeled by asuffix s) the set of linearized equations for the gap is obtained straightforwardly and reads as[ Δ^s_h; Δ^s_e ]=L [-u^s_h -u_he^s; -2 u_he^s-u_e ][ Δ^s_h; Δ^s_e ].Here, Δ^s_e=Δ_x+Δ_y, u_h^s=N_0 U_4+U̅_42, u_he^s=N_0 U_3+U̅_32, u_e=N_0 U_5, N_0 is the density of states, L=log1.13 ΛT_c and Λ is the upper cut-off for the pairing. Eq. (<ref>) has a solution only if the largest eigen-value λ_s=-(u^e+u_h^s)+√((u^e-u_h^s)^2+8 (u_he^s)^2)2 is positive. For simplicity, if we assume all intra-pocket pairing interactions are same u_s^h=u^e_s=u, then this criteria translates to u_he^s>u√(2). For u_he^s<u√(2), there is no s- wave superconducting instability. For the d-wave, the set of linearized equation become[ Δ^d_h; Δ^d_e ]=L [ -u_h^d/2-u_he^d; - u_he^d -u_e ][ Δ^s_h; Δ^s_e ].Here, Δ^d_e=Δ_x-Δ_y, u^d_h=N_0 U_4-U̅_42, u_he^d=N_0 U_3-U̅_32. Eq. 
(<ref>) has a solution only if the largest eigen-value λ_d=-(u_e+u_h^d/2)+√((u_e-u_h^d/2)^2+4 (u_he^d)^2)2 is positive. For simplicity, if we assume u_h^d=u_h^s=u_e=u and u_he^s=u_he^d=v, then λ_s=-u+√(2v^2)>λ_d=-3u+√(u^2+16v^2)/4>0 when v>u/√(2): s- wave will be the leading instability. In the next section, we show that near a nematic instability order parameter fluctuations renormalize the intra-pocket pairing interactions u_s^h, u^e_s attractive and scale with the nematic susceptibility χ_nem which makes λ always positive.§ RENORMALIZATION OF THE PAIRING INTERACTION NEAR THE NEMATIC TRANSITION POINT In Sec.<ref>, We find that the bare intra-pocket pairing interactions for both the hole and electron pockets, V^h_0 and V^e_0 respectively are repulsive, and in order to have superconductivity, inter-pocket repulsion, V^h,e_s,d has to overcome the intra-pocket repulsion. In this section, we derive the corrections to the intra-pocket and inter-pocket pairing interaction due to the proximity of a nematic order. We show that the dressed intra-pocket pairing interactions labeled as V^h_eff(for hole) andV^e_eff (for electron pockets) get an attractive singular component proportional to the nematic susceptibility χ_nem besides its regular repulsive part near the nematic order. On the other hand, the large momentum transfer(π) type interaction pair hopping/inter-pocket pairing interaction is mostly unaffected by the low momentum nematic fluctuations.§.§ Intra-pocket pairing interactionWe first focus on the pairing interaction on the hole pocket describing scattering between fermion pair states (,α;-,γ) → (,β;-,δ) where , are the momentum and α,β,γ,δ are the spin components of the incoming and outgoing fermions. Because of the orbital degrees of freedom for the hole pocket fermions, it is convenient to cast Eqs. (<ref>-<ref>) in a slightly different formatH_hh=12∑_,, b^T(,+)Γ_hh b(,-) h_,σ^† h_+,σh_,σ'^† h_-,σ' H_h,X=∑_,, d^TΓ_hxb(,-) f_2,,σ^† f_2,+,σ h_,σ'^† h_-,σ' H_h,Y=∑_,, d^TΓ_hyb(,-) f_1,,σ^† f_1,+,σ h_,σ'^† h_-,σ'.Here, b and d are the bare vertex for the intra-hole(Γ_hh) and inter-pocket(Γ_hx and Γ_hy) density-density interactions defined asb^T(,+) =[ cosθ_cosθ_+ sinθ_sinθ_+ cosθ_sinθ_+ sinθ_cosθ_+ ],d^T =[ 1 1 0 0 ] ,and Γ_hh=[u_4ũ_̃4̃00;ũ_̃4̃u_400;00 u̅_̅4̅ ũ̃_̃̃̃4̃̃̃;00 ũ̃_̃̃̃4̃̃̃ u̅_̅4̅ ]Γ_hx= [U_1000;0 U̅_100;0000;0000 ]Γ_hy= [ U̅_1000;0U_100;0000;0000 ] .The lower 2× 2 diagonal block of Γ_hx and Γ_hy is zero as a consequence of the orbital preserving interaction between hole and electron pockets. To simplify the presentation, we first ignore the inter-electron pocket density interaction Ũ_5 in our microscopic derivation of the dressed interaction, and later give our full result when one includes it.§.§.§ Singular part of the pairing interaction We argue below that the singular part of the renormalized intra-pocket pairing interaction, V^h_eff near the onset of the nematic order is captured by the Dyson equation illustrated in Fig.<ref> and we call it V^h_singular. Each shaded boxes in Fig.<ref> labeled as B_h, B_x and B_y are an infinite series of bubble diagrams(defined below) made of hole, X- and Y- pocket fermions respectively as illustrated in Fig.<ref>. Each bubble is dressed by an insertion of a ladder series of interactions at one of the vertices labeled as γ_h, γ_x and γ_y for the hole, X- and Y- pocket bubbles respectively shown in Fig.<ref>. 
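For completeness, the threshold quoted above follows from a one-line eigenvalue computation. In the simplified case u_h^s = u_e = u and u_he^s = v, the s-wave matrix reduces to [ -u  -v; -2v  -u ], whose characteristic equation (u+λ)^2 - 2 v^2 = 0 gives λ = -u ± √2 v. The larger root, λ_s = -u + √2 v, is positive only for v > u/√(2), which is the condition for the s-wave instability stated above; the same computation for the general matrix reproduces the expression for λ_s quoted earlier.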
We also dress the bare external vertex b by including a ladder series of wine-glass (second diagram of the right hand side of the equation of Fig.<ref>) and bubble diagram (third diagram of the right hand side of the equation of Fig.<ref>) shown in Fig.<ref> and label it as γ̃_h. With these preliminary definitions, the computation of the singular part of the effective pairing interaction for the hole pocket fermions as depicted in Fig.<ref> goes as-V^h_singular(,)=γ̃^T_h(,)[S+S ∘ B_h ∘ S+⋯]·γ̃_h(-,-), = γ̃^T_h(,)· S ∘11-B_h∘ S·γ̃_h(-,-),where "⋯" represents thehigher order terms of the series and the spin component of each diagram is δ_αβδ_γδ. S is the sum of the first two diagrams of the right hand side of Fig.<ref>(without the external vertex corrections represented by the black square) which translates toS=Γ_hy· d· d^T·Γ_hy B_y+Γ_hx· d· d^T·Γ_hx B_x.In Eqs. (<ref>-<ref>), for convenience we omit the momentum(q=k-p) dependence of S and B_h, and use "·" to denote the product between a matrix and vector and "∘" to denote the product between two matrices. Because of the orbital structure of the hole pocket, γ̃_h, γ_h, B_h and S are matrix in nature. Below we compute the analytical expression for S,γ_h, γ̃_h, B_h, B_x and B_y to give the final form of Eq. (<ref>). We first calculate γ_h, the vertex correction in the bubble with an insertion of the ladder series as shown in Fig.<ref>. A straightforward calculation for γ_h gives γ_h(,+)=b(,+)- _p b(, +)G^h_p G^h_p+q b^T(,)·Γ_hh· b(+,+)+⋯where ⋯ represents higher order terms in the ladder series. We define_p b(, +)G^h_p G^h_p+qb^T(,)·Γ_hh· b(+,+)=-α(q)· b(,+).Here α(q) is a 4× 4 matrix defined belowα= [ U_4Π_cc+Ũ̃_4 Π_m U_4Π_m+Ũ̃_4 Π_ss00; U_4Π_m+Ũ̃_4 Π_cc U_4Π_ss+Ũ̃_4 Π_m00;00Ũ_4Π_cs+U̅_4Π_mŨ_4Π_m+U̅_4Π_cs;00Ũ_4Π_m+U̅_4Π_csŨ_4Π_cs+U̅_4Π_m ]with Π_cc(q) =-_l G^h_l G^h_l+qcos^2θ_cos^2θ_+, Π_ss(q)=-_l G^h_l G^h_l+qsin^2θ_sin^2θ_+ Π_m(q)=-_l G^h_l G^h_l+qcosθ_sinθ_cosθ_+sinθ_+Π_cs(q)=-_l G^h_l G^h_l+qcos^2 θ_sin^2θ_+In Eq. (<ref>) we omit the q dependence of α and Π's for the simplicity of the representation. We put a negative sign in Eq. (<ref>) to make α numerically positive. For q=0, Π_cc=Π_ss and Π_m=Π_cs. After summing up the ladder series in Eq. (<ref>) we getγ_h(,+) =11-α· b(,+). For the electron pockets, the vertex corrections for X- and Y- pockets are same γ_x=γ_y=γ_e and equal to γ_e=11-Π_e U_5,where Π_e is defined in Eq. (<ref>). Next we find the expression for the shaded boxes B_h, B_x and B_y which are the ladder series of dressed bubbles, shown in Fig.<ref>. For the electron pockets, the insertion of one bubble without the vertex correction makes a contribution of 2 Π_e. A factor of 2 is coming from the spin. Because we define the polarization bubble (Π_e) with an overall minus sign, it makes the overall expression for B_i positive by canceling the effect of -1 coming from the fermionic loop. 
Summing up the bubbles with the vertex correction γ_e upto infinite order gives B_e =2 Π_e γ_e-(2Π_e γ_e)^2 U_5+⋯ =2Π_e1+U_5 Π_e,where B_x=B_y=B_e.For the hole pocket, the inclusion of a bubble without the vertex correction(γ_h) makes a contribution of(see first diagram for B_h in Fig.<ref>)M(q)= -2_l b(,+) G_l G_l+qb^T(+,),where M(q) is a 4× 4 matrix defined below M=-2 _l G_l G_l+q × [cos^2θ_cos^2θ_+ cosθ_sinθ_cosθ_+sinθ_+ cosθ_sinθ_cos^2θ_+ cos^2 θ_cosθ_+sinθ_+; cosθ_sinθ_cosθ_+sinθ_+sin^2θ_sin^2θ_+sin^2θ_cosθ_+sinθ_+ cosθ_sinθ_sin^2θ_+; cos^2 θ_cosθ_+sinθ_+ cosθ_sinθ_sin^2θ_+ cosθ_sinθ_cosθ_+sinθ_+ cos^2 θ_sin^2θ_+; cosθ_sinθ_cos^2θ_+sin^2θ_cosθ_+sinθ_+sin^2θ_cos^2θ_+ cosθ_sinθ_cosθ_+sinθ_+ ]. It is easy to check that in the presence of any odd number of "sine" term, the integration in Eq. (<ref>) will give zero because of the parity symmetry of the underlying Hamiltonian. As a result, M(q) becomesM=2 [ Π_ccΠ_m00;Π_m Π_ss00;00Π_m Π_cs;00 Π_csΠ_m ].Using Eq. (<ref>) for the vertex correction γ_h^T=b^T ·11-α^T, we sum up the bubbles upto infinite orders and get the expression for B_h as B_h =M∘11-α^T-M∘11-α^T∘Γ_hh∘ M∘11-α^T+⋯=M∘11-α^T∘11+Γ_hh∘ M∘11-α^T.Next we calculate the correction in the external vertex defined as γ̃_h and depicted in Fig.<ref>. Using Eqs. (<ref>) and (<ref>), we find the expression for γ̃_h γ̃_h(,+) =b(,+)+α∘ b(,+)-M∘Γ_hh·b(,+)+⋯=11-α+M ∘Γ_hh· b(,+) Combining Eqs.(<ref>,<ref>,<ref>), we find and further simplify the expression for V^h_singular -V^h_singular =b^T·11-α̂^T+Γ_hh∘ M∘ S ∘11-B_h∘ S∘11-α∘ +M ∘Γ_hh· b=b^T·11-α̂^T+Γ_hh∘ M∘[S^-1-B_h]^-1∘11-α∘ +M ∘Γ_hh· b=b^T·[1-α̂^T+Γ_hh∘ M]^-1[S^-1-M ∘(1-α^T+Γ_hh∘ M)^-1]^-1∘11-α∘ +M ∘Γ_hh· b=b^T·[S^-1(1-α^T+ Γ_hh∘ M )-M ]^-1∘11-α∘ +M ∘Γ_hh· b=b^T·11-α^T+ Γ_hh∘ M - S ∘ M∘ S∘11-α∘ +M ∘Γ_hh· bHere, for simplicity of the presentation we suppress the momentum dependence of V^h_singular,α, M and the vertices b and b^T. Using Eqs. (<ref>, <ref>) we find the expression for S asS=2 Π_e1+U_5 Π_e[ U_1^2+U̅_1^2 2 U_1 U̅_100; 2 U_1 U̅_1 U_1^2+U̅_1^200;0000;0000 ]To understand what qualitative results Eq. (<ref>) yields, we can calculate order by order in inter-pocket interaction S while ignoring all intra-pocket interaction. To simplify the results we assume the external momenta are same p=k. To first order in S, direct computation of Eq. (<ref>) produce Π_e {(U_1+U̅_1)^2+(U_1-U̅_1)^2 cos^22θ_k} while the second order produces 2 Π_e^2 {(U_1+U̅_1)^4Π_h^s+ (U_1-U̅_1)^4Π_h^dcos^22θ_k}. Π_h^s is the polarization bubble defined in the same way as Π_h^d, but with the vertex form factor 1.Π_h^s=-_p G^h_p G^h_p.Finding the pattern, we can add all the higher order terms and find-V^h_singular= Π_e (U_1-U̅_1)^21-2 Π_h^d Π_e (U_1-U̅_1)^2cos^22θ_+Π_e (U_1+U̅_1)^21-2 Π_h^s Π_e (U_1+U̅_1)^2. Identifying with Eq. (<ref>), we find the first term of Eq. (<ref>) is nothing but the nematic susceptibility χ_nem with a prefactor proportional to (U_1-U̅_1)^2 cos^22θ_ when one ignores the intra-pocket density interactions. The second term is non-singular asit is not related to any instability and can be ignored. Next we will include intra-pocket interactions and calculate Eq.(<ref>) exactly. To make the flow of the calculation simple, we introduce a matrix function Q(x) as Q(x)=1-α^T + Γ_hh∘ M- xS ∘ M,such that Eq. (<ref>) can be written as(again suppressing the momentum dependence to simplify the presentation)-V^h_singular= b^T ·[Q(1)]^-1∘ S ∘[Q^T(0)]^-1· b. 
Since, the lower 2× 2 diagonal block of S is zero, to compute the effective interaction we need to compute only the upper block of Q(x). Using Eqs. (<ref>,<ref>,<ref>,<ref>), we find the exact expression for Q(x),Q(x)=[ A_11(x) A_12(x); A_21(x) A_22(x) ],with A_11(x) =1+U_4 Π_cc+(2 Ũ_4-Ũ̃_4) Π_m-x2 Π_eB[(U_1+U̅_1)^2 (Π_cc+Π_m)+(U_1-U̅_1)^2 (Π_cc-Π_m)]A_22(x) =1+U_4 Π_ss+(2 Ũ_4-Ũ̃_4) Π_m-x2 Π_eB[U_1+U̅_1)^2 (Π_ss+Π_m)+(U_1-U̅_1)^2 (Π_ss-Π_m)]A_12(x) =U_4 Π_m+(2 Ũ_4-Ũ̃_4 ) Π_ss-x2 Π_eB[(U_1+U̅_1)^2 (Π_ss+Π_m)+(U_1-U̅_1)^2 (Π_ss-Π_m)]A_21(x) =U_4 Π_m+(2 Ũ_4-Ũ̃_4 ) Π_cc-x2 Π_eB[(U_1+U̅_1)^2 (Π_cc+Π_m)+(U_1-U̅_1)^2 (Π_cc-Π_m)] where B=1+U_5 Π_e. Using the expression of Q(x), the effective pairing interaction in terms of A_i,j becomes -V^h_singular= 1Det[Q(1)]1Det[Q(0)b^T ·[A_22(1) -A_12(1); -A_21(1)A_11(1) ]∘ S ∘[A_22(0) -A_12(0); -A_21(0)A_11(0) ]· b,where "Det" represents the determinant. We express the form of determinant for Q(x) using Eqs. (<ref>-<ref>) as a power series in B with coefficients Z_i,Det[Q(x)]= Z_1+x (Z_2B+Z_3B^2)withZ_1 =(1+U_h^d Π_h^d)(1+U_h^s Π_h^s)-(Π_cc-Π_ss)^2 U_h^dU_h^s,Z_2= -2 Π_e (U_1+U̅_1)^2 Π_h^s (1+U_h^d Π_h^d)-2 Π_e (U_1-U̅_1)^2 Π_h^d (1+U_h^s Π_h^s)+ 2 Π_e (Π_cc-Π_ss)^2 × ( (U_1+U̅_1)^2 U_h^d+ (U_1-U̅_1)^2 U_h^s)Z_3 =4 Π_e^2(U_1+U̅_1)^2(U_1-U̅_1)^2 [Π_h^d Π_h^s-(Π_cc-Π_ss)^2]. Here, U_h^s=U_4+2Ũ_4-Ũ̃_42 and we have used the following relations(suppressing the momentum dependence of the polarization bubble below) Π_h^s =Π_cc+Π_ss+2Π_m, Π_h^d =Π_cc+Π_ss-2Π_m. In general, Π_cc and Π_ss differ from each other for q≠ 0. Since nematic order is q=0 momentum order with strong fluctuations around it, the leading singularity in the effective pairing interaction comes form q=0. As a result, we can ignore the term Π_cc-Π_ss in the Eqs. (<ref>-<ref>) as the difference for q→ 0 is vanishingly small. With a little bit of algebraic manipulation Eq. (<ref>) becomes Det[Q(x)]^-1=(1+U_5Π_e)^2 K_d(x) × K_s(x), where K_d and K_s are defined as K_d(x)= 1[(1+U_5 Π_e)(1+U_h^dΠ_h^d)-2 x Π_h^d Π_e (U_1-U̅_1)^2] K_s(x)= 1[(1+U_5 Π_e)(1+U_h^sΠ_h^s)-2 x Π_h^s Π_e (U_1+U̅_1)^2]. Next, we compute the matrix multiplication in Eq. (<ref>). we assert that the bare vertex b(,) has effectively 2 components relevant to our calculation withb(,)=[ cosθ_cosθ_ sinθ_sinθ_ ]. A straight forward calculation showsb^T(,) ·[A_22(1) -A_12(1); -A_21(1)A_11(1) ]∘ S ∘[A_22(0) -A_12(0); -A_21(0)A_11(0) ]· b(,) =2Π_e1+U_5 Π_e[V_1()2(1+cos 2θ_cos2θ_)+V_2()2sin 2θ_sin2θ_],withV_1 =(U_1+U̅_1)^22(A_11(1)-A_12(1))(A_11(0)-A_12(0))+(U_1-U̅_1)^22(A_11(1)+A_12(1))(A_11(0)+A_12(0))V_2 =(U_1+U̅_1)^22(A_11(1)-A_12(1))(A_11(0)-A_12(0))-(U_1-U̅_1)^22(A_11(1)+A_12(1))(A_11(0)+A_12(0)).All the momentum dependence of V_1,2 are hidden in the polarization bubbles used in A_i,j functions. Using Eqs. (<ref>-<ref>), we find the following expression forA_11(1)-A_12(1) =1+U_h^dΠ_h^d -2 Π_e Π_h^d1+U_5 Π_e (U_1-U̅_1)^2=1K_d(1) (1+U_5 Π_e)A_11(0)-A_12(0) =1+U_h^dΠ_h^d = 1K_d(0) (1+U_5 Π_e)A_11(1)+A_12(1) =1+U_h^sΠ_h^s -2 Π_e Π_h^s1+U_5 Π_e (U_1+U̅_1)^2 = 1K_s(1) (1+U_5 Π_e)A_11(0)+A_12(0) =1+U_h^sΠ_h^s = 1K_s(0) (1+U_5 Π_e).Combining all the terms Eqs.(<ref>-<ref>), we find the expression for the singular pairing interaction -V^h_singular(,)=Π_e(U_1-U̅_1)^2 cos^2(θ_+θ_)(1+U_5Π_e) K_d(1)K_d(0) + Π_e(U_1+U̅_1)^2 cos^2(θ_-θ_)(1+U_5Π_e) K_s(1)K_s(0).In the presence of the inter-electron pocket density interaction Ũ_5, one needs to add two more diagrams to S depicted in Fig.<ref>. This changes the expression for S from Eq. 
(<ref>)) to S=Γ_hy· d· d^T·Γ_hyB_y+Γ_hx· d· d^T·Γ_hxB_x+2 Γ_hx· d· d^T·Γ_hyB_yB_x Ũ_5 .Using Eq. (<ref>) to perform the subsequent calculations outlined before, Eq. (<ref>-<ref>), one arrives at the the final form for the V^h_singular as -V^h_singular(,)=Π_e1+U_h^d Π_h^d(U_1-U̅_1)^2 cos^2(θ_+θ_) χ_nem + Π_e1+U_h^s Π_h^s(U_1+U̅_1)^2 cos^2(θ_-θ_) χ^s_nem,where χ^s_nem=1/[(1+U_e^sΠ_e)(1+U_h^sΠ_h^s)-2Π_h^s Π_e (U_1+U̅_1)^2] with U_e^s=U_5+2 Ũ_5. χ^s_nem represent the susceptibility of the total fermionic density in the A_1g channel and does not diverge at any temperature. On the other hand, near the onset of the d-wave nematic instability, the susceptibility χ_nem diverges. As a result, we can ignore the non-singular second term of the Eq. (<ref>) and write the effective interaction near the nematic order asV_eff(,)=-A_h(U_1-U̅_1)^2 cos^2(θ_+θ_)χ_nem(-),where V_eff is the singular part of V^h_singular and A_h=Π_e1+U_h^d Π_h^d.§.§.§ Non-singular part of the pairing interactionIn Sec.<ref>, we find the singular component of the dressed pairing interaction on the hole pocket. It scales with the nematic susceptibility χ_nem with an angular dependence of cos^22θ_ and exist as a consequence of the inter-pocket density interaction U_he. On the other hand, the regular part of the dressed pairing interaction comes purely from the intra-pocket(in this case hole pocket) density interaction as we will show in this section. For the simplicity of the calculation, we keep the momentum of the external fermions same as the internal ones and argue that the regular component termed as V^h_regular of the intra-pocket pairing interaction for the hole pocket is depicted in Fig.<ref> From Fig.<ref>a, V^h_regular=γ̃_h.Γ_hh.γ_h where γ̃_h and γ_h are defined in Fig.<ref> and <ref>.A straight forward calculation using Eqs.(<ref>),(<ref>) givesV^h_regular=U_4+Ũ_42χ^s_spin χ^s_den +U_4-Ũ_42χ^d_spin χ^d_den cos^22θ_+U̅_4+Ũ̃_42χ^mix_spin χ^mix_den sin^22θ_,with the spin component δ_αβδ_γδ. Here, χ^s_spin,χ^d_spin and χ^mix_spin are the spin susceptibilities with the form factor 1, cos2θ and sin2θ respectively, while χ^s_den,χ^d_den and χ^mix_den are the density susceptibilities with the form factor 1, cos2θ and sin2θ respectively for a one band model(in our case it is the hole pocket). They are defined asχ^s_spin =11-U_4+Ũ̃_42 Π_h^s,χ^s_den=11+U_4+2Ũ_4-Ũ̃_42 Π_h^s, χ^d_spin =11-U_4-Ũ̃_42 Π_h^s,χ^d_den=11+U_4-2Ũ_4Ũ̃_42 Π_h^s, χ^mix_spin =11-2Π_m (U̅_4+Ũ_4),χ^mix_den=11-2Π_m (U̅_4+2Ũ̃_4-Ũ_4).To complete our calculation of the dressed pairing interaction, we also compute thefully anti-symmetrized pairing interaction. The anti-symmetric part of the fully anti-symmetrized pairing interaction describes scattering between fermion pair states (,α;-,γ)→(-,δ;,β). We depict the relevant diagrams contributing to V^anti_hh in Fig.<ref>b. Using Eq. (<ref>) one findsV^anti_hh=U_4+U̅_4211-U_4+U̅_42 Π_h^s+U_4-U̅_42cos^22θ_1-U_4-U̅_42 Π_h^d+Ũ_4+Ũ̃_42sin^22θ_1-Ũ_4+Ũ̃_42 Π_h^dwith the spin component δ_αδδ_γβ. To understand what Eqs. (<ref>-<ref>) produce, we combine both the terms and write the non-singular part of the fully anti-symmetrized pairing vertex Γ^h_regular using the bare Hubbard-Hund interaction, Eq. 
(<ref>-<ref>)asΓ^h_regular= δ_αβδ_γδ[ U-J(1+3U-5J2 Π_h^s)(1-U+J2 Π_h^s)+J(1+5J-U2 Π_h^d)(1-U-J2 Π_h^d)]- δ_αδδ_γβ[ U+J2(1-U+J2 Π_h^s)+U-J2(1-U-J2 Π_h^d)] = Γ_cδ_αβδ_γδ+Γ_s σ̂_αβ.σ̂_γδ.Here, Γ_c and Γ_sare the spin and charge components of the antisymmetrized vertex with the formΓ_c =3U-5J411+3U-5J2 Π_h^s+5J-U411+5J-U2 Π_h^d Γ_s =-U+J411-U+J2 Π_h^s-U-J411-U-J2 Π_h^d.From Eq. (<ref>), one finds that the regular part does not scale with the nematic instability χ_nem and is angle independent unlike the singular part V^h_singular which has an angular dependence as cos^22θ_. We calculate the effective intra-pocket interaction for the electron pockets depicted in Fig.<ref> in the similar way and findV_eff^e(,)=-A_e (U_1-U̅_1)^2 χ_nem(-),where A_e=12Π_h^d1+U_e^d Π_e. Because of the extra factor of 1/2 and Π_h^d<Π_e, A_h>A_e.§.§ Inter-pocket pairing interactionWe argue that the inter-pocket pairing interaction a.k.a pair hopping interaction does not get affected by the strong nematic fluctuation. Pair hopping interaction is a large momentum(Q) transfer interaction with Q=(0,π) or (π,0). On the other hand, nematic fluctuations are peaked near zero momentum and can't influence pair hoppingterm much.§ SOLVING THE SUPERCONDUCTING GAP EQUATION AT THE QCP: Δ=0 In Section.<ref> we find that near the nematic instability the intra-pocket pairing interaction becomes attractive and divergent as it scales with the nematic susceptibility χ_nem compared to the finite repulsive inter-pocket interaction. As a result, we can ignore the effect of the inter-pocket pairing interaction and study the superconducting gap equation for individual pocketsled by the intra-pocket pairing interaction. The pairing interaction within the hole and the electron pockets presented in Eqs. (<ref>,<ref>) differ by the prefactor A_h,e. Since A_h> A_e, pairing interaction is larger on the hole pocket and superconductivity will first develop on it. In our further analysis, we keep the pairing interaction(Eq. (<ref>)) static, predominantly between fermions near the Fermi surface and used the regular Ornstein-Zernike form for the χ_nem()(= χ_0/(ξ^-2+^2)) and find V_eff(,) =-A_h χ_0 (U_1-U̅_1)^2 cos^2(θ_+θ_)ξ^-2+(-)^2=-g̅cos^2(θ_+θ_)δ^2+4 sin^2(θ_-θ_2).Here χ_0 is the bare susceptibility, δ=1/k_f ξ is the inverse of the nematic correlation length ξ, g̅=A_h (U_1-U̅_1)^2k_F^2 is the effective coupling constant and k_F is the Fermi momentum. With the pairing interaction listed in Eq. (<ref>) we write the non-linear gap equation Δ_h() on hole pocket depicted in Fig.<ref> as Δ_h()=g̅ d^2(2π)^2Δ_h()cos^2(θ_+θ_)δ^2+4 sin^2(θ_-θ_/2 )tanh(β E_h()/2)E_h(), with β=1/T and the excitation energy E_h(=√(ξ_h()^2+Δ_h()^2)). We shift the momentum =+ and rewrite the integration in Eq. (<ref>) as Δ_h()=g̅ d^2(2π)^2Δ_h(+)cos^2(2 θ_+θ_)δ^2+4 sin^2(θ_/2 )tanh(β E_h(+)/2)E_h(+). Eq. (<ref>) is analytically tractable in the limit δ→ 0. In this limit, the integration is peaked near θ_=0. As a result, we can ignore thedependence of the gap and the cosine factor in Eq. (<ref>) and transform it from an integral equation to an algebraic equation Δ_h() =g̅N_0Δ_h() cos^2 2θ__0^2πd θ_2π1δ^2+4 sin^2(θ_/2 )_0^Λ dxtanh(√(x^2+|Δ_h()|^2)2 T)√(x^2+Δ_h()^2),where N_0 is the density of states and Λ is the upper pairing cutoff. Performing the angular integration over θ_ and defining the effective coupling g=N_0 g̅2 δ, Eq. (<ref>) reduces to1=gcos^2 2θ__0^Λ dxtanh(√(x^2+|Δ_h()|^2)2 T)√(x^2+Δ_h()^2). Since Eq. 
(<ref>) depends only on the modulus of the gap Δ_h, its solution does not provide any information about the phase of that gap:Δ_h=|Δ_h()| e^i ϕ(). Solving the linearized equation by putting Δ_h=0 in Eq. (<ref>), we find an angle dependent critical temperature T̅_c(θ_)=1.13Λexp(-1/g cos^22θ_). The largest critical temperature happens when cos^22θ_=1 and the superconducting gap will first develop at these discrete number of points on the Fermi surface and remain zero everywhere. The set of these points is W={0,π/2,π,3π/2} and the true transition temperature of the system isT_c=1.1.3 Λexp(-1/g). At any lower temperature T<T_c, the gap develops around these 4 special points(∈ W) and remains non-zero upto θ_< |θ_0(T)-θ_W| where θ_0(T) is the solution of Eq. (<ref>) with Δ_h=0.1gcos^2 2θ_0(T)=_0^Λ dxtanh(x/2 T)x=log1.13ΛT=1g+logT_cT→θ_0(T)=12arctan√(glogT_cT).We plot θ_0(T) in Fig.<ref>b as a function of the reduced temperature t=T/T_c. For θ_>|θ_0(T)-θ_W|, the gap vanishes and the Fermi surface remains intact with the width Δθ(T)=π/2-2 θ_0(T) where θ_0(T) is defined in Eq. (<ref>). At zero temperature, Δθ reduces to zero letting the gap open everywhere except on the cold spots(red dots in Fig.<ref>a): θ_c=π/4+n π/2 where n is an integer. In Fig.<ref>c, we plot the angular variation of the gap magnitude for a set of temperatures. At T=0, the angular variation of the gap turns out to be|Δ_h()|T_c|_T=0≈ 1.76 exp- tan^2 2θ_/g.Unlike the conventional d- wave superconducting gap(∝cos 2θ_), where near the cold spot the gap is linearly proportional to the angular deviation(∝ (θ-θ_c)), in this case the gap is exponentially suppressed near the cold spots and behaves as exp-14g(θ-θ_c)^2.§ CALCULATION OF THE SPECIFIC HEAT C_V AT Δ=0 The specific heat C_V(T) for the hole band is given by the expressionC_v(T) =2Td^2 (2π)^2(-∂ n_∂ E_h()) [E_h()^2-T2∂ |Δ_h()|^2∂ T] where n_ is the Fermi function 1/[exp(β E_h)+1] and E_h()=√(ξ_h()^2+|Δ_h()|^2) is the excitation energy. Since ∂n_∂E_h()=-1T^2(βE_h()/2) is peaked near the Fermi surface at low temperature, we convert the momentum integration into the energy and angle variables and approximate the energy integral by keeping the density of state at the Fermi surface which givesC_v(T)=N_02T^2_-∞^∞ dx _0^2πdθ2π^2(√(x^2+|Δ_h(θ)|^2)2 T) [x^2+|Δ_h(θ)|^2-T2∂ |Δ_h(θ)|^2∂ T].Here, N_0 is the density of states at the fermi surface. In Sec.<ref>, we solve the non-linear gap equation, Eq. (<ref>) and use the solution to compute the specific heat in this section. Since the gap function Δ_h(θ) vanishes for θ≥θ_0(T) at the temperature T for θ∈ (0,π/4)(See Fig.<ref>a), we can split the specific heat into two parts, one for the metallic part called C_v^N for which the gap vanishes and another for the superconducting part called C_v^S for which the gap exist: C_v(T)=C_v^N(T)+ C_v^S(T) defined asC_v^S(T) =4N_0 T^2_0^θ_0(T)dθ2π_-∞^∞ dx ^2(√(x^2+|Δ_h(θ)|^2)2 T) [x^2+|Δ_h(θ)|^2-T2∂ |Δ_h(θ)|^2∂ T],C_v^N(T) =4N_0 T^2_θ_0(T)^π/4dθ2π_-∞^∞ dx ^2( x2 T)x^2A straightforward calculation gives the normal part of the specific heatC_v^N(T)=8 N_0 πT3[π4-θ_0(T)].We plot the specific heat coeffcient for the metalic part, C_v^N/T as a function of the reduced temperature t=T/T_c in Fig.<ref>a. At t=1, C_v^N/T reduces to the normal metallic contribution 2 π^2 N_0/3. 
On the other hand, at low temperature, it behaves asC_v^N(T)T|_T → 0= 4N_0π3 √(|glog t|)+0((log t)^-3/2).To compute the superconducting part of the specific heat, we scale the temperature, gap and energy integration variable by the critical temperature T̅_c(θ) inside the angular integration of Eq. (<ref>) and write C_v^S(T)=4N_0T_c t^2_0^θ_0(T)dθ2π(T̅_c(θ)T_c)^3_-∞^∞ dx̃^2(√(x̃^2+|Δ̃_h(θ)|^2)2 t_θ) [x̃^2+|Δ̃_h(θ)|^2-t_θ2∂ |Δ̃_h(θ)|^2∂ t_θ],wheret_θ=T/T̅_c(θ), Δ̃_h(θ)= Δ_h(θ)/T_c(θ) andx̃=x/T_c(θ). We define the integration under x̃ variable in Eq. (<ref>) asf(t_θ)=_-∞^∞ dx̃^2(√(x̃^2+|Δ̃_h(θ)|^2)2 t_θ) [x̃^2+|Δ̃_h(θ)|^2-t_θ2∂ |Δ̃_h(θ)|^2∂ t_θ],such thatC_v^S(T) =4N_0T_c t^2_0^θ_0(T)dθ2π (T̅_c(θ)T_c)^3 f(t_θ)=4N_0T_c t^2_0^θ_0(T)dθ2π e^-3tan^2 2θ/g f( e^tan^2 2θ/gt)=N_0 gT_c _t^1 dx f(x)x^4 (1+g logxt) √(glogxt)) .Here we have used the relations T̅_c(θ)/T_c=exp(-tan^2 2θ/g) and f(t_θ)≡ f(t×T_cT̅_c(θ)). Eq. (<ref>) has a log singularity near x=t which approximates the integration toC_v^S(T)T= N_0 g f(t)t^3_1^1/tdy√(g log y)=√(π)N_0√(g) f(t)t^3Erfi[√(|log t|)],where Erfi(x) is the imaginary error function. We obtain the scaling function f(x) numerically and plot in Fig.<ref>b. We numerically compute Eq. (<ref>) and present our result for the specific heat coefficient of the superconducting part, C_v^S/T in Fig.<ref>c.In Fig.<ref>d, we plot the total specific heat coefficient, C_v/T as a function of the temperature for a range of values for the coupling constant g̅. The dashed black line represent the normal state result for the specific heat. We find there is no specific heat jump at the onset of the superconductivity because gap opens only at the discrete number of points on the Fermi surface and remains zero everywhere. This is captured by the fact that C_v^S/T=0 as T goes to T_c. Near the critical temperature T_c,C_v/T first goes up, attains a maximum around t=0.8 and then fallswith lowering the temperature. At very low temperature, specific heat declines very rapidly as it goes as log(t)^-1/2. This is a consequence of the existence of the exponentially suppressed gap near the cold spots, captured in Eq. (<ref>). § EFFECT OF THE PAIR-HOPPING INTERACTION AT Δ=0So far in our analysis we have neglected the effect of the inter-pocket pairing interaction a.k.a pair hopping defined in Eq. (<ref>). In this section, we study if the inter-pocket pairing interaction can break the degeneracy between s-,d- and p- wave channel at δ=0 limit. For the simplicity of the calculation, we have neglected the pairing interaction within the electron pockets and keep the density of states N_0 same for all the pockets. In the presence of the pair-hopping interaction, the non-linear gap equation for Δ_h, Δ_x and Δ_y become Δ_h(θ_) =g Δ_h(θ_) cos^22θ__0^Λ dξ_htanh(√(ξ_h^2+|Δ_h(θ_)|^2)2 T)√(ξ_h^2+|Δ_h(θ_)|^2)-Δ_xN_0(U_s+U_dcos 2θ_) _0^Λ dξ_x tanh(√(ξ_x^2+|Δ_x|^2)2 T)√(ξ_x^2+|Δ_x|^2) -Δ_y N_0 (U_s-U_dcos2θ_) _0^Λ dξ_y tanh(√(ξ_y^2+|Δ_y|^2)2 T)√(ξ_y^2+|Δ_y|^2) Δ_x =-N_0_0^2πdθ_2π (U_s+U_d cos2θ)Δ_h(θ_) _0^Λ dξ_htanh(√(ξ_h^2+|Δ_h(θ_)|^2)2 T)√(ξ_h^2+|Δ_h(θ_)|^2) Δ_y =-N_0 _0^2πdθ_2π (U_s-U_d cos2θ)Δ_h(θ_) _0^Λ dξ_htanh(√(ξ_h^2+|Δ_h(θ_)|^2)2 T)√(ξ_h^2+|Δ_h(θ_)|^2). Because of the pair-hopping interaction, the gap on the hole pocket will induce uniform gap on the electron pockets which, as a feedback induces non-zero gap everywhere on the hole pocket. We break the inter-pocket pairing interaction into s- and d- wave components and call them U_s and U_d respectively. 
In terms of the bare interactions, U_s=(U+J)/2 and U_d=(U-J)/2. We first focus on the linearized gap equationsΔ_h(θ_) =gΔ_h(θ_) cos^22θ_L-(Δ_x+Δ_y)u_sL-(Δ_x-Δ_y)u_d cos2θ_L , Δ_x+Δ_y=-2Lu_s _0^2πdθ_2πΔ_h(θ_), Δ_x-Δ_y=-2Lu_d _0^2πdθ_2πΔ_h(θ_) cos2θ_,where L=logΛT, u_s=N_0 U_s and u_d=N_0 U_d. We plug Eqs. (<ref>-<ref>) into Eq. (<ref>) and find the effective linearized gap equation on the hole pocket as Δ_h(θ_) =gΔ_h(θ_) cos^22θ_L+2 U^2_s L^2 _0^2πdθ_2πΔ_h(θ_)+2U^2_d L^2 cos2θ__0^2πdθ_2πΔ_h(θ_) cos2θ_.In the absence of the pair hopping, Eq. (<ref>) reduces to Eq. (<ref>) and we get back the results of Section.<ref>. In its presence, we make the following observation from the Eq. (<ref>). First, pair-hopping only affects the s- and d- wave components of the gap, not the p- wave. For the s- wave solution, the third term vanishes while for the d- wave the second term goes away. On the other hand, for the p wave solution, both the second and third term will vanish. Second, just like the isolated case, the first superconducting instability will happen at 4 points dictated by the relation cos^22θ_=1 on the Fermi surface. These are the points along the k_x and k_y axis on the Fermi surface. Third, the critical temperature gets modified by the pair-hopping term. We argue that because the gap opens only at discrete number of points, to compute the critical temperature one needs to substitute the integration over angle in the second and third term of Eq. (<ref>) by a summation over these 4 points. This givesΔ_h(θ_i) =gΔ_h(θ_i) cos^22θ_i L+2 U^2_s L^2 ∑_kΔ_h(θ_k)+2U^2_d L^2 cos2θ_i∑_kΔ_h(θ_k) cos2θ_W,where θ_k ={0,π/2,π, 3π/2}. Writing Eq. (<ref>) in s-,p- and d- wave channel gives the following conditions for the instability respectively1 =gL+ 8L^2 u_s^2, 1 =gL, 1 =gL+ 8L^2 u_d^2.Solving Eqs. (<ref>-<ref>) we find the renormalized critical temperaturesT^*_0,s( for s- wave), T^*_0,d(for d- wave) and T^*_0,p(for p- wave)such thatT_0,s^*≈Λ e^-1/g e^8 u_s^2/g^3≥T_0,d^*≈Λe^-1/g e^8 u_d^2/g^3>T_0,p^*=Λ e^-1/g.Inter-pocket pairing interaction favors s- and d- wave channel compared to the p- wave. When u_s=u_d, the equality sign holds and s- and d- wave becomes degenerate. To realize what gap symmetry at low temperature prevails for u_s=u_d, we solve the non-linear gap equations listed in Eq. (<ref>-<ref>) numerically and present our result in Fig. Our results supports an s- symmetry at all low temperature down to zero within the numerical accuracy. To understand if there is a possibility of an s+id state with a tiny d- wave component not captured by the numerical result, we perform a perturbative calculation around the numerical solution. We write the gap asΔ_h(θ)=Δ_h^s(θ)+i Δ_h^d(δ), Δ_x,y=Δ_x,y^s+i Δ_x,y^d,where Δ_h^s, Δ_x^s and Δ_y^s are the numerical solution of Eq. (<ref>-<ref>) for the hole, X- and Y- pocket respectively.Δ_h^d, Δ_x^d and Δ_y^d are the small perturbing fields. Expanding Eq. (<ref>-<ref>) in Δ^d's, we find the linearized equation for Δ^d_h,x,y in the presence of the full grown Δ_h,x,y^s gap asΔ_h^d(θ) =gΔ_h^d(θ) cos^22θ J(|Δ_h^s(θ)|,T )-Δ_x^du̅ cos^2θJ( |Δ_x^s| ,T )-Δ_y^du̅ sin^2θJ( |Δ_y^s|,T ) Δ_x^d =-u̅_0^2πdθ2πcos^2θ Δ_h^d(θ) J( |Δ_h^s(θ)|,T )Δ_y^d =-u̅_0^2πdθ2πsin^2θ Δ_h^d(θ) J( |Δ_h^s(θ)|,T ), where u̅=2 u_s=2 u_d and the function J defined as J(|x|,T)=∫_0^Λ dξtanh(√(ξ^2+|x|^2)2 T)√(ξ^2+|x|^2).Combining Eqs. (<ref>-<ref>) into Eq. 
(<ref>) and doing a little bit of manipulation we write the effective linearized equation for Δ_h^d Δ_h^d(θ) =g Δ_h^d(θ) cos^2 2θJ(|Δ_h^s(θ)|,T)+u̅^22cos2θ_0^2πdθ'2π Δ_h^d(θ') J(|Δ_h^s(θ')|,T) J(|Δ_e^s|,T),=g Δ_h^d(θ) cos^2 2θJ(|Δ_h^s(θ)|,T)+u̅^22cos2θ Γ(T) where Γ(T)=_0^2πdθ'2π Δ_h^d(θ') J(|Δ_h^s(θ')|,T) J(|Δ_e^s|,T) and we use |Δ_x^s|=|Δ_y^s|=Δ_e^s. We convert Eq. (<ref>) into an algebraic equation using the definition of Γ and find the self-consistent equation for the existence of the d- wave component as1= J(|Δ_e^s|) u̅^22_0^2πdθ2πcos^2 2θ J(|Δ_h^s(θ)|,T)1-g cos^2 2θJ(|Δ_h^s(θ)|,T).To complete our proof, we revisit Eq. (<ref>-<ref>) and write the non-linear integral equation for Δ_h^sΔ_h^s(θ)=g Δ_h^s(θ) cos^2 2θ J(|Δ_h^s(θ)|,T)+u̅2 J(|Δ_e|,T) _0^2πdθ'2π Δ_h^s(θ') J(|Δ_h^s(θ')|,T).With a little bit of manipulation, Eq. (<ref>) can be rearranged as1-g cos^2 2θ J(|Δ_h^s(θ)|,T)=u̅2J(|Δ_e|,T)_0^2πdθ'2π Δ_h^s(θ') J(|Δ_h^s(θ')|,T)Δ_h^s(θ). Using Eq. (<ref>), we further simplify the r.h.s of Eq.(<ref>) and arrive J(|Δ_e^s|) u̅^22_0^2πdθ2πcos^2 2θ J(|Δ_h^s(θ)|,T)1-g cos^2 2θJ(|Δ_h^s(θ)|,T) =_0^2πdθ2πcos^2 2θJ(|Δ_h^s(θ)|,T) Δ_h^s(θ) _0^2πdθ2πJ(|Δ_h^s(θ)|,T) Δ_h^s(θ) <1.Because of the extra cos^2 2θ factor in the numerator, the ratio will always be less than 1, and Eq. (<ref>) can never be fulfilled below the critical temperature T_0,s^*. This proves that the gap symmetry remains s- wave at all temperature below the critical point. § NUMERICAL SOLUTION OF THE GAP EQUATION, THE POSSIBLE GAP SYMMETRY AND SPECIFIC HEAT AWAY FROM THE QCP: Δ≠ 0In Section.<ref> we solve the non-linear gap equationat the nematic quantum critical point(δ=0) by transforming it from an integral equation(Eq. (<ref>)) to an algebraic equation(Eq. (<ref>)). Even though this is right to do at δ= 0, the price one has to pay is that the solution gives only the modulus of the gap function Δ_h() leading to a functional degeneracy to the obtained solution, as any function of the form Δ_h(k)e^i Φ() is also another possible solution. Away from the critical point (δ≠ 0), one has to solve the integral equation. In this section, we present our numerical result for the solution of the non-linear integral gap equation for non-zero values of δ. We first focus on the linearized gap equation1λΔ_h(θ_)= d θ_2πcos^2(θ_+θ_)δ^2+4 sin^2(θ_-θ_/2 )Δ_h(θ_) =d θ_2πM(θ_,θ_) Δ_h(θ_),cast into a matrix equation in angular space with eigenvalue labeled as λ and kernel matrixM(θ_,θ_)=cos^2(θ_+θ_)δ^2+4 sin^2(θ_-θ_/2 ).Here, we suppress the δ dependence of the matrix M for the simplicity of the representation. We break Eq. (<ref>) into different angular momentum channels and solve for the largest eigenvalue in each channel to find the leading instability. The relevant channels consistent with the pairing interaction, Eq. (<ref>) are A_1g (eigen functions are like: 1, cos 4θ, cos 8θ etc), B_1g (eigen functions are like: cos2θ, cos 6θ, cos 10θ etc), and E_g (eigen functions are like: cosθ, cos 3θ, cos 5θ etc). We will refer them as s,d and p- waves respectively. In Fig.<ref>a, we plot how the largest eigenvalue λ_s,λ_d and λ_p in s,d, and p- wave channel respectively vary with the nematic mass parameter δ. We find the leading instability happens in the s- channel followed by d- and p- wave respectively from the linearized gap equation. In the inset, we plot the ratio of λ_d/λ_s and λ_p/λ_s with varying δ. As δ goes to zero, these eigen values get closer to each other and become degenerate at δ=0. 
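The channel-resolved eigenvalue calculation described above can be reproduced with a short numerical sketch (not the code used in this work): discretize the kernel M(θ_k,θ_p) on an angular grid, project it onto the A_1g, B_1g and E_g harmonics listed in the text, and track the largest eigenvalue in each channel as δ is varied. The grid size, the number of harmonics and the overall normalization of the eigenvalues below are illustrative choices.

import numpy as np

def leading_channel_eigenvalue(delta, harmonics, n_theta=1440):
    """Largest eigenvalue of the kernel cos^2(t+t')/(delta^2 + 4 sin^2((t-t')/2)),
    with the measure dt'/(2*pi), restricted to one symmetry channel spanned by
    cos(m*theta) for m in `harmonics` (m = 0 stands for the constant function).
    A fine theta grid is needed at small delta, where the kernel is sharply peaked."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    dth = theta[1] - theta[0]
    t, tp = np.meshgrid(theta, theta, indexing="ij")
    M = np.cos(t + tp) ** 2 / (delta ** 2 + 4.0 * np.sin((t - tp) / 2.0) ** 2)
    basis = []
    for m in harmonics:
        f = np.ones_like(theta) if m == 0 else np.cos(m * theta)
        basis.append(f / np.sqrt(np.sum(f ** 2) * dth))        # orthonormal on the grid
    B = np.array(basis)
    K = B @ (M * dth / (2.0 * np.pi)) @ B.T * dth              # channel-projected kernel
    return np.linalg.eigvalsh((K + K.T) / 2.0).max()

channels = {"s (A1g)": [0, 4, 8, 12], "d (B1g)": [2, 6, 10, 14], "p (E)": [1, 3, 5, 7]}
for delta in (0.01, 0.05, 0.1):
    lams = {name: leading_channel_eigenvalue(delta, ms) for name, ms in channels.items()}
    print(delta, {name: round(val, 2) for name, val in lams.items()})

Comparing the three numbers per value of δ should mirror the ordering of the s-, d- and p-wave eigenvalues discussed above, with the channels approaching degeneracy as δ → 0.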
Below the transition point, two situations are possible: (a) after the gap in the s-wave channel develops, it acts against the subleading d- and p-wave channels and never lets them develop, so the s-wave state survives down to zero temperature, or (b) at some low temperature there is a second phase transition from s-wave to either s+id or p-wave. We solve the full non-linear gap equation, Eq. (<ref>), numerically for δ=0.01 at different temperatures and show our result in Fig.<ref>b. Our results support the first case: the s-wave gap symmetry remains intact at all temperatures below the critical temperature T_c. Even though the s-wave wins, we argue that there is a possibility of having only p-wave gap symmetry if one includes the repulsive component of the pairing interaction presented in Eq. (<ref>). The repulsive regular part of the intra-pocket pairing interaction does not scale with the nematic susceptibility and has only s- and d-wave components. As a result, the pairing strength for the p-wave is larger than for the s- and d-wave when one includes both the attractive (V^h_singular) and repulsive (V^h_regular) components of the effective pairing interaction in the solution of the non-linear gap equation. The p-wave has to be of the form p_x± ip_y, as |Δ_h()| is the same along the k_x and k_y axes, as found from the analysis of Section <ref>. We define the gap anisotropy α=Δ_h(θ=π/4)/Δ_h(θ=0) as the ratio of the gap along the k_x=± k_y axis (θ=π/4) to that along the k_x axis (θ=0), and we show its variation with the nematic mass parameter δ in Fig.<ref>c. We fit our result up to second order in δ and find a good fit with α(δ)=2.12δ^2+0.44δ. We numerically obtain the specific heat coefficient as a function of the reduced temperature T/T_c for two different values of δ=0.01 and 0.1 and show the result in Fig.<ref>d. For small but non-zero δ, there will be a jump at the transition point, as mentioned before. On the other hand, at low temperature the behavior of the specific heat coefficient is qualitatively similar to the δ=0 case because of the exponentially small gap near the cold spots for small values of δ.
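A minimal iterative solver for the δ≠0 gap equation on the Fermi surface, along the lines of the numerics described above, is sketched below. The coupling, cutoff, temperature and grid parameters are illustrative and would need tuning; this is not the code used to produce the figures.

import numpy as np

def solve_gap(g_eff, delta, T, Lambda=50.0, n_theta=720, n_xi=2000, max_iter=500, tol=1e-8):
    """Fixed-point iteration of
    Delta(t) = g_eff * Int dt'/(2pi) cos^2(t+t')/(delta^2 + 4 sin^2((t-t')/2))
                     * Delta(t') * Int_0^Lambda dxi tanh(E/2T)/E,
    with E = sqrt(xi^2 + Delta(t')^2); all quantities in arbitrary energy units."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    dth = theta[1] - theta[0]
    t, tp = np.meshgrid(theta, theta, indexing="ij")
    kernel = np.cos(t + tp) ** 2 / (delta ** 2 + 4.0 * np.sin((t - tp) / 2.0) ** 2) * dth / (2.0 * np.pi)
    xi = np.linspace(0.0, Lambda, n_xi)[1:]                # drop xi = 0 to avoid 0/0 when Delta -> 0
    dxi = xi[1] - xi[0]
    gap = np.full(n_theta, 0.1)                            # constant initial guess
    for _ in range(max_iter):
        E = np.sqrt(xi[None, :] ** 2 + gap[:, None] ** 2)
        radial = np.sum(np.tanh(E / (2.0 * T)) / E, axis=1) * dxi
        new_gap = g_eff * kernel @ (gap * radial)
        if np.max(np.abs(new_gap - gap)) < tol:
            break
        gap = new_gap
    return theta, gap

theta, gap = solve_gap(g_eff=0.02, delta=0.01, T=0.05)     # illustrative parameter values
i_cold = np.argmin(np.abs(theta - np.pi / 4.0))
print("gap anisotropy alpha =", gap[i_cold] / gap[0])      # cf. the alpha(delta) fit discussed above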
http://arxiv.org/abs/2310.17728v1
{ "authors": [ "Kazi Ranjibul Islam", "Andrey Chubukov" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20231026184000", "title": "Unconventional Superconductivity near a Nematic Instability in a Multi-Orbital system" }
ArcheType: A Novel Framework for Open-Source Column Type Annotation using Large Language Models Juliana Freire Accepted XXX. Received YYY; in original form ZZZ ===============================================================================================†Major part of this work was done during a research internship at MBZUAI. Claim span identification (CSI) is an important step in fact-checking pipelines, aiming to identify text segments that contain a check-worthy claim or assertion in a social media post.Despite its importance to journalists and human fact-checkers, it remains a severely understudied problem, and the scarce research on this topic so far has only focused on English.Here we aim to bridge this gap by creating a novel dataset, , consisting of 7K real-world claims collected from numerous social media platforms in five Indian languages and English. We report strong baselines with state-of-the-art encoder-only language models (e.g., XLM-R) and we demonstrate the benefits of training on multiple languages over alternative cross-lingual transfer methods such as zero-shot transfer, or training on translated data, from a high-resource language such as English. We evaluate generative large language models from the GPT series using prompting methods on thedataset and we find that they underperform the smaller encoder-only language models for low-resource languages.[We release ourdataset and code at https://github.com/mbzuai-nlp/x-claimhttps://github.com/mbzuai-nlp/x-claim] § INTRODUCTION Social media platforms have become a prominent hub for connecting people worldwide. Along with the myriad benefits of this connectivity, e.g., the ability to share information instantaneously with a large audience, the spread of inaccurate and misleading information has emerged as a major problem <cit.>.Misinformation spread via social media has far-reaching consequences, including the potential to sow chaos, to foster hatred, to manipulate public opinion, and to disturb societal stability <cit.>. Claims play an integral role in propagating fake news and misinformation, serving as the building blocks upon which these deceptive narratives are formed.In their Argumentation Theory, <cit.> described a claim as “a statement that asserts something as true or valid, often without providing sufficient evidence for verification.”Such intentional or unintentional claims quickly gain traction over social media platforms, resulting in rapid dissemination of misinformation as was seen during recent events such as the COVID-19 pandemic <cit.> and Brexit <cit.>. To mitigate the detrimental impact of false claims, numerous fact-checking initiatives, such as PolitiFact and Snopes, dedicate substantial efforts to fact-checking claims made by public figures, organizations, and social media users.However, due to the time-intensive nature of this process, many misleading claims dodge verification and remain unaddressed. To address this, computational linguistic approaches have been developed that can assist human fact-checkers <cit.>. 
Recently, <cit.> introduced the task of claim span identification (CSI), where the goal is to identify textual segments that contain claims or assertions made within the social media posts.The CSI task serves as a precursor to various downstream tasks such as claim verification and check-worthiness estimation.While efforts have been made in combating misinformation in different languages <cit.>, research in identifying the claim spans has so far been limited to English.Previously, <cit.> have manually extracted COVID-19 claim spans from Twitter in English.However, the landscape of fraudulent claims goes beyond COVID-19 and Twitter. In this work, we aim to bridge these gaps by studying the task of multilingual claim span identification (mCSI) across numerous social media platforms and multiple languages.To the best of our knowledge, this is the first attempt towards identifying the claim spans in a language different from English. We design the first data curation pipeline for the task of mCSI, which, unlike <cit.>, does not require manual annotation to create the training data.We collect data from various fact-checking sites and we automatically annotate the claim spans within the post.Using the pipeline, we create a novel dataset, named , containing 7K real-world claims from numerous social media platforms in six languages: English, Hindi, Punjabi, Tamil, Telugu, and Bengali. <Ref> showcases a few examples from our dataset. We report strong baselines for the mCSI task with state-of-the-art multilingual models. We find that joint training across languages improves the model performance when compared to alternative cross-lingual transfer methods like zero-shot transfer, or training on translated data, from a high-resource language like English. In this work, we make the following contributions: * We introduce the first automated data annotation and curation pipeline for the mCSI task. * We create a novel dataset, named , for the mCSI task in six languages. * We experiment with multiple state-of-the-art encoder-only language models and the generative large language models to achieve high performance on the proposed task. § RELATED WORK Efforts to combat misinformation and fake news have focused on claims in various sources.The existing body of work in this area can be broadly categorized into the following major groups: claim detection <cit.>, claim check-worthiness<cit.>, claim span identification <cit.>, and claim verification <cit.>.Being the precursor of several other downstream tasks, claim detection has garnered significant attention.Various methods have been proposed to tackle claim detection, aiming to identify statements that may contain claims <cit.>.In response to the escalating issue of false claims on social media, there has been a surge in the development of claim detection systems specifically designed to handle text from social media platforms <cit.>. Recently, <cit.> introduced the task of claim span identification where the system needs to label the claim-containing textual segments from social media posts, making claim detection systems more explainable through this task.While most existing methods to combat fake news are primarily tailored for English <cit.>, in recent times, there has been a surge in interest regarding the advancement of fact-checking techniques for various other languages.ClaimRank <cit.> introduced an online system to identify sentences containing check-worthy claims in Arabic and English.The CheckThat! 
Lab has organized several multilingual claim tasks over the past five years, progressively expanding language support and garnering an increasing number of submissions <cit.>.In their latest edition, <cit.> featured factuality tasks in seven languages: English, German, Arabic, Italian, Spanish, Dutch, and Turkish. <cit.> introduced X-FACT, a comprehensive multilingual dataset for factual verification of real-world claims in 25 languages. Unlike that work, here we focus on extracting the claim from a social media post, rather than fact-checking a claim.The task of claim span identification remains unexplored due to the lack of datasets in other languages. <cit.> developed a dataset of 7.5K manually annotated claim spans in tweets, named CURT; all the tweets and claim spans in that dataset are in English. Additionally, while there has been interest in claims in other languages, there is a notable lack of progress on Indian languages. Here, we aim to bridge this gap. § DATASET We follow a two-step pipeline to develop our dataset: (i) data collection and (ii) automated annotation.We present a high-level overview of our proposed data creation methodology in <Ref>.Below, we explain these steps in detail. §.§ Data Collection We observe in various fact-checking websites that professional fact-checkers, while investigating a given social media post or news article, first find the claim made in the post, which we call a normalized claim, and then they verify whether that claim is true, misleading, or false. This is the motivation for the CSI task as a precursor to fact-checking as it is a step in the fact-checking process as performed by humans. Thus, we leverage the efforts of fact-checkers and we collect data from numerous fact-checking websites that are recognized by the International Fact-Checking Network (IFCN).[<https://www.poynter.org/ifcn/>] We aim to create a dataset comprising claims made in social media and in multiple languages, with a focus on Indian languages. We scrape data from fact-checked posts in six languages: English, Hindi, Punjabi, Tamil, Telugu, and Bengali.We highlight that we deal with low-resource languages since we found only a couple of fact-checking websites that analyze social media posts in languages other than English.For each website, we scrape all the fact-checked posts[The data was scraped in May 2023.] with the help of a web scraping API.[<https://www.octoparse.com/>]Then, we collect the text of the social media post text and the normalized claim from the web page of each fact-checked post with the help of regular expressions based on the structure of the fact-checking website. Finally, we use various filtering rules to remove posts that are about videos, Instagram reels, or when their text is too short or excessively long. These rules help us to collect only the social media posts with a text modality. We provide more details about the process of data collection in <Ref>.§.§ Automated AnnotationWe label the claim-containing a textual segment within the social media post using the human-written normalized claim as a guidance from the previous step. The normalized claim can be relied on to be extremely trustworthy since it was manually written by professional fact-checkers. However, it does not have to be literally spelled out as part of the social media post. 
Having this normalized claim gives us a good guidance about where to look for the claim span, and we try to do this mapping automatically.As shown in the bottom row in <Ref>, this step includes two substeps: sentence selection and conversion of the normalized claim to the claim span. Both substeps use modules that support multiple languages and do not require human intervention.First, we look for the most relevant sentence that encapsulates the claim made in the post. We do this by computing a similarity score between the normalized claim and each of the post's sentences, and we select the sentence with the maximum score.Second, using<cit.>, we find the word tokens in the post sentence that align with the word tokens in the normalized claim. We then obtain the claim span as the sequence of word tokens, starting with the first aligned word token and ending with the last aligned word token in the sentence.We use Stanza <cit.> to perform sentence segmentation for English, Hindi, Tamil, and Telugu. For Punjabi and Bengali, we consider the complete post text as a single sentence since we did not find any publicly available sentence segmentation tools for these languages. While usingin conversion from the normalized claim to the claim span, we used the official repository of <cit.>. Recent works <cit.> have used word-alignment to produce silver labels in the target language (like Hindi) using gold labels available in the source language (like English). <cit.> used word alignments from , and then considered the longest contiguous sequence of aligned tokens in the translated text as the final projected gold labels. Taking the longest contiguous sequence is suitable for tasks where the target text, the gold labels, or both, are relatively short. However, in our mCSI task, the normalized claims and the post texts are quite long (see <Ref>). Thus, we took the sequence of words from the first to the last aligned word.We found that this yielded better performance than taking the longest contiguous sequence of aligned words in the social media post.Note that we empirically chose the most appropriate sentence similarity measure for sentence selection, after trying a variety of similarity measures.Tasks such as machine translation <cit.> and text summarization <cit.> require evaluation measures that take paraphrasing and synonyms into account while comparing the model's generated text to the gold reference text. We leverage these evaluation measures for sentence similarity. To evaluate the commonly used measures such as ROUGE <cit.>, METEOR <cit.> and BERTScore, we manually annotated the claim spans for 300 randomly sampled posts in the six languages. Then, we evaluated the automatically annotated claim spans when using different similarity measures against the manually annotated claim spans. The results are shown in <Ref>: we can see that BERTScore-Recall yields consistently better performance for finding the annotated spans. For Punjabi and Bengali, we only useddue to the lack of a sentence segmentation module and we observed high-quality F1 scores of 81.23% and 78.6%, respectively. Overall, our two-step data creation methodology yields a robust, scalable, and high-quality automatically annotated data for our multilingual claim span identification task. 
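A minimal sketch of this two-step annotation procedure is given below. The three helpers are placeholders for the tools named above (a sentence splitter such as Stanza, a BERTScore-recall similarity, and a multilingual word aligner); they are injected as arguments rather than being real APIs:

def annotate_claim_span(post_text, normalized_claim, split_sentences, similarity, align_words):
    """Two-step automated annotation: (1) pick the post sentence closest to the
    fact-checker's normalized claim; (2) project the claim onto that sentence via
    word alignment and keep the span from the first to the last aligned word."""
    sentences = split_sentences(post_text) or [post_text]   # fall back to the whole post
    best_sentence = max(sentences, key=lambda s: similarity(normalized_claim, s))
    sentence_tokens = best_sentence.split()
    claim_tokens = normalized_claim.split()
    pairs = align_words(claim_tokens, sentence_tokens)      # list of (claim_idx, sentence_idx)
    if not pairs:
        return None                                         # no alignment found; skip the example
    positions = [j for _, j in pairs]
    start, end = min(positions), max(positions)
    return " ".join(sentence_tokens[start:end + 1])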
§.§ Evaluation Sets and Dataset Analysis We created the evaluation sets with the help of linguistic experts in the six languages. We provided them with nearly 100 samples from the curated data in each language (400 in English), along with the detailed annotation guidelines for the CSI task from <cit.>. We asked them to annotate the claim spans in the social media posts under the guidance of the claims authored by professional fact-checkers. We created training and development splits in a ratio of 80:20 on the remaining curated data. For Telugu and Bengali, we only formed test sets as there were fewer examples available for these languages. <Ref> shows statistics about the dataset and the splits, and <Ref> shows a few examples from our X-CLAIM dataset. <Ref> further reports the length of the post text and the claim span. As the claim spans are generally concise and do not contain extra neighboring words, we observe that the claim spans are nearly half of the post text for all languages. § EXPERIMENTS Evaluation Measures: Following <cit.>, we address mCSI as a sequence tagging task. For evaluation, we use three measures, computed at the span level <cit.>: Precision (P), Recall (R), and F1-score. Models: We use state-of-the-art transformer-based <cit.> multilingual pretrained encoder-only language models such as mBERT <cit.>, mDeBERTa <cit.>, and XLM-RoBERTa (XLM-R) <cit.>. We encode each token of the post with IO (Inside-Outside) tags to mark the claim spans. Other encodings such as BIO, BEO and BEIO performed worse (see <Ref> for a detailed comparison of encodings). More details about the training are given in <Ref>. § RESULTS We carry out an exhaustive empirical investigation to answer the following research questions: * Does the model benefit from joint training with multiple languages? (<Ref>) * Do we need training data in low-resource languages when we have abundant data in high-resource languages?[We consider English to be a high-resource language.] (<Ref>) * Can large language models (LLMs) such as GPT-4 identify the claims made in multilingual social media? (<Ref>) * How does the automatically annotated X-CLAIM dataset compare to prior manually annotated datasets like CURT? (<Ref>) §.§ Training on Multilingual Social Media We train and compare two kinds of models: monolingual and multilingual models. In a monolingual setup, we train one model for each language using the available training data in the X-CLAIM dataset, whereas in a multilingual setup, we train a single model on the training data for all languages combined. We note that there is no monolingual model for Telugu and Bengali due to the lack of training data for these languages. However, we evaluate the multilingual model on them, as that model was trained on multiple languages. The performance of these models with different pretrained encoders is shown in <Ref>. We can see that the multilingual models outperform the monolingual models by 1.15% precision and 0.93% F1, averaged over all languages (except for Telugu and Bengali). Even though the recall drops by 0.45%, the improvement in F1 suggests that the model does benefit from joint training. We posit that the drop in recall and the gain in precision indicate that the model has become more careful when identifying the claims. §.§ Cross-lingual Transfer from English We use the English training data in two experimental settings and we compare them to the multilingual models. In the first setting, we leverage the strong cross-lingual transfer capabilities of pretrained multilingual models <cit.>. We take the monolingual models for English and test them on the remaining five languages. In this setting, we have zero-shot transfer from monolingual-English models.
In the second setting, which we call translate-train models, we translate the English training data to the target language and we train a model only on the translated data. To perform translation of social media posts, we use Google translate,[<https://translate.google.com/>] and we project the claim spans (in English), or the token labels, on the translated post using our automated annotation pipeline (see <Ref> for detail).Both the zero-shot transfer and the translate-train models are almost consistently worse than themodels (in terms of F1) for all five languages.The translate-train models show a drop of 1.19% F1, whereas zero-shot transfer models are 2.13% F1 behind . This offers strong evidence that the training data in low-resource languages helps over the training data in a high-resource language.Interestingly, we notice that zero-shot transfer models are consistently worse than translate-train ones when using mBERT and mDeBERTa, for all five languages. For instance, with mBERT, zero-shot transfer models are worse by 2.92% F1. However, with XLM-R, zero-shot transfer models are better than translate-train models by 1.15% precision and 0.64% F1.We believe that this is because XLM-R has stronger cross-lingual transfer capabilities, stemming from its larger pretraining data compared to mBERT and mDeBERTa.§.§ Evaluating the GPT Series LLMsWe experiment with several large language models (LLMs):(T-DV3),(GPT-3.5) and(GPT-4) on the mCSI task using the OpenAI API.[<https://platform.openai.com/docs/api-reference>]We prompted each LLM with each social post from the test sets in ourdataset and we asked the LLM to respond with the claim span.The generated response may contain words that are either not present in the post or are synonyms of words from the posts. Thus, we treated the response like a normalized claim (<Ref>) and we passed it through our automated annotation step (<Ref>) to create the corresponding claim span. We evaluated the predicted claim spans with respect to the gold claim spans. More details about this setup are given in <Ref>.Zero-shot Prompting. We experiment with four prompts that use no examples: , , , and . The exact prompt structure is given in <Ref> in the Appendix. <Ref> shows their performance when used with different LLMs on ourdataset.We noticed that the LLMs mostly responded in English even when asked to analyze a post in another language. One reason could be that the prompts do not explicitly specify the language the LLM should respond in. Since our automated annotation step is language-agnostic, the corresponding claim span is in the target language. To overcome this, we asked the LLM to respond in the target language with theprompt. Interestingly, and unlike GPT-3.5 and GPT-4, the performance of T-DV3 withprompt significantly dropped by 12-37% F1 (averaged over all languages except English) when compared to the other three prompts.This suggests that T-DV3 is weaker in a multilingual setup.We further find that GPT-4 is nearly always better than GPT-3.5 by an average of 4.23% precision and 1.5% F1 over the four prompts. GPT-3.5 consistently outperformed T-DV3 by an average of 35.96% recall and 27.63% F1, but it lags behind by 0.5% in terms of precision. In-Context Learning. Here, we give the model a few labeled examples as part of the prompt as shown in <Ref> of the Appendix. Since GPT-4 outperformed the other two LLMs and showed the best performance with(<Ref>), we experimented with in-context learning with GPT-4 andprompt. 
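A rough sketch of this prompting-and-projection protocol is shown below; the prompt wording is illustrative (the exact prompt templates are in the Appendix), `ask_llm` stands for a call to the OpenAI API with temperature 0, and `to_span` stands for the automated annotation step that maps the free-form response back onto the post:

def build_prompt(post_text, language, examples=()):
    """Illustrative prompt; the actual prompt templates used in this work are in the Appendix."""
    lines = [f"Identify the claim made in the following social media post. "
             f"Respond in {language} using words from the post."]
    for example_post, example_span in examples:              # optional in-context examples
        lines += [f"Post: {example_post}", f"Claim: {example_span}"]
    lines += [f"Post: {post_text}", "Claim:"]
    return "\n".join(lines)

def evaluate_llm(test_set, language, ask_llm, to_span, examples=()):
    """test_set: iterable of (post_text, gold_span) pairs; returns (predicted, gold) pairs."""
    results = []
    for post_text, gold_span in test_set:
        response = ask_llm(build_prompt(post_text, language, examples))
        predicted_span = to_span(post_text, response)        # treat the response as a normalized claim
        results.append((predicted_span, gold_span))
    return results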
For Telugu and Bengali, we use examples from the translated data (<Ref>) due to the lack of training data in these languages. The results are shown in <Ref>. We see that in-context learning consistently improves the F1 score over zero-shot prompting in all six languages. With more examples shown, the performance increased in English, Hindi and Punjabi, at the cost of more computation time. We find that 10-shot in-context learning improved the performance by an average of 2.78% F1 over the six languages in comparison to zero-shot prompting. Comparing mDeBERTa and GPT-4. We compared the best-performing fine-tuned encoder-only language model to the best-performing generative LLM. The multilingual mDeBERTa model and GPT-4 yielded the best results for most languages, as reported in <Ref>, <Ref>, and <Ref>. In the case of GPT-4, the best setting uses the best-performing zero-shot prompt with 10-shot in-context learning for the six languages. <Ref> compares the two models in terms of F1 scores; we further offer a comparison in terms of precision and recall in <Ref> of the Appendix. We find in <Ref> that the multilingual mDeBERTa model outperforms GPT-4 by 2.07% F1, averaged over the six languages. GPT-4 shows competitive performance with mDeBERTa in English, Hindi and Punjabi. For the remaining three languages, mDeBERTa outperforms GPT-4 by a large margin of 2-7% F1. This suggests that the LLMs show strong performance on high-resource languages like English, but still lag behind smaller fine-tuned LMs on low-resource languages such as Bengali. §.§ Comparing X-CLAIM and CURT We trained mDeBERTa on the CURT dataset <cit.>, containing tweets in English, and we compared it to the English monolingual model (trained with mDeBERTa on the English data in X-CLAIM) on the test sets for the six languages in the X-CLAIM dataset. We show the F1 scores for both models in <Ref> and we report the precision and the recall scores in <Ref> in the Appendix. The mDeBERTa model fine-tuned on the X-CLAIM English data performs competitively in English with the CURT-trained model and shows a 3.52% average F1 gain over the remaining five languages. Note that CURT is manually annotated and is twice as large as the English part of the X-CLAIM dataset. This offers empirical evidence of better model generalization when training on the X-CLAIM dataset compared to the CURT dataset. § ERROR ANALYSIS In this section, we qualitatively analyze the errors made by the best-performing multilingual mDeBERTa model. To provide insights on how LLMs can be improved for this task, we also discuss the errors made by GPT-4 in its best-performing setting of 10-shot in-context learning. We analyzed the predictions on the test examples in English and Hindi, and we report the kinds of errors made by the two language models in <Ref>. Below, we discuss the results of the analysis. English.
In the first post in <Ref>, both models deviate from the gold claim span.GPT-4 model correctly identifies the presence of the claim but inadvertently veers away from the central check-worthy assertion and focuses on the secondary claim.On the other hand, the mDeBERTa model includes information about moisture and bacteria in the mask, but contains several grammatical errors and lacks clarity.In particular, the phrase `every day day legionnaires disease' is confusing and doesn't convey a clear message.Both models provide similar claim spans for the second social media post, capturing the central assertion accurately.However, mDeBERTa contains the extra words `pregnancy your' at the beginning that are not present in the gold span.These extra words introduce confusion and do not accurately represent the claim made in the social media post.Hindi. Claim span identification in other languages is more complicated than in English due to the lack of proper guidelines pertaining to their linguistic characteristics. In the first example, GPT-4 almost accurately predicted the span, missing the first word (Mrs.) in the beginning.While mDeBERTa predicted both the claim and the premise, defying the very purpose of the task, which is to extract precise claim phrases from the post.In the second post, both models performed well overall.However, we observe a similar issue as for English: the inclusion of additional phrases alongside the claim spans, which can potentially detract from the clarity and precision of the claim. This indicates that these models struggle to make precise decisions about claim boundaries. We can conclude that for both languages, the models can identify the claim but might propose wider boundaries, including extra words.§ CONCLUSION AND FUTURE WORK We proposed a novel automated data annotation methodology for multilingual claim span identification. Using it, we created and released a new dataset called , which consists of real-world claim spans, and social media posts containing them, collected from numerous social media platforms in six languages: English, Hindi, Punjabi, Tamil, Telugu, and Bengali. Using state-of-the-art multilingual models, we established strong baselines based on encoder-only and generative language models.Our experiments demonstrated the benefits of multilingual training when compared to other cross-lingual transfer methods such as zero-shot transfer, or training on the translated data, from a high-resource language like English.We observed lower performance for GPT-style generative LLMs when compared to smaller fine-tuned encoder-only language models and we discussed their error analysis in the spirit of improving the LLMs on this task.Our work opens many important research questions: (1) How to obtain real-world claims without relying on fact-checkers analysis? (2) How to improve the understanding of LLMs about claims and social media in low-resource languages? (3) How to automatically curate multiple check-worthy claims made in the post? (4) How to improve the evaluation metric for the mCSI task? and (5) How to expand the CSI task to other low-resource languages?We plan to address these research questions in future work.§ LIMITATIONS Ourdataset for the mCSI task is limited to six languages. We do not know how well the developed systems will perform in languages that are not considered in this work. 
Moreover, the proposed dataset handles only the primary claim in a given social media post and ignores any other potentially check-worthy claims that the post might contain. In practice, a post may contain multiple check-worthy claims.

§ ETHICS

Broader Impact: Our dataset and model will help fact-checkers filter out extraneous information, thus saving them significant amounts of time, effort and resources.

Data: We place the utmost importance on user privacy. As a result, we have no intention of disclosing any information about the users. The data we curated is solely for research purposes, ensuring that user confidentiality and privacy are protected.

Environmental Impact: It is critical to acknowledge the environmental consequences of training large language models. In our case, we mitigate this concern to some extent by focusing primarily on fine-tuning pretrained models rather than training them from scratch.

(Appendix)

§ DATA COLLECTION

Various fact-checking websites analyze social media posts, news articles, and other information sources that may spread misleading information. We confine our data collection to those websites that meet the following requirements. First, the website should have fact-checked numerous social media posts, at least 100, so that we can have a reasonably sized dataset. Second, it should have investigated posts containing text. We find that many social media posts investigated by fact-checkers have their claim encapsulated in a modality other than text, such as an image or a video. The fact-checkers manually find the claims made in the posts, which we call the normalized claim. Our last requirement is that the fact-checking website should provide the normalized claim on the webpage of the fact-checked post. We find that there are only a couple of fact-checking websites that have investigated social media posts in low-resource languages and that meet the requirements discussed above. The website names, along with the number of fact-checked posts scraped from them, are reported in <Ref>. For English, we collect data from ThipMedia,[<https://www.thip.media/>] FullFact,[<https://fullfact.org/>] Snopes,[<https://www.snopes.com/>] PolitiFact,[<https://www.politifact.com/>] Factly,[<https://factly.in/>] and Vishvasnews.[<https://www.vishvasnews.com/>] We use Vishvasnews for the remaining languages, along with Aajtak[<https://www.aajtak.in/>] for Hindi alone. We find that there are relatively fewer posts in Telugu and Bengali than in the other languages, highlighting the difficulty of creating data for these extremely low-resource languages. We identify the structure of the webpage for each fact-checking website and write rules (e.g., regular expressions) to collect the post text and the normalized claim. Once the post text and the normalized claim are collected, we pass the pair through various noise-removal filters so that noisy instances (e.g., ones that do not meet our requirements but slipped through the previous steps) are removed from the data. These include removing instances where the post text or the claim contains words such as video, photo, or reel; we find that this rule is almost always correct. Further, we remove data points where the post or the claim is shorter than 3 words (typically erroneously scraped text) or longer than 700 words (which read more like news articles). A minimal sketch of these filters is given below.
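The site-specific scraping rules are not reproduced here, but the noise-removal filters just described can be summarized by a short sketch. The keyword list, the threshold values and the helper names below are illustrative assumptions rather than the exact rules used to build the dataset.

```python
import re

# Illustrative keyword list; the appendix mentions words like "video", "photo", "reel".
NON_TEXT_KEYWORDS = {"video", "photo", "reel"}
MIN_WORDS, MAX_WORDS = 3, 700  # word-count thresholds stated in the appendix


def mentions_non_text_modality(text: str) -> bool:
    """True if the text refers to a non-text modality; such pairs are dropped."""
    words = set(re.findall(r"\w+", text.lower()))
    return bool(words & NON_TEXT_KEYWORDS)


def length_ok(text: str) -> bool:
    """Keep texts that are at least MIN_WORDS and at most MAX_WORDS words long."""
    n_words = len(text.split())
    return MIN_WORDS <= n_words <= MAX_WORDS


def keep_pair(post_text: str, normalized_claim: str) -> bool:
    """Noise-removal filter applied to a scraped (post text, normalized claim) pair."""
    for text in (post_text, normalized_claim):
        if mentions_non_text_modality(text) or not length_ok(text):
            return False
    return True


if __name__ == "__main__":
    print(keep_pair("Drinking hot water cures the flu, experts say.",
                    "Hot water cures the flu."))           # True: kept
    print(keep_pair("Watch this video before it is deleted!",
                    "A viral video shows the incident."))  # False: non-text modality
```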
These filtering steps remove only 2.5% of the total data collected, averaged across the six languages.

§ MODEL TRAINING DETAILS

We train our models using the Adam optimizer <cit.> with a weight decay of 0, β_1=0.9 and β_2=0.999. All experiments are carried out on a single A100 (40 GB) GPU. We use and adapt the code of <cit.> for our task. The models are trained with three different random seeds and we report the median of the three evaluation runs, since we observed a high variation of scores across runs. We perform hyperparameter tuning for the learning rate and the batch size on the English data and use the same hyperparameters for the data of the remaining five languages. Since the base transformer model is pretrained on a large corpus of text and requires less training, we use a smaller learning rate of 1e-5 for it, but a larger learning rate of 3e-4 for the token-classifier network. We use a batch size of 32 for training mBERT and mDeBERTa, and a smaller batch size of 16 for the larger model, XLM-R. The maximum sequence length for the three encoder-only language models is set to 512 to avoid initializing and training new positional embeddings. We use early stopping with a patience of 7 epochs to find the best model checkpoint according to the best F1 score on the development set. The development set differs across the training methodologies. For models trained on a single language, we use the development data of the target language, whereas for multilingual models we combine the development sets of all languages. The translate-train models use the development data in the target language when available (Hindi, Punjabi, and Tamil) and use the translated English development set for Telugu and Bengali. We provide the number of trainable parameters of the pretrained encoder-only language models in <Ref>. For training on English data, XLM-R consumes nearly 1 hour of GPU runtime, whereas mBERT and mDeBERTa take nearly 0.5 hours.

§ MODELLING DETAILS

The encoder-only language models are trained as sequence taggers, where the model needs to predict the correct label for each token in the post text. A randomly initialized feed-forward neural network is placed on top of the pretrained encoder as a token-classifier network. It takes as input the contextualized token embeddings (output by the pretrained encoder) and produces a probability distribution over the label space. The cardinality of the label space depends on how the tokens are encoded. We experiment with the token-level encoding schemes IO, BIO, BEO and BEIO. We train four XLM-R models, one with each encoding, on the English training data of our dataset and compare their performance on the corresponding English test set. The scores are reported in <Ref>: IO encoding shows the best F1 performance among the different encoding schemes.

§ PROMPTING THE LARGE LANGUAGE MODELS (LLMS) IN THE GPT SERIES

We use the OpenAI API and evaluate the GPT-series models on the multilingual claim span identification task through prompting. The prompts used in zero-shot prompting are provided in <Ref>. The decoding temperature is set to 0 and we use the default maximum response length. All the GPT-series LLMs were prompted from October 16, 2023 to October 22, 2023.
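As a rough illustration of the prompting setup above, the sketch below queries a chat-completion endpoint with temperature 0. The client version, model name and prompt wording are assumptions made for illustration; the exact prompts used for the task are the ones listed in the appendix tables.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client; reads OPENAI_API_KEY from the environment

client = OpenAI()

# Illustrative zero-shot prompt; not the exact wording used in the paper.
PROMPT_TEMPLATE = (
    "Identify the single check-worthy claim made in the following social media post. "
    "Return the claim span exactly as it appears in the post.\n\n"
    "Post: {post}\n"
    "Claim span:"
)


def extract_claim_span(post: str, model: str = "gpt-4") -> str:
    """Query a chat model with deterministic decoding (temperature 0)."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # the default maximum response length is left unchanged
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(post=post)}],
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    print(extract_claim_span("Garlic water cures COVID-19, doctors confirm."))
```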
http://arxiv.org/abs/2310.18205v1
{ "authors": [ "Shubham Mittal", "Megha Sundriyal", "Preslav Nakov" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231027152812", "title": "Lost in Translation, Found in Spans: Identifying Claims in Multilingual Social Media" }
The differential bundles of the geometric tangent category of an operad

Marcello Lanfranchi
========================================================================

Affine schemes can be understood as objects of the opposite of the category of commutative and unital algebras. Similarly, affine schemes over an operad can be defined as objects of the opposite of the category of algebras over that operad. An example is the opposite of the category of associative algebras. The category of operadic schemes of an operad carries a canonical tangent structure. This paper aims to initiate the study of the geometry of operadic affine schemes via this tangent category. For example, we expect the tangent structure over the opposite of the category of associative algebras to describe algebraic non-commutative geometry. In order to initiate such a program, the first step is to classify differential bundles, which are the analogs of vector bundles for differential geometry. In this paper, we prove that the tangent category of affine schemes of the enveloping operad over an operadic affine scheme A is precisely the slice tangent category over A of operadic affine schemes. We are going to employ this result to show that differential bundles over an operadic affine scheme A are precisely A-modules in the operadic sense.

Acknowledgements. We want to thank Sacha Ikonicoff and Jean-Simon Lemay for the work done together in <cit.> which led to this paper, and for the informal discussions we had around this topic. We are also thankful to Dorette Pronk and Geoffrey Cruttwell (PhD supervisors) for the discussions, advice, support and precious help during the realization of this article.

§ INTRODUCTION

Cruttwell and Lemay showed that some key geometrical features of affine schemes, in the sense of algebraic geometry, can be captured by defining a suitable tangent structure (cf. <cit.>). A tangent structure over a category provides a categorical axiomatization for the tangent bundle functor of differential geometry. Concretely, a tangent structure (cf. <cit.>) consists of an endofunctor T of the category together with a projection p: T ⇒ 𝕀, a zero section z: 𝕀 ⇒ T of the projection, and a sum morphism s: T_2 ⇒ T, whose domain T_2 is the pullback of the projection along itself, so that for every object A, p: TA → A becomes an additive bundle (cf. <cit.>), that is a commutative monoid in the slice category over A. Moreover, a tangent structure carries two other structures: a vertical lift l: T ⇒ T^2, where T^2 denotes the composite of T with itself, and a canonical flip c: T^2 ⇒ T^2.

The vertical lift defines an abstract version of the Euler vector field and, by satisfying a key universal property (cf. <cit.>), introduces a notion of linearity for morphisms of differential bundles (cf. <cit.>). Moreover, when the tangent category has negatives (cf. <cit.>), this universal property is also used to equip the set of sections of the projection, i.e. the vector fields, with Lie brackets. Finally, the canonical flip encodes the symmetry of the Hessian matrix. Tangent categories (with negatives) were first introduced by Rosický (<cit.>).
Recently, the ideas of Rosický were revisited and generalized by Cockett and Cruttwell (<cit.>) and expanded into a flourishing research program. In the tangent category of affine schemes described by Cruttwell and Lemay, the tangent bundle functor is the functor that maps a commutative algebra A into the symmetric algebra of the A-module of Kähler differentials Ω A of A, i.e. A_AΩ A (cf. <cit.>). One striking result of their paper is the complete classification of differential bundles in this tangent category. Differential bundles, first introduced by Cockett and Cruttwell (cf. <cit.>), play the same role as vector bundles in the category of smooth finite-dimensional manifolds for an abstract tangent category (cf. <cit.>). Interestingly, Cruttwell and Lemay show that the category of differential bundles and linear morphisms over an affine scheme A is equivalent to the opposite of the category of modules over A.The author of this paper together with Sacha Ikonicoff and Jean-Simon Lemay extended the idea of studying the algebraic geometry of affine schemes with tangent categories to a new plethora of contexts. In <cit.>, they showed that the category of algebras _ of a (symmetric) operadover the category of R-modules (for a commutative and unital ring R) comes equipped with a tangent structure. In the following, we refer to this as the algebraic tangent structure of the operadwhich will be denoted by , or simply bywhen the operadis clear from the context. Moreover, the corresponding tangent category will be denoted as ()(̄_,). In the aforementioned paper, it was proven that every operad comes with a coCartesian differential monad (cf. <cit.>) and that this tangent category is precisely the tangent category of algebras of this monad.Crucially,admits an adjoint tangent structure (cf. <cit.>) which makes the opposite of the category of operadic algebras into a tangent category. In the following, we refer to this tangent structure as the geometric tangent structure of the operadwhich will be denoted by , or simply bywhen the operadis clear from the context. This tangent category can be interpreted as the tangent category of affine schemes over the operad , and will be denoted by ()(̄_^,).To properly appreciate the relevance of this result, notice that before the article <cit.>, the most revelant available examples of tangent categories were differential geometry, synthetic differential geometry, algebraic geometry, commutative rings etc. In particular, there was no example of non-commutative geometry completely described by tangent category theory. The existence of the geometric tangent category () of the associative operad , whose algebras are associative algebras, proves that tangent categories are suitable to describe a wider variety of geometries, including non-commutative geometry. In Example <ref> we discuss in detail this particular case, with a comparison with the commutative one. In the same paper, differential objects (cf. 
<cit.>) of the geometric tangent category of an operadwere classified and proved to be in bijective correspondence with left (1)-modules, where (1) denotes the unital and associative ring defined over the first entry of the operadand whose unit and multiplication are defined by the unit and the multiplication of the operad.In the same way as the tangent category described by Cruttwell and Lemay captures some key geometrical features of (commutative and unital) affine schemes, we expect the geometric tangent category of an operadto capture similar geometrical properties of the affine schemes over . The goal of this paper is to investigate this assumption by covering the intimate relationship between operads and their corresponding geometric tangent categories. One of the main results of the paper will be the complete classification of differential bundles over operadic affine schemes. We will reinterpret Cruttwell and Lemay's result as a special case of a larger phenomenon: the category of differential bundles and linear morphisms over an operadic affine scheme is equivalent to the opposite of the category of modules of the affine scheme.To prove this, we will first show another key result: the geometric tangent category of the enveloping operad over a -algebra A is equivalent to the slice tangent category over A of the geometric tangent category of . The classification of differential bundles will follow directly from this insight: differential bundles are precisely differential objects in the slice tangent category.§.§ OutlineThe paper is organized as follows. In Section <ref>, we first recall the main result of <cit.> which establishes that every operadproduces two tangent categories: the algebraic and the geometric tangent categories of . Once this is established, we show that the operation which takes an operad to its associated tangent categories is functorial (Section <ref>). In particular, we provide four distinct functors from the category of operads to the category of tangent categories. In Section <ref> we recall the notion of the slice tangent category of a tangent category over an object and we give a new characterization of this construction. In particular, in Section <ref> we show that the operation which takes a tangent pair to its associated slice tangent category extends to a right adjoint of the functor , which sends a tangent category with terminal object to the tangent pair formed by the tangent category and its terminal object. The main result of the paper is proved in Section <ref>. We first recall the definition of the enveloping operad of an operadic pair and then prove that the geometric tangent category of the enveloping operad  of the operadic pair (;A) is equivalent to the slice tangent category over the geometric tangent category of the operadover A. In Section <ref> we employ this result to classify the differential bundles over an operadic affine scheme as modules over the affine scheme. Finally, we dedicate Section <ref> to exploring some ideas for future work.§.§ BackgroundWe assume the reader is comfortable with the theory of symmetric operads over a symmetric monoidal category (see <cit.> for reference), and with fundamental notions of category theory like functors, adjunctions, limits, colimits, pullbacks, pushouts etc. We also assume the reader is knowledgeable about basic notions of tangent category theory (see <cit.> for reference). 
Even if we summarize in the first section the main results of the previous paper, we also recommend reading <cit.> to fully appreciate the whole story.§.§ Notation and naming conventionsWe denote by R a fixed commutative and unital ring and by _R the associated category of left R-modules. For an operadwe refer to a symmetric operad over the symmetric monoidal category _R, where the symmetric monoidal structure is defined by the usual tensor product over R, simply denoted by . The symmetric group that acts over n distinct elements is denoted by _n. The generators of the free -algebra over an R-module M are denoted by (μ;v_1 v_m), where μ∈(m), v_1 v_m∈ M. Given μ∈(m) and μ_1∈(k_1) μ_m∈(k_m), for positive integers m,k_1 k_m, the operadic composition of μ with μ_1 μ_m is denoted by μ(μ_1 μ_m). The unit of the operadis denoted by 1_; the monad associated withis denoted by _, with γ_ for the composition. We denote bythe category of symmetric operads over _R and their morphisms.The category of -algebras is denoted by _. Given a -algebra A, the action of the abstract m-ary operation μ∈(m) over m elements a_1 a_m of A induced by the structure map of A is denoted by μ_A(a_1 a_m) and when A is clear from the context simply by μ(a_1 a_m).The category of modules (in the operadic sense) over a -algebra A is denoted by _A, or simply by _A whenis clear from the context. We will write expressions like ∑_k=1^mμ(a_1 x_k a_m) to denote the sum over the index k of μσ̇_k(a_1 a_k-1,a_k+1 a_m,x_k) where σ_k denotes the cylic permutation (k k+1… m), where x_k∈ M, a_1 a_k-1,a_k+1 a_m∈ A and M is an A-module. Given a tangent category (,), we denote the tangent bundle functorby using the same letter as used for the tangent structure. For the projection, the zero morphism, the sum morphism, the lift, the canonical flip, and the negation (in case of a tangent category with negatives) we will use the letters p,z,s,l,c and n, respectively. When the tangent structure is clear from the context, we will simplify the notation by omitting the superscript .Morphisms of tangent categories come in different flavours. We need to distinguish among them therefore we introduce the following convention. Given two tangent categories (,) and (','), we refer to a lax tangent morphism(F,α)(,)→(',') as a functor F→' together with a natural transformation α Fø⇒'ø F compatible with the tangent structures (cf. <cit.>). We refer to α as the lax distributive law of the morphism.By a colax tangent morphism(G,β)(,)(',') we mean a functor G→' together with a natural transformation β'ø G⇒ Gø compatible with the tangent structures (the compatibilities are similar to the ones of a lax tangent morphism, where the distributive law goes in the opposite direction). We refer to β as the colax distributive law of the morphism. We also adopt the notationto denote colax tangent morphisms.By a strong tangent morphism we mean a lax tangent morphism where the distributive law is an isomorphism. Notice that the underlying functor of a strong tangent morphism together with the inverse of the lax distributive law defines a colax tangent morphism. Finally, by a strict tangent morphism we refer to a strong tangent morphism whose distributive law is the identity. Since, in this case, the distributive law is trivial, we will omit it completely in the notation and simply refer to the functor as the strict tangent morphism.We denote bythe category of tangent categories and lax tangent morphisms. 
When required, we abuse notation and denote bythe 2-category with the same objects and 1-morphisms and whose 2-morphisms are natural transformations compatible with the lax distributive laws. Similarly, we denote by _≅ the category of tangent categories and strong tangent morphisms, and finally, by _= the category of tangent categories and strict tangent morphisms. Adopting the same naming convention used in <cit.>, a categoryis called semi-additive ifhas finite biproducts, which means that it admits finite products, finite coproducts and the canonical morphism between n products and n coproducts is an isomorphism. We denote by ⊕ the biproducts of . In particular, in _R, given two R-modules X and Y, we denote the elements of X⊕ Y as pairs (x,y) for each x∈ X and y∈ Y. In such a category, the empty biproduct is denoted by 0 and is the zero object, which is an object that is both initial and terminal. Note that for a category to be semi-additive is equivalent to being enriched over the category of commutative monoids. A semi-additive categorycomes equipped with a canonical tangent structurewhose tangent bundle functoris the diagonal functor X=X⊕ X, the projection is the projection on the first coordinate, i.e. p=π_1 X⊕ X→ X, the zero morphism is the injection in the first coordinate, i.e. z=ι_1 X→ X⊕ X, the n-fold pullback of the projection along itself is (isomorphic to) the n+1 tuple _n X=X⊕ X⊕…⊕ X, the sum morphism is the identity in the first coordinate and the sum on the second and the third, i.e. s X⊕ X⊕ XX⊕ X; the vertical lift maps the first coordinate to the first one and the second coordinate to the fourth one, i.e. l X⊕ XX⊕ X⊕ X⊕ X; the canonical flip flips the internal coordinates, i.e. c X⊕ X⊕ X⊕ XX⊕ X⊕ X⊕ X, where τ=π_2øι_1,π_1øι_2 X⊕ Y→ Y⊕ X; finally, ifis additive, i.e. Ab-enriched, then the negation morphism is the identity on the first coordinate and the negation on the second, i.e. n X⊕ XX⊕ X. In this paper, we refer to the tangent structureinduced by additivity over _R as the canonical tangent structure and to (_R,) as the canonical tangent category. For two composable morphisms f A→ B and g B→ C of a category , we denote by gø f their composition. We will also often use the diagrammatic notation, i.e. fgḡø f. For functors, we adopt a similar notation with a single variation: when an object X∈ is specified, we denote by GFX the object (Gø F)(X) and similarly for morphisms. An adjunction between two functors F→' and G'→ with unit η and counit ϵ is denoted by (η,ϵ) F⊣ G. A similar notation will be adopted for conjunctions in the context of double categories.§ THE GEOMETRY OF AFFINE SCHEMES OVER AN OPERADIn <cit.>, the author of this paper, Sacha Ikonicoff, and Jean-Simon Lemay showed that every operad provides a tangent structure over the category of operadic algebras (<cit.>) as well as a tangent structure over the opposite of the same category (<cit.>). 
Since this is the starting point for this paper, we dedicate this section to recall this construction.Concretely, the tangent structureover the category of -algebras is defined as follows:tangent bundle functor The tangent bundle functor _→_ maps every -algebra A to the semi-direct product A⋉ A, which is the -algebra over the R-module A× A and with structure map defined as follows:μ((a_1,b_1) (a_m,b_m))(μ(a_1 a_m),∑_k=1^mμ(a_1 b_k a_m))projection The projection p A→ A projects along the first component, that is:p(a,b)ān-fold pullbacks The n-fold pullback along the projection of the tangent bundle functor _n_→_ maps every -algebra into the semi-direct product A⋉(A A). Moreover, the k-th projection π_k_n A→ A is defined as follows:π_k(a;b_1 b_n)(̄a,b_k)zero morphism The zero morphism z A→ A injects into the first component, that is:z(a)(̄a,0)sum morphism The sum morphism s_2A→ A is defined by:s(a;b_1,b_2)(̄a;b_1+b_2)vertical lift The vertical lift l A→^2A is defined by:l(a,b)(̄a,0,0,b)canonical flip The canonical flip c^2A→^2A is defined by:c(a_1,b_1,a_2,b_2)(̄a_1,a_2,b_1,b_2)negation The negation morphism n A→ A is defined by:n(a,b)(̄a,-b)On the other hand, the tangent structureover the opposite of the category of the category of -algebras is defined as follows:tangent bundle functor The tangent bundle functor _^→_^ maps a -algebra A to the -algebra _A A, where _A_A→_ is the functor that maps a A-module M to the free -algebra under A (cf. <cit.>) and A is the module of Kähler differentials of A. Concretely, A is the P-algebra generated by all elements a of A and symbols ̣̂ a, for each a∈ A such that the following relations are fulfilled:μ_ A(a_1 a_m)=μ_A(a_1 a_m)̣̂(ra+sb)=ṛ̂ a+ṣ̂ ḅ̂(μ(a_1 a_m))=∑_k=1^mμ(a_1 ̣̂ a_k a_m)for every r,s∈ R and a,b,a_1 a_m∈ A. In the following, we will omit the superscriptin ̣̂ whenever the operadis clear from the context. projection The projection, regarded as an _-morphism, p A→ A injects a∈ A into a∈ A.n-fold pullbacks The n-fold pushout (in _) along the projection of the tangent bundle functor _n_^→_^ is the -algebra generated by all the elements a of A and by symbols _̣1a,_̣2a _̣na, for each a∈ A, such that the following relations are fulfilled:μ__nA(a_1 a_m)=μ_A(a_1 a_m)_̣i(ra+sb)=r_̣ia+s_̣ib_̣i(μ(a_1 a_m))=∑_k=1^mμ(a_1 _̣ia_k a_m)for every r,s∈ R, a,b,a_1 a_m∈ A, and for every i=1 n. Moreover, the injections ι_k A→_nA map each a to a and ạ to _̣k a, for every k=1 n.zero morphism The zero morphism, regarded as a _-morphism, z A→ A projects each a to itself a and each ạ to 0.sum morphism The sum morphism, regarded as a _-morphism, s A→_2A maps each a to a and each ạ into _̣1a+_̣2a.vertical lift The vertical lift, regarded as a _-morphism, l^2A→ A maps each a∈ A to a, ạ and '̣a to 0 and '̣ạ to ạ.canonical flip The canonical flip, regarded as a _-morphism, c^2A→^2A maps each a to a, ạ to '̣a, '̣a to ạ and '̣ạ to '̣ạ.negation The negation morphism, regarded as a _-morphism, n A→ A maps each a to a and each ạ to -ạ.In the following, given an operad , we refer to ()(̄_,) as the algebraic tangent category ofand to ()(̄_^,) as the geometric tangent category of . §.§ The functoriality of the algebraic and the geometric tangent categoriesSo far we recapped the main result of <cit.>: every operad produces two distinct tangent categories, () and (). 
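Before turning to morphisms of operads, it may help to record what the two constructions just recalled return in the simplest case. The following worked example takes the commutative operad, whose algebras are commutative R-algebras, and uses T_alg and T_geo purely as local names for the algebraic and geometric tangent bundle functors; the identifications below are standard facts stated here only for orientation.

\[
  T_{\mathrm{alg}}(A) \;=\; A \ltimes A \;\cong\; A[\epsilon]/(\epsilon^{2}),
  \qquad (a,b) \;\longmapsto\; a + b\,\epsilon,
\]
\[
  T_{\mathrm{geo}}(A) \;\cong\; \mathrm{Sym}_{A}\!\bigl(\Omega_{A/R}\bigr),
\]

so the algebraic tangent bundle of a commutative algebra A is the ring of dual numbers over A, while the geometric tangent bundle is the symmetric A-algebra on the module of Kähler differentials: the generators adjoined to A by the construction above correspond to the Kähler differentials da, recovering the tangent category of affine schemes of Cruttwell and Lemay mentioned in the introduction.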
In this section, we explore the relationship between morphisms of operads and the corresponding morphisms of tangent categories; we will also show that this operation is functorial.First, we briefly recall that a morphism of operads φ→ is a sequence of R-linear morphisms {φ_n(n)→(n)}_n∈, compatible with the operadic structures, that is, given μ∈(m),μ_1∈(k_1) μ_m∈(k_m):φ_1(1_)=1_φ_M(μ(μ_1 μ_m))=φ_m(μ)(φ_k_1(μ_1) φ_k_m(μ_m))where Mk̄_1+…+k_m. For the sake of simplicity, in the following, we will omit the index and simply denote by φ any of the morphisms in the sequence. A morphism of operads induces a forgetful functor φ^_→_, which sends a -algebra B into the -algebra φ^B over the R-module underlying B and with structure map defined by:μ_φ^B(b_1 b_m)(̄φ(μ))_B(b_1 b_m)The functor φ^ admits a left adjoint φ_!_→_, which sends each -algebra A to the -algebra φ_!A obtained by identifying the two structure maps induced by the operadic composition and by the structure map of A over the free -algebra over the underlying R-module of A. Concretely, φ_!A can be understood as the coequalizer:__ A __ A _ A φ_!A[dashed, from=1-3, to=1-4] ["__φ A", from=1-1, to=1-2] ["γ_", from=1-2, to=1-3] ["_θ"', bend right, from=1-1, to=1-3]where θ is the structure map of A.As already mentioned in the introduction, the existence of the algebraic tangent category () of an operadis a consequence of the fact that the monad _ associated tocarries a differential combinator ∂_, so that _ becomes a coCartesian differential monad over _R (see <cit.> for details). As shown by <cit.>, over a semi-additive categorythere is a bijective correspondence between coCartesian differential monads overand tangent monads over the tangent category (,), whereis defined by the existence of biproducts in(cf. <cit.>).We recall that a tangent monad, first introduced in <cit.>, is a monad in the 2-categoryof tangent categories, lax tangent morphisms, and tangent natural transformations, which are natural transformations compatible in an obvious way with the distributive laws. We also recall that the distributive law associated to a tangent monad lifts the tangent structure over the base tangent category to the category of algebras of the monad. Concretely, the tangent bundle functor Ŝ_S→_S over the category of algebras of a tangent monad (S,α) over the canonical tangent category (_R,) sends an S-algebra A with structure map θ SA→ A into the S-algebra A with structure map S A SA A, where α Sø⇒ø S is the lax distributive law of S.This is precisely the origin of the tangent structure of (), which is lifted from the canonical tangent structure on _R. On the other hand, the tangent structure of () is the adjoint tangent structure of the algebraic one (see <cit.> for details). Concretely, this means that the tangent bundle functor , regarded as an endofunctor over _, is the left adjoint ofand that the projection, the zero morphism, the sum morphism, the vertical lift, the canonical flip and the negation ofare the mates of the corresponding natural transformations ofalong the adjunction ⊣.The intimate connection between operads and tangent monads plays a crucial role in understanding the relationship between morphisms of operads and corresponding morphisms of tangent categories. 
It is not hard to see that a morphism of operads φ→ induces a morphism of the corresponding tangent monads φ(_,α_)→(_,α_), where we recall that the distributive law α__ø⇒ø_ associated to an operadis the natural transformation:α_(μ;(x_1,y_1) (x_m,y_m))=((μ;x_1 x_m),∑_k=1^m(μ;x_1 y_k x_m))In this context, a morphism of tangent monads φ(S,α)→(W,β) over (_R,) consists of a natural transformation φ S⇒ W, compatible with the lax distributive laws α and β, that is:SøWø ø S ø W["φ_", from=1-1, to=1-2] ["φ"', from=2-1, to=2-2] ["α"', from=1-1, to=2-1] ["β", from=1-2, to=2-2]Moreover, since the tangent structure Ŝ over the category of algebras _S of a tangent monad (S,α) is lifted along the distributive law α from the base tangent category (_R,), a morphism of tangent monads φ(S,α)→(W,β) induces a strict tangent morphism φ^(_W,Ŵ)→(_S,Ŝ), whose underlying functor is the forgetful functor which sends a W-algebra B with structure map ψ WB→ B to the S-algebra B with structure map SBWBB. To see this, take a W-algebra B with structure map ψ WB→ B. So, φ^ŴB is the S-algebra B with structure map:S BW B WB BOn the other hand, Ŝφ^B is the S-algebra B with structure map:S B SB WB BThanks to Equation (<ref>) and to the naturality of φ, φ^ŴB is precisely Ŝφ^B.By putting together that morphisms of operads induce morphisms of tangent monads and that morphisms of tangent monads induce strict tangent morphisms of the corresponding tangent categories, we find that: The operation which takes an operad to its algebraic tangent category extends to a functor ^^→_= which sends a morphism of operads φ→ to the strict tangent morphism φ^()→(). As previously recalled, a morphism of operads φ→ induces a left adjoint φ_!_→_. Given a tangent morphism (G,β)(',')→(,) between two tangent categories whose underlying functor G'→ admits a left adjoint F→', it is natural to ask whether or not the functor F inherits from (G,β) a distributive law α which makes (F,α) into a new tangent morphism.It turns out that this works only if (G,β) is a colax tangent morphism. In that case, F becomes a lax tangent morphism. This interesting role played by colax tangent morphisms is better contextualized within the settings of double categories. Heuristically, a double category is a collection of objects together with two classes of morphisms, called horizontal and vertical morphisms, denoted by → and the second ones by , respectively, and a collection of double cells, that are squares:∙ ∙ ∙ ∙[""name=0, anchor=center, inner sep=0, from=1-1, to=1-2] [""name=1, anchor=center, inner sep=0, from=2-1, to=2-2] [""marking, from=1-2, to=2-2] [""marking, from=1-1, to=2-1] ["θ"description, draw=none, from=0, to=1]which can be composed horizontally and vertically. We invite the interested reader to consult <cit.> for more details on double categories. Notice that double categories can also be characterized as internal categories in the 2-category of categories. 
Tangent categories can be organized into a double categorywhose horizontal morphisms are lax tangent morphisms, vertical morphisms are colax tangent morphisms and double cells:(_1,_1) ('_1,'_1) (_2,_2) ('_2,'_2)["(G,β)"', ""marking, from=1-1, to=2-1] ["(G',β')", ""marking, from=1-2, to=2-2] ["(F_1,α_1)", from=1-1, to=1-2] ["(F_2,α_2)"', from=2-1, to=2-2] ["φ"description, draw=none, from=2-1, to=1-2]are tangent double cells, which are natural transformations φ F_2ø G⇒ G'ø F_1, fulfilling the commutativity of the following diagram:F_2ø_2ø G F_2ø Gø_1 G'ø F_1ø_1 '_2ø F_2ø G '_2ø G'ø F_1 G'ø'_1ø F_1["(α_2)_G"', from=1-1, to=2-1] ["F_2β", from=1-1, to=1-2] ["φ_", from=1-2, to=1-3] ["G'α_1", from=1-3, to=2-3] ["_2'φ"', from=2-1, to=2-2] ["β'_F_1"', from=2-2, to=2-3]The proof thatis a double category is straightforward but tedious, thus is left to the reader. Proposition <ref> shows that tangent categories can be organized into a double category. Conjunctions in this double category play a fundamental role in our story. Intuitively speaking, a conjunction in an arbitrary double category is the analog of an adjunction of 1-morphisms in a 2-category. Concretely, a conjunction consists of a vertical morphism G'→ together with a horizontal morphism F→' and two double cells η and ϵ '["G", ""marking, from=1-2, to=2-2] ["F", from=1-1, to=1-2] [""marking, Rightarrow, no head, from=1-1, to=2-1] [Rightarrow, no head, from=2-1, to=2-2] ["η"description, Rightarrow, from=2-1, to=1-2] ' ''[Rightarrow, no head, from=1-1, to=1-2] [""marking, Rightarrow, no head, from=1-2, to=2-2] ["G"', ""marking, from=1-1, to=2-1] ["F"', from=2-1, to=2-2] ["ϵ"description, Rightarrow, from=2-1, to=1-2]fulfilling the triangle identities. If the underlying functor G of a colax tangent morphism (G,β)(',')(,) is the right adjoint in a functorial adjunction (η,ϵ) F⊣ G, then the left adjoint F becomes a lax tangent morphism with the lax distributive law defined as the mate of β along the adjunction, that is:α FøFøø Gø FFø Gø'ø F'ø FIn particular, (η,ϵ)(F,α)⊣(G,β) forms a conjunction in the double category . Finally, also the opposite holds: any conjunction inis of the form (η,ϵ)(F,α)⊣(G,β) where α is defined as in Equation (<ref>) and (η,ϵ) F⊣ G is a functorial adjunction.Let's start by proving that (F,α) is a lax tangent morphism. The first step is to show that α is compatible with the projections, i.e. α p'̂_F=Fp, where p'̂ denotes the projection of the tangent structure ' and p the projection of . We will adopt a similar notation for the other natural transformations of the tangent structures. 
This amounts to showing the commutativity of the following diagram:Fø Føø Gø F Fø Gø'ø F 'ø F Fø Gø F Fø Gø F F F ["Fη"', from=3-1, to=2-2] [""name=0, anchor=center, inner sep=0, Rightarrow, no head, from=3-1, to=3-4] ["Fp"', from=1-1, to=3-1] ["p'̂_F", from=1-4, to=3-4] ["(Fø)η", from=1-1, to=1-2] [""name=1, anchor=center, inner sep=0, "Fβ_F", from=1-2, to=1-3] ["ϵ_'ø F", from=1-3, to=1-4] ["Fp_Gø F"', from=1-2, to=2-2] ["(Fø G)p'̂_F", from=1-3, to=2-3] ["ϵ_F"', from=2-3, to=3-4] [""name=2, anchor=center, inner sep=0, Rightarrow, no head, from=2-2, to=2-3] [""', draw=none, from=1-1, to=2-2] ["", draw=none, from=1-4, to=2-3] ["Δ"description, draw=none, from=2, to=0] ["(β;p,p'̂)"description, draw=none, from=1, to=2]To express the commutativity of the diagrams that compose the whole diagram we adopted the following convention: withwe denoted commutativity by naturality, by (β;p,p'̂) we denoted the compatibility between β and the projections, and Δ indicates the triangle identities between the unit and the counit of the adjunction. In the following, we adopt a similar notation.The second step is to prove the compatibility with the zero morphisms. This amounts to showing that Fzα=z'̂_F, i.e.:Fø Føø Gø F Fø Gø'ø F 'ø F Fø Gø F Fø Gø F F F ["Fη"', from=3-1, to=2-2] [""name=0, anchor=center, inner sep=0, Rightarrow, no head, from=3-1, to=3-4] ["Fz", from=3-1, to=1-1] ["z'̂_F"', from=3-4, to=1-4] ["(Fø)η", from=1-1, to=1-2] [""name=1, anchor=center, inner sep=0, "Fβ_F", from=1-2, to=1-3] ["ϵ_'ø F", from=1-3, to=1-4] ["Fz_Gø F", from=2-2, to=1-2] ["(Fø G)z'̂_F"', from=2-3, to=1-3] ["ϵ_F"', from=2-3, to=3-4] [""name=2, anchor=center, inner sep=0, Rightarrow, no head, from=2-2, to=2-3] [""', draw=none, from=1-1, to=2-2] ["", draw=none, from=1-4, to=2-3] ["Δ"description, draw=none, from=2, to=0] ["(β;z,z'̂)"description, draw=none, from=1, to=2]Let's show the compatibility with the sum morphism, which is (α)_2s'̂_F=Fsα:Fø_2 Fø_2ø G_2ø F_2 Fø G_2ø'_2ø F_2 '_2ø F_2Fø Føø Gø F Fø Gø'ø FF ["Fs"', from=1-1, to=2-1] ["s'̂_F", from=1-4, to=2-4] [""name=0, anchor=center, inner sep=0, "(Fø_2)η_2", from=1-1, to=1-2] [""name=1, anchor=center, inner sep=0, "F(β_2)_F_2", from=1-2, to=1-3] [""name=2, anchor=center, inner sep=0, "(ϵ_2)_'_2ø F_2", from=1-3, to=1-4] [""name=3, anchor=center, inner sep=0, "(Fø)η"', from=2-1, to=2-2] [""name=4, anchor=center, inner sep=0, "Fβ_F"', from=2-2, to=2-3] [""name=5, anchor=center, inner sep=0, "ϵ_'ø F"', from=2-3, to=2-4] ["Fs_Gø F"', from=1-2, to=2-2] ["(Fø G)s'̂_F", from=1-3, to=2-3] [""description, draw=none, from=0, to=3] [""description, draw=none, from=2, to=5] ["(β;s,s'̂)"description, draw=none, from=1, to=4]Let's now show the compatibility with the vertical lifts, i.e. 
α l'̂_F=Flα_'α:scale=.7,center Fø Føø Gø F Fø Gø'ø FF Fø^2ø Gø F Føø Gø'ø F Føø Gø'ø F Fø Gø'^2ø F Føø Gø Fø Gø'ø F Føø Gø Fø Gø'ø FFøø Gø Føø Gø F Fø Gø'ø Fø Gø'ø F Fø Gø'ø Føø Gø F Fø Gø'ø Føø Gø F Fø^2 Føø Gø Fø Fø Gø'ø Fø 'ø Føø Gø F 'ø Føø Gø F '^2ø F 'ø Fø 'ø Fø["(Fø)η", from=1-1, to=1-2] ["Fβ_F", from=1-2, to=1-5] ["ϵ_'ø F", from=1-5, to=1-6] [""name=0, anchor=center, inner sep=0, "Fl"', from=1-1, to=6-1] [""name=1, anchor=center, inner sep=0, "l'̂_F", from=1-6, to=6-6] ["(Fø)η_Gø'ø F"', from=2-3, to=3-3] [""name=2, anchor=center, inner sep=0, Rightarrow, no head, from=2-3, to=2-4] ["(Fø)η_"', from=6-1, to=6-2] [""name=3, anchor=center, inner sep=0, "Fβ_Fø"', from=6-2, to=6-3] ["('ø Fø)η"', from=7-4, to=6-4] [""name=4, anchor=center, inner sep=0, "('ø F)β_F"', from=6-4, to=6-5] ["'ϵ_'ø F"', from=6-5, to=6-6] ["ϵ_'ø Føø Gø F"pos=0.8, from=5-4, to=6-4] [""name=5, anchor=center, inner sep=0, "(Fø Gø'ø F)β_F"pos=0.3, from=5-4, to=4-5] ["(Føø Gø Fø)η"', from=6-2, to=4-2] ["ϵ_'ø Fø Gø'ø F"', from=4-5, to=6-5] [""name=6, anchor=center, inner sep=0, "(Føø Gø F)β_F"'pos=0.8, from=4-2, to=3-3] ["(Fø^2)η", from=6-1, to=2-2] ["(Fø)η_ø Gø F", from=2-2, to=4-2] ["(Fø Gø')ϵ_'ø F", from=4-5, to=2-5] ["ϵ_'^2ø F", from=2-5, to=6-6] ["(Fø)β_F", from=2-2, to=2-3] ["Fβ_'ø F", from=2-4, to=2-5] ["Fl_Gø F"', from=1-2, to=2-2] ["(Fø G)l'̂_F", from=1-5, to=2-5] [""description, draw=none, from=1-1, to=4-2] [""description, draw=none, from=1-6, to=4-5] [""description, draw=none, from=3-3, to=5-4] ["(Føø G)ϵ_'ø F"', from=3-4, to=2-4] [""name=7, anchor=center, inner sep=0, "Fβ_Fø Gø'ø F"'pos=0.3, from=3-4, to=4-5] [""name=8, anchor=center, inner sep=0, Rightarrow, no head, from=3-3, to=3-4] [""name=9, anchor=center, inner sep=0, "Fβ_Føø Gø F"pos=0.8, from=4-2, to=5-3] ["(Fø Gø'ø Fø)η"pos=0.2, from=6-3, to=5-3] [""name=10, anchor=center, inner sep=0, Rightarrow, no head, from=5-3, to=5-4] ["ϵ_'ø Fø"', from=6-3, to=7-3] [""name=11, anchor=center, inner sep=0, Rightarrow, no head, from=7-3, to=7-4] ["(β;l,l'̂)"description, draw=none, from=1-2, to=2-5] [""description, pos=0.7, draw=none, from=0, to=6-2] [""description, pos=0.3, draw=none, from=6-5, to=1] [""description, pos=0.3, draw=none, from=2-2, to=6] [""description, draw=none, from=5, to=4] [""description, draw=none, from=9, to=3] [""description, draw=none, from=10, to=11] ["Δ"description, draw=none, from=2, to=8] [""description, pos=0.3, draw=none, from=2-5, to=7]Finally, the compatibility with the canonical flips, i.e. 
α_'α c'̂_F=Fcα_'α:scale=.7,center'ø Fø 'ø Fø Fø^2 Føø Gø Fø Fø Gø'ø Fø 'ø Føø Gø F 'ø Føø Gø F '^2ø F Fø Gø'ø Føø Gø F Fø Gø'ø Føø Gø FFøø Gø Føø Gø F Fø G'ø Fø Gø'ø F Føø Gø Fø Gø'ø F Føø Gø Fø Gø'ø FFø^2ø Gø F Føø Gø'ø F Føø Gø'ø F Fø Gø'^2ø FFø^2ø Gø F Føø Gø'ø F Føø Gø'ø F Fø Gø'^2ø F Føø Gø Fø Gø'ø F Føø Gø Fø Gø'ø FFøø Gø Føø Gø F Fø Gø'ø Fø Gø'ø F Fø Gø'ø Føø Gø F Fø Gø'ø Føø Gø F Fø^2 Føø Gø Fø Fø Gø'ø Fø 'ø Føø Gø F 'ø Føø Gø F '^2ø F 'ø Fø 'ø Fø[""name=0, anchor=center, inner sep=0, "c'̂_F", from=2-6, to=11-6] ["(Fø)η_Gø'ø F"', from=7-3, to=8-3] [""name=1, anchor=center, inner sep=0, Rightarrow, no head, from=7-3, to=7-4] [""name=2, anchor=center, inner sep=0, "(Fø)η_"', from=11-1, to=11-2] [""name=3, anchor=center, inner sep=0, "Fβ_Fø"', from=11-2, to=11-3] ["('ø Fø)η"', from=12-4, to=11-4] [""name=4, anchor=center, inner sep=0, "'ø Fβ_F"', from=11-4, to=11-5] [""name=5, anchor=center, inner sep=0, "'ϵ_'ø F"', from=11-5, to=11-6] ["ϵ_'ø Føø Gø F", from=10-4, to=11-4] [""name=6, anchor=center, inner sep=0, "(Fø Gø'ø F)β_F"pos=0.3, from=10-4, to=9-5] ["(Føø Gø Fø)η"', from=11-2, to=9-2] ["ϵ_'ø Fø Gø'ø F"', from=9-5, to=11-5] [""name=7, anchor=center, inner sep=0, "(Foø Gø F)β_F"'pos=0.8, from=9-2, to=8-3] ["(Fø^2)η", from=11-1, to=7-2] ["(Fø)η_ø Gø F", from=7-2, to=9-2] ["(Fø Gø')ϵ_'ø F", from=9-5, to=7-5] ["ϵ_'^2ø F", from=7-5, to=11-6] ["(Fø)β_F", from=7-2, to=7-3] ["Fβ_'ø F", from=7-4, to=7-5] [""description, draw=none, from=8-3, to=10-4] ["(Føø G)ϵ_'ø F"', from=8-4, to=7-4] [""name=8, anchor=center, inner sep=0, "Fβ_Fø Gø'ø F"'pos=0.3, from=8-4, to=9-5] [""name=9, anchor=center, inner sep=0, Rightarrow, no head, from=8-3, to=8-4] [""name=10, anchor=center, inner sep=0, "Fβ_Føø Gø F"pos=0.8, from=9-2, to=10-3] ["(Fø Gø'ø Fø)η", from=11-3, to=10-3] [""name=11, anchor=center, inner sep=0, Rightarrow, no head, from=10-3, to=10-4] ["ϵ_'ø Fø"', from=11-3, to=12-3] [""name=12, anchor=center, inner sep=0, Rightarrow, no head, from=12-3, to=12-4] [""name=13, anchor=center, inner sep=0, "Fc"', from=2-1, to=11-1] ["(Fø)β_F"', from=6-2, to=6-3] [""name=14, anchor=center, inner sep=0, Rightarrow, no head, from=6-3, to=6-4] ["Fβ_'ø F"', from=6-4, to=6-5] ["Fc_Gø F", from=6-2, to=7-2] ["(Fø G)c'̂_F"', from=6-5, to=7-5] [""name=15, anchor=center, inner sep=0, Rightarrow, no head, from=5-3, to=5-4] ["(Fø)η_Gø'ø F", from=6-3, to=5-3] ["(Føø G)ϵ_'ø F", from=5-4, to=6-4] ["(Fø)η_ø Gø F"', from=6-2, to=4-2] ["(Fø Gø')ϵ_'ø F"', from=4-5, to=6-5] [""name=16, anchor=center, inner sep=0, "Fβ_Fø Gø'ø F"pos=0.3, from=5-4, to=4-5] [""name=17, anchor=center, inner sep=0, "(Føø Gø F)β_F"pos=0.8, from=4-2, to=5-3] [""name=18, anchor=center, inner sep=0, Rightarrow, no head, from=3-3, to=3-4] [""name=19, anchor=center, inner sep=0, "Fβ_Føø Gø F"'pos=0.8, from=4-2, to=3-3] [""name=20, anchor=center, inner sep=0, "Fβ_Fø Gø'ø F"', from=3-4, to=4-5] [""name=21, anchor=center, inner sep=0, "(Fø)η_", from=2-1, to=2-2] [""name=22, anchor=center, inner sep=0, "Fβ_Fø", from=2-2, to=2-3] [""name=23, anchor=center, inner sep=0, Rightarrow, no head, from=1-3, to=1-4] ["ϵ_'ø Fø", from=2-3, to=1-3] ["('ø Fø)η", from=1-4, to=2-4] [""name=24, anchor=center, inner sep=0, "('ø F)β_F", from=2-4, to=2-5] [""name=25, anchor=center, inner sep=0, "'ϵ_'ø F", from=2-5, to=2-6] ["ϵ_'ø Fø Gø'ø F", from=4-5, to=2-5] ["(Føø Gø Fø)η", from=2-2, to=4-2] ["(Fø Gø'ø Fø)η"', from=2-3, to=3-3] ["ϵ_'ø Føø Gø F"', from=3-4, to=2-4] ["ϵ_'^2ø F"', from=6-5, to=2-6] ["(Fø^2)η"', from=2-1, to=6-2] [""description, pos=0.3, draw=none, from=7-2, to=7] 
["Δ"description, draw=none, from=1, to=9] [""description, draw=none, from=6, to=4] [""description, draw=none, from=10, to=3] [""description, draw=none, from=11, to=12] [""description, pos=0.7, draw=none, from=13, to=11-2] ["Δ"description, draw=none, from=14, to=15] [""description, pos=0.3, draw=none, from=6-2, to=17] [""description, pos=0.3, draw=none, from=6-5, to=16] ["(β;c,c'̂)"description, draw=none, from=14, to=1] [""description, draw=none, from=20, to=24] [""description, draw=none, from=19, to=22] [""description, pos=0.3, draw=none, from=11-5, to=0] [""description, pos=0.3, draw=none, from=2-5, to=0] [""description, pos=0.3, draw=none, from=2-2, to=13] [""description, draw=none, from=23, to=18] [""description, Rightarrow, draw=none, from=18, to=15] [""description, draw=none, from=21, to=2] [""description, draw=none, from=25, to=5] [""description, pos=0.3, draw=none, from=7-5, to=8]So far, we proved that (F,α) is a lax tangent morphism. The next step is to prove that:(,) (',') (,) (,)[""marking, Rightarrow, no head, from=1-1, to=2-1] [Rightarrow, no head, from=2-1, to=2-2] ["(G,β)", ""marking, from=1-2, to=2-2] ["(F,α)", from=1-1, to=1-2] ["η"description, draw=none, from=2-1, to=1-2](',') (',') (,) (',')["(G,β)"', ""marking, from=1-1, to=2-1] ["(F,α)"', from=2-1, to=2-2] [Rightarrow, no head, from=1-1, to=1-2] [""marking, Rightarrow, no head, from=1-2, to=2-2] ["ϵ"description, draw=none, from=2-1, to=1-2]are tangent double cells. This amounts to showing the commutativity of the following diagrams:Gø Fø ø Gø F Gø Føø Gø F Gø'ø F Gø Fø Gø'ø F Gø'ø F Gø'ø F["η_", from=1-1, to=1-2] ["(Gø Fø)η", from=1-2, to=2-2] ["η"', from=1-1, to=2-1] ["η_ø Gø F"', from=2-1, to=2-2] [""description, draw=none, from=1-2, to=2-1] ["(Gø F)β_F", from=2-2, to=3-2] ["Gϵ_Qø F", from=3-2, to=4-2] [Rightarrow, no head, from=4-1, to=4-2] ["β_F"', from=2-1, to=3-1] [Rightarrow, no head, from=3-1, to=4-1] ["η_Gø'ø F", from=3-1, to=3-2] [""description, draw=none, from=2-2, to=3-1] ["Δ"description, draw=none, from=3-1, to=4-2]Føø G Føø G Føø Gø Fø G Føø G Fø Gø'ø Fø G Fø Gø' 'ø Fø G ' [Rightarrow, no head, from=1-1, to=1-2] [Rightarrow, no head, from=1-2, to=2-2] ["(Fø)η_G"', from=1-1, to=2-1] ["(Føø G)ϵ"', from=2-1, to=2-2] ["Fβ", from=2-2, to=3-2] ["Fβ_Fø G"', from=2-1, to=3-1] ["(Fø Gø')ϵ"', from=3-1, to=3-2] [""description, draw=none, from=2-2, to=3-1] ["Δ"description, draw=none, from=1-2, to=2-1] ["ϵ_'ø Fø G"', from=3-1, to=4-1] ["'ϵ"', from=4-1, to=4-2] ["ϵ_'", from=3-2, to=4-2] [""description, draw=none, from=3-2, to=4-1]The converse is a straightforward computation we leave for the reader to spell out.Thanks to Proposition <ref> we can extends () to a covariant pseudofunctor which sends each morphism of operads φ→ to a lax tangent morphism (φ_!,α_!)()→(). The operation which takes an operad to its algebraic tangent category extends to a pseudofunctor _!→ which sends each morphism of operads φ→ to the lax tangent morphism (φ_!,β_!)()→(), whose underlying functor is the left adjoint of φ^ and β_! is defined as follows:β_!φ_!øφ_!øøφ^øφ_!=φ_!øφ^øøφ_!øφ_!Notice that _! is only pseudofunctorial. This comes from the fact that the left adjoint of a functor is only unique up to a unique natural isomorphism. Such a natural isomorphism equips _! with an associator and a left and a right unitor. In Proposition <ref>, we used that φ^ is a colax tangent morphism, since φ^ is a strict tangent morphism. To unwrap the definition of β_! 
notice that, given a -algebra A, φ_!(A) is the -algebra generated by pairs (a,b) for a,b∈ A, satisfying some suitable relations defined by the coequalizer that defines φ_!. Similarly, also (φ_!A) is generated by pairs (a,b) for a,b∈ A. So, β_! sends each generator (a,b) to the corresponding generator (a,b). Consider the operadsand , respectively known as the associative and the commutative operads. The corresponding algebras are the associative R-algebras and the commutative R-algebras, respectively. Concretely,is generated by a 2-ary operation μ which satisfies the following relation:μ(1_,μ)=μ(μ,1_)Similarly,is generated by a 2-ary operation ν which satisfies the same associativity condition as μ and moreover is symmetric, i.e.:ντ̇=νwhere τ∈_2 is the permutation (1 2). Since ν satisfies the same relation as μ, there is a quotient morphism φ→ of operads, which sends μ to ν, and that induces an adjunction:φ_!_⇆_φ^ φ^ sends a commutative algebra B to the underlying associative algebra φ^B, while φ_! sends an associative algebra A to its abelianization A/[A,A], where [A,A] denotes the commutator, i.e. the ideal generated by symbols ab-ba, for any a,b∈ A.The functor ^ maps the morphism of operads φ to the strict tangent morphism over the pullback functor φ^, which makes () a tangent subcategory of ().The functor _! maps the morphism of operads φ to the lax tangent morphism whose underlying functor is the abelianization functor φ_!. To understand what is the corresponding distributive law φ_!ø→øφ_!, first notice that, for an associative algebra A, φ_!((A)) is the abelianization of A⋉ A. It is not hard to see that this is isomorphic to φ_!(A)⋉φ_!(A) which is precisely (φ_!(A)). On the other hand, the distributive law sends the generator (a,b)∈φ_!(A⋉ A) to (a,b)∈φ_!(A)⋉φ_!(A). Thus, the distributive law is precisely the isomorphism between the abelianization of A⋉ A and the semi-direct product of the abelianization of A with itself.Consider the operad , which generates Lie algebras. Concretely,is the operad generated by a binary operation μ satisfying the following relations:μτ̇=-μμ(μ,1_)+μ(μ,1_)σ̇+μ(μ,1_)σ̇^2=0Note that the first relation encodes the antisymmetry of Lie brackets, while the second one corresponds to the Jacobi identity, where τ∈_2 is the permutation (1 2) and σ∈_3 is the cyclical permutation (1 2 3). The interested reader can find detailed equivalent constructions ofin <cit.>. There is a canonical morphism of operads φ→ (see <cit.>). Consider the induced adjunction:φ_!_⇆_φ^The pullback functor φ^ sends an associative algebra A to the underlying Lie algebra with Lie brackets defined by the commutator [a,b]āb-ba. On the other hand, the left adjoint φ_! sends a Lie algebrato its universal enveloping algebra _.The functor ^ sends φ to the strict tangent morphism whose underlying functor is the pullback functor φ^. The functor _! sends φ to the lax tangent morphism whose underlying functor is the universal enveloping algebra functor φ_!. To understand the distributive law φ_!ø→øφ_!, we first take a closer look at φ_!(()) and (φ_!()), for a Lie algebra . The former is the universal enveloping algebra of the semi-direct product ⋉. Concretely, this is the associative algebra generated by pairs (g,h) for each g,h∈, satisfying the relation:(g,h)(g',h')-(g',h')(g,h)=([g,g'],[g,h']+[h,g'])The second one is the semi-direct product of the universal enveloping algebra with itself. 
Concretely, this is the associative algebra of pairs (g,h) for g,h∈_, satisfying the relations:(g,h)(g',h')=(gg',gh'+hg')gh-hg=[g,h]It is straightforward to see that the latter relations imply the former ones, thus there is a canonical morphism of Lie algebras φ_!(())→(φ_!()), which corresponds to the distributive law.Given a morphism of operads φ→, we could be tempted to think that (φ_!,β_!) is strong, or maybe even strict, since φ^ is a strict tangent morphism. A counterexample is given by Example <ref>: relations (<ref>) imply relations (<ref>), but not vice versa. To see that, notice that, given a Lie algebra , the associative multiplication of φ_!(()) is an operation of pairs, e.g. (g,h)(g',h') and, a priori, there is no well-defined multiplication on single elements of , while in (φ_!()) there is indeed a multiplication on the elements ofitself that comes from the universal enveloping algebra φ_!(). To understand the reasons why the distributive law β_! of φ_! is not an isomorphism notice that even if β_! is the mate of an isomorphism β and that mating preserves pasting diagrams (see <cit.>) this holds as long as the mates of the diagrams are well-defined. This is not the case for β^-1, which does not admit a mate along the adjunction φ^⊣φ_!. So far we proved that the operation which takes an operad to its algebraic tangent categories extends to a pair of functors ^ and _!. Now, we focus our attention on the geometric tangent category of an operad. We are going to employ the fact that the geometric tangent structure is the adjoint tangent structure of the algebraic one.We briefly recall that a tangent structureover a categoryis called adjunctable (in <cit.> the authors introduced the “dual tangent structure” while in <cit.> the authors use the expression “having an adjoint tangent structure”. Here we use “adjunctable tangent structure”) if for any positive integer n, the functor _n→, which sends each object A∈ to the n-fold pullback _nA along the projection over A, admits a left adjoint _n. Cockett and Cruttwell proved in <cit.> that ifis adjunctable, then the opposite category ^ ofadmits a tangent structure , called the adjoint tangent structure of , whose tangent bundle functor is the left adjointofand whose projection, zero morphism, sum morphism, vertical lift and canonical flip are mates of the corresponding natural transformations of .Thanks to <cit.>, ifhas enough finite colimits, e.g.is cocomplete, then a tangent structureoveris adjunctable if and only if the tangent bundle functoradmits a left adjoint . In the following we denote bythe 2-category of adjunctable tangent categories, lax tangent morphisms and tangent natural transformations. The operation which takes an adjunctable tangent category (,) to its associated adjoint tangent category (^,) extends to a pseudofunctor (-)^→, which equips the 2-categorywith a pseudoinvolution, that is an endofunctor together with a natural isomorphism (-)^ø(-)^⇒𝕀_. In particular, given two adjunctable tangent categories (,) and (',') with adjoint tangent categories (^,) and ('^,'),respectively, and a lax tangent morphism (F,α)(,)→(','), (F,α)^(^,)→('^,') is the lax tangent morphism whose underlying functor is F^ and whose lax distributive law α^, is the mate of α along the adjunctions (θ,τ)⊣ and (θ',τ')'⊣', that is:α^'ø F'ø Føø'ø'ø FøFøregarded as a morphism in '.By definition, the natural transformations (i.e. 
projection etcetera) of the adjoint tangent structureof a tangent structureare mates along the adjunction (θ,τ)⊣ between the tangent bundle functors of the corresponding natural transformations of . Thanks to <cit.>, the mate of a pasting diagram is the pasting diagram of the mates, as long as the mate of each morphism of the diagram is well-defined. Therefore, given a lax tangent morphism (F,α) the distributive law α^ is compatible with the tangent structures and thus (F^,α^) is a lax tangent morphism between the corresponding adjoint tangent categories. To prove that (-)^ is a pseudofunctor notice first that, given three adjunctable tangent categories (,),(',') and (”,”) with adjoint tangent categories (^,),('^,') and (”^,”), respectively, and two lax tangent morphisms (F,α)(,)→(',') and (G,β)(',')→(”,”), the composition of (F^,α^) with (G^,β^) is (G^ø F^,Gα^øβ^_F). This must be compared with the opposite of the composition (Gø F,β_Fø Gα). However, for the pasting diagram property of mates, these are the same lax tangent morphism. Similarly, we can argue that (𝕀_^,𝕀_^) corresponds precisely to (𝕀_^,𝕀_). Finally, notice that if (,) is adjunctable, then so is its adjoint tangent category (^,) and its adjoint is (isomorphic to) (,). This makes (-)^ a pseudoinvolution over .We point out that (-)^ defined by Proposition <ref> is only a pseudofunctor and not a strict functor because the choice of a left adjoint for the tangent bundle functoris only unique up to a unique isomorphism. This implies that associativity and unitality are only defined up to a unique isomorphism, which defines the associator and the left and the right unitors of (-)^. One could hope that a similar pseudoinvolution (-)^ could also occur in the 2-category _ of adjunctable tangent categories, colax tangent morphisms, and corresponding tangent natural transformations. However, this is not the case. The reason is that mates of the colax distributive laws along the adjunctions of the tangent bundle functors are simply not well-defined. This breaking of symmetry plays a crucial role in understanding the differences between non-commutative algebraic geometry and the geometry of commutative affine schemes. We will come back to this point later in Example <ref>. Before proving the functoriality of the operation which takes an operad to its geometric tangent category, we notice an interesting fact. Consider a strong tangent morphism (G,α)(',')→(,) between two adjunctable tangent categories. Suppose also that the functor G has a left adjoint F⊣ G and denote by βᾱ^-1ø G⇒ Gø' the inverse of α Gø'⇒ø G. Then the corresponding tangent morphism (F^,(β_!)^)(^,)→('^,') over the left adjoint F and between the adjoint tangent categories is also strong.By Proposition <ref>, the mate of β along the adjunction F⊣ G defines a lax tangent morphism (F,β_!)(,)→(','), where β_! Fø⇒'ø F.By Proposition <ref>, the mate of the distributive law α along the adjunctions between the tangent bundle functors and their left adjoint defines a lax tangent morphism (G^,α^)('^,')→(^,), so that, as an -morphism, α^ø G⇒ Gø'. Similarly, β_! defines, again by mating, a lax tangent morphism (F^,(β_!)^)(^,)→('^,') , so that, as a '-morphism, (β_!)^'ø F⇒ Fø. Interestingly, α^ admits a second mate along the adjunction (η,ϵ) F⊣ G:(α^)_! FøFøø Gø FFø Gø'ø F'ø Fregarded as a morphism in '. Thus, we also obtain a colax tangent morphism (F^,(α^)_!)(^,)('^,'). To prove that (α^)_! 
is the inverse of (β_!)^, consider the pasting diagram obtained from the units and counits of the adjunctions (η,ϵ) F⊣ G, (θ,τ)⊣ and (θ',τ')'⊣', together with the distributive laws α and β. [Large pasting diagram omitted.] This shows that the composite of (α^)_! followed by (β_!)^ coincides with the pasting of Fθ, Fη_, Fβ_F, Fα_F, Fτ_GF and ϵ_F. [Diagram omitted.] However, this pasting reduces to the identity, using β=α^-1 and the triangle identities of the adjunctions involved. [Diagram omitted.] We just proved that (β_!)^ø(α^)_!=𝕀_F. Similarly, one can prove the converse and conclude that (α^)_! is the inverse of (β_!)^, as expected.

Given a pair of conjoints (F,β_!)⊣(G,α) in the double category of tangent categories where (G,α) is a strong tangent morphism, Lemma <ref> establishes that the pseudofunctor (-)^ maps (F,β_!)⊣(G,α) to another pair of conjoints (G^,α^)⊣(F^,(β_!)^) and that (F^,(β_!)^) is also a strong tangent morphism. However, if (G,α) is strict this does not imply that (F^,(β_!)^) is strict as well. In the following diagram, we represent the proof of Lemma <ref>. [Square diagram omitted: it relates β, β_!, α, α^, (β_!)^ and (α^)_!, with horizontal edges given by mating along F⊣ G, vertical edges given by the pseudofunctor (-)^, and two edges marked "inverses".]

Starting from β, which is the inverse of the strong distributive law α, by moving to the right, i.e. by mating along the adjunction F⊣ G, we obtain a lax distributive law β_!, which, as noticed in Remark <ref>, in general, is not invertible. By moving down from β_!, i.e. by applying the pseudofunctor (-)^, we obtain a lax distributive law (β_!)^. Similarly, by starting from α and moving down, i.e. applying (-)^, we obtain a lax distributive law α^, which, as mentioned in Remark <ref>, in general, is not invertible. Finally, by moving from α^ to the right, i.e. by mating along the adjunction F⊣ G, we obtain a colax distributive law (α^)_!, which turns out to be the inverse of (β_!)^.

We can now prove the functoriality of the operation which takes an operad to its associated geometric tangent category. Similarly to the algebraic counterpart of this construction, this operation extends to two functors: one maps an operad morphism φ to a lax tangent morphism whose underlying functor is (φ^)^, and the other maps it to a strong tangent morphism whose underlying functor is φ_!^. The operation which takes an operad to its associated geometric tangent category () extends to a contravariant pseudofunctor ^^→ which sends a morphism of operads φ→ to the lax tangent morphism (φ^,α^)()→(), where α^ is defined as follows:α^øφ^øφ^øø=øøφ^øφ^øwhere (θ,τ)⊣ and (θ,τ)⊣. Moreover, the same operation also extends to a covariant pseudofunctor _!→_≅ which sends a morphism of operads φ→ to the strong tangent morphism (φ_!,α_!)()→(), where α_! is defined as follows:α_!øφ_!øφ_!øøøøφ_!øφ_!øwhere β_! is defined as in Proposition <ref>. Concretely, given a morphism φ→ and a -algebra B, φ^(B) is a -algebra generated by all b∈ B and by symbols ̣̂ b, for b∈ B, satisfying suitable relations. On the other hand, (φ^ B) is generated by all b∈ B and by symbols ̣̂ b, for b∈ B, satisfying suitable relations. Thus, the distributive law α^(φ^ B)→φ^(B) associated with φ^ sends each b to b and each ̣̂ b to ̣̂ b. Similarly, given a -algebra A, φ_!(A) is generated by all a∈ A and by ̣̂ a for a∈ A, satisfying suitable relations. On the other hand, (φ_!A) is generated by all a∈ A and by ̣̂ a, for a∈ A, satisfying suitable relations. Thus, the distributive law α_!φ_!(A)→(φ_!A) sends each a to a and each ̣̂ a to ̣̂ a.

In Example <ref> we showed how the canonical morphism of operads φ→ is mapped by the functors ^ and _!. The functor ^ maps φ to the lax tangent morphism defined over the pullback functor φ^. Interestingly, this lax tangent morphism is not strong, i.e. the distributive law øφ^→φ^ø (as a -algebra morphism) is not an isomorphism. To prove this, notice that the module of Kähler differentials of a commutative algebra A is obtained by quotienting the ideal I ≔ ker(ν: A⊗_R A→ A), where ν represents the multiplication of A, by I^2, i.e. A=I/I^2. If B is an associative algebra, the corresponding module of Kähler differentials of B is simply given by the ideal I ≔ ker(μ: B⊗_R B→ B), where μ represents the multiplication of B (see <cit.> for a detailed description of both the modules of Kähler differentials in the commutative and in the associative case). Thus, for a commutative algebra A, there is a natural quotient map π: I→ I/I^2 from the associative module of Kähler differentials of A to the commutative one. The distributive law is induced precisely by this quotient map since it maps the symbols ̣̂ a to ̣̂ a.
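As a concrete check (an added illustration: we take A=R[x] and assume only that R is a nonzero commutative ring), the kernel of this quotient map is already non-trivial in one variable:
\[
\omega \;=\; x\otimes 1 - 1\otimes x \;\in\; I ,
\qquad
\omega^{2} \;=\; x^{2}\otimes 1 \;-\; 2\,x\otimes x \;+\; 1\otimes x^{2} \;\in\; I^{2}.
\]
\[
\text{Under the identification } A\otimes_{R}A \cong R[x,y]\ (x\otimes 1\mapsto x,\ 1\otimes x\mapsto y):\quad
\omega^{2} \;=\; (x-y)^{2} \;\neq\; 0 ,
\qquad\text{while}\qquad
\pi(\omega^{2}) \;=\; 0 \ \text{in}\ I/I^{2}.
\]
Hence π kills the nonzero element ω^{2} and cannot be injective, which is exactly the failure of invertibility discussed next.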
If the distributive law was an isomorphism such a comparison map between the modules of Kähler differentials would be invertible, which is clearly not. We note that a similar argument was used by Ginzburg in <cit.> to distinguish between “noncommutative geometry in the small, and noncommutative geometry in the large”, meaning that the former “is a generalization of the conventional ‘commutative’ algebraic geometry to the noncommutative world”. The latter instead“is not a generalization of commutative theory. The world of noncommutative geometry ‘in the large’ does not contain commutative world as a special case, but is only similar, parallel, to it.” (<cit.>).Finally, the functor _! maps the morphism of operads φ to the strong tangent morphism whose underlying functor is the (opposite of the) abelianization functor φ_!. The corresponding distributive law øφ_!⇒φ_!ø (as a commutative algebra morphism) is the commutative algebra morphism:(A/[A,A])→A/[A,A]which sends the generators [a] and ̣̂[a] to [a] and [̣̂ a], respectively, where we used the square brackets to indicate the left coset given by the commutator and an element of the associative algebra A. It is not hard to see that the algebra morphism A→(A/[A,A]) which sends each a to [a] and ̣̂ a to ̣̂[a] is well-defined and provides an inverse for the distributive law.In Example <ref> we showed how the canonical morphism of operads φ→ is mapped by the functors ^ and _!. The functor ^ maps φ to the lax tangent morphism whose underlying functor is (the opposite of) φ^. In order to understand the distributive law øφ^⇒φ^ø (as an associative algebra morphism), let's first take a closer look at (φ^(A)) and φ^((A)) for an associative algebra A. The former one is the Lie algebra generated by a∈ A and by symbols ̣̂ a for each a∈ A, satisfying the following relations:[a,b]=ab-bậ([a,b])=[̣̂ a,b]+[a,̣̂ b]The second algebra is generated by a∈ A and by symbols ̣̂ a for each a∈ A, satisfying the following relations:[a,b]=ab-bậ(ab)=̣̂ aḃ+ậ b[a,̣̂ b]=ậ b-̣̂ bȧ[̣̂ a,̣̂ b]=̣̂ ậ b-̣̂ ḅ̂ aNote that the relations of the former one are implied by the relations of the latter. The canonical quotient map (φ^(A))→φ^((A)) corresponds the distributive law. Note that such a map is not an isomorphism.Finally, the functor _! maps φ to the lax tangent morphism whose underlying functor is the (opposite of the) universal enveloping algebra functor φ_!. To understand the distributive law øφ_!⇒φ_!ø (as an associative algebra morphism), we first take a closer look at (φ_!()) and φ_!(()) for a Lie algebra . The former is the associative algebra generated by all g∈ and by symbols ̣̂ g for each g∈ and satisfying the relations:gh-hg=[g,h]̣̂(gh)=̣̂ gḣ+ĝ̣ hThe latter is the associative algebra generated by g∈ and by symbols ̣̂ g for each g∈, satisfying the relations:gh-hg=[g,h]̣̂ gḣ-ḥ̂ g=[̣̂ g,h]ĝ̣ h-̣̂ hġ=[g,̣̂ h]̣̂ ġ̣h-̣̂ ḥ̇g=[̣̂ g,̣̂ h]̣̂[g,h]=[̣̂ g,h]+[h,̣̂ g]Because the first set of relations implies the latter, this allows us to define a morphism of associative algebras φ_!(())→(φ_!()), which corresponds to the (inverse of the) distributive law. Thanks to Lemma <ref>, this morphism is an isomorphism.§ THE SLICE TANGENT CATEGORY AS A RIGHT ADJOINT FUNCTORRosický proved that, under mild assumptions, the slice of a tangent category (,) over an object A∈ is still a tangent category (cf. <cit.>). Cockett and Cruttwell further investigated this construction and related this to the notion of tangent fibrations (cf. 
<cit.>).In this section, we prove an important result that shows the deep relationship between operads and tangent categories. In a nutshell, we show that the slice tangent category of the geometric tangent category () of an operadover a -affine scheme A∈() is still the geometric tangent category (Â) of an operad Â. In particular,  is the enveloping operad of the -algebra A.To prove this result we are going to show that the functor which associates to each pair ((,),A) formed by a tangent category (sliceable over A) and an object A∈ the corresponding slice tangent category (,)/A fulfills the same universality condition of the functor that associates to each pair (,A) formed by an operadand a -algebra A the corresponding enveloping operad Â. This is not just an important connection between the world of operads and the one of tangent categories, but it also provides a new characterization for the construction of the slice tangent category in terms of a right adjoint functor and therefore it also constitutes a new result in tangent category theory.The section is organized as follows: first, we recall the original definition of the slice tangent category of a tangent category over an object. Then, we introduce the new characterization of this construction in terms of a right adjoint functor. Let's start with the main definitions. A tangent category (,) is sliceable over an object A∈ if for any E∈ and f E→ A in , the -pullback of f along the zero morphism is well-defined, that is the following diagram:ÂEEA A[" f", from=1-2, to=2-2] ["z"', from=2-1, to=2-2] ["f^"', dashed, from=1-1, to=2-1] ["v_f", dashed, from=1-1, to=1-2] ["⌟"anchor=center, pos=0.125, draw=none, from=1-1, to=2-2]is a well-defined pullback diagram and for every positive integer m the functor ^møø…ø preserves its universality. We also say that a tangent category is sliceable if it is sliceable over all of its objects. Given a sliceable tangent category (,) over an object A we can define a tangent bundle functor:Â/A→/Awhich maps each morphism f E→ A into the unique morphism f^ÂE→ A. We adopt the following notation: we will write Âf for the tangent bundle over f∈/A, regarded as an object in the slice category over A. Abusing notation, we also denote by ÂE the domain of ÂfÂE→ A, regarded as a morphism of . Notice that  is functorial in the slice category but not in .This characterization of the slice tangent bundle functor, also known as the vertical tangent bundle functor, is due to Cockett and Cruttwell in their article on differential bundles and tangent fibrations. The equivalent original characterization of  is due to Rosický. For our purposes the Rosický version is more useful, therefore we recall briefly here this construction. First, notice that a tangent category (,) is sliceable over A∈ if and only if for any morphism f E→ A, the equalizer:ÂEEA["v_f", dashed, from=1-1, to=1-2] [" f", shift left=2, from=1-2, to=1-3] [" fpz"', shift right=2, from=1-2, to=1-3]is well-defined and is a -equalizer, which means that for every positive integer m the functor ^m preserves its universality. In the following, we denote by v_f the equalizer map v_fÂE→ E. We can then give the following characterization:tangent bundle functor The tangent bundle functor Â/A→/A is defined as follows:Â(f E→ A)ÂE EEAfor any f∈/A. Moreover, given a morphism g(f E→ A)→(f' E'→ A), i.e. 
g E→ E' such that gf'=f, we can define:ÂEEA ÂE'E'A["v_f", from=1-1, to=1-2] [" f", shift left=2, from=1-2, to=1-3] [" fpz"', shift right=2, from=1-2, to=1-3] ["v_f'"', from=2-1, to=2-2] [" f'", shift left=2, from=2-2, to=2-3] [" f'pz"', shift right=2, from=2-2, to=2-3] [Rightarrow, no head, from=1-3, to=2-3] [" g"description, from=1-2, to=2-2] [" g"', dashed, from=1-1, to=2-1]In particular,  is functorial.projection The projection pÂÂf→ f, is defined as:ÂEE E ["v_f", from=1-1, to=1-2] ["p", from=1-2, to=1-3]zero morphism The zero morphism z f→Âf is defined as the unique morphism that makes commuting the following diagram:ÂEE A B ["v_f", from=1-1, to=1-2] [" f", shift left=2, from=1-2, to=1-3] [" fpz"', shift right=2, from=1-2, to=1-3] ["z"', from=2-1, to=1-2] ["zÂ", dashed, from=2-1, to=1-1]where we employed the universality of v_f;sum morphism The sum morphism sÂÂ_2f→Âf is defined as:ÂEE AÂ_2E _2E["v_f", from=1-2, to=1-3] [" f", shift left=2, from=1-3, to=1-4] [" fpz"', shift right=2, from=1-3, to=1-4] ["s"', from=2-2, to=1-3] [dashed, from=2-2, to=1-2] ["v_f× v_f"', from=2-1, to=2-2] ["sÂ", dashed, from=2-1, to=1-2]where we employed the universality of v_f;vertical lift The lift lÂÂf→(Â)^2f is defined as:(Â)^2E ÂE ^2EAÂEE [" v_f", from=1-2, to=1-3] ["^2f", shift left=2, from=1-3, to=1-4] ["^2f p z"', shift right=2, from=1-3, to=1-4] ["l"', from=2-2, to=1-3] [dashed, from=2-2, to=1-2] ["v_Âf", from=1-1, to=1-2] ["v_f"', from=2-1, to=2-2] ["lÂ", dashed, from=2-1, to=1-1]where we employed the universality of v_f and v_Âf;canonical flip The canonical flip cÂ(Â)^2f→(Â)^2f is defined as:(Â)^2E ÂE ^2EA(Â)^2E ÂE ^2E[" v_f", from=1-3, to=1-4] ["^2f", shift left=2, from=1-4, to=1-5] ["^2f p z"', shift right=2, from=1-4, to=1-5] ["c"', from=2-3, to=1-4] [dashed, from=2-3, to=1-3] ["v_Âf", from=1-2, to=1-3] [" v_f"', from=2-2, to=2-3] [dashed, from=2-2, to=1-2] ["v_Âf"', from=2-1, to=2-2] ["cÂ", dashed, from=2-1, to=1-2]where we employed the universality of v_f and v_Âf.If (,) has negatives with negation n, then we can also lift the negation morphism to the slice tangent category as follows:negation The negation morphism nÂÂf→Âf is defined by:ÂEEA ÂEEA["v_f", from=1-1, to=1-2] [" f", shift left=2, from=1-2, to=1-3] [" fpz"', shift right=2, from=1-2, to=1-3] ["v_f", from=2-1, to=2-2] [" f", shift left=2, from=2-2, to=2-3] [" fpz"', shift right=2, from=2-2, to=2-3] [Rightarrow, no head, from=1-3, to=2-3] ["n"description, from=2-2, to=1-2] ["nÂ", dashed, from=2-1, to=1-1]where we employed the universality of v_f.We refer to this tangent category as the slice tangent category of (,) over A and we denote it by (,)/A. Given a sliceable tangent category (,) it is not hard to see that the operation A↦(,)/A extends to a pseudofunctor ^→_≅. Cockett and Cruttwell in <cit.> proved that the fibres of a tangent fibration (cf. <cit.>) are tangent categories and that the substitution functors are strong tangent morphisms. This result extends to a correspondence between tangent fibrations and pseudofunctors like ^→_≅. Interestingly, <cit.> shows thatis the pseudofunctor associated to a suitable tangent fibration.§.§ The universal property of slicingThe goal of this subsection is to prove that the operation which takes a pair (,; A) formed by a tangent category (sliceable over A) and an object A∈ to its associated slice tangent category extends to a right adjoint of the functor that sends each tangent category (,) with terminal objectto the pair ((,),). Let's start by introducing some useful jargon. 
A tangent pair is a pair formed by a tangent category (,) sliceable over an object A and the object A itself. We denote a tangent pair by (,;A). Moreover, given two tangent pairs (,;A) and (',';B), a morphism of tangent pairs(F,α;φ)(,;A)→(',';B) is a lax tangent morphism (F,α)(,)→(',') together with a morphism φ FA→ B of '.Tangent pairs together with their morphisms form a category denoted by . In particular, notice that the composition of two morphisms of tangent pairs (F,α;φ)(,;A)→(',';B) and (G,β;ψ)(',';B)→(”,”;C) is the triple formed by Gø F→”, the associated lax distributive law Gø FøGø'ø F”ø Gø F, and the morphism G(F(A))GBC of ”. Consider the pseudofunctor → which sends each tangent category (,) to the category of objects A ofsuch that (,) is sliceable over A. Via the Grothendieck construction, this produces a cofibration ∫^→. The category of elements of this cofibration coincides with the categoryof tangent pairs.Consider two tangent pairs (,;A) and (',';B) and a morphism of tangent pairs (F,α;φ)(,;A)→(',';B). Let f E→ A a morphism in . Finally, consider the morphism θ_f FÂE→'B̂FE, as the unique morphism which makes commuting the following diagram:FÂEF E'B̂FE' FE' FA B ' BFA F A["Fv_f", from=1-1, to=1-5] ["Ff^"'pos=0.6, from=1-1, to=5-1] ["(Ffφ)^", from=2-2, to=4-2] [""name=0, anchor=center, inner sep=0, "' Ff"', from=2-4, to=3-4] ["v_Ff"', from=2-2, to=2-4] ["'φ"', from=3-4, to=4-4] ["θ_f", dashed, from=1-1, to=2-2] ["α"description, from=1-5, to=2-4] ["z", from=4-2, to=4-4] ["φ"description, from=5-1, to=4-2] ["⌟"anchor=center, pos=0.125, draw=none, from=2-2, to=3-4] ["Fz"', from=5-1, to=5-5] [""name=1, anchor=center, inner sep=0, "F f"pos=0.6, from=1-5, to=5-5] ["α"description, from=5-5, to=3-4] [""description, draw=none, from=0, to=1]Therefore, the functor:F/A→'/BF(f E→ A)↦(FEFAB)F(g(f E→ A)→(f' E'→ A))↦(Fg(Ffφ)→(Ff'φ))extends to a lax tangent morphism:(F,α)/φ(,)/A→(',')/Bwhose distributive law is defined by the natural transformation θ_f FÂf→'B̂Ff.For starters, let's prove the compatibility between θ and the projections:FÂf 'B̂FfFf ["FpÂ"', from=1-1, to=2-1] ["pB̂_F", from=1-2, to=2-1] ["θ", from=1-1, to=1-2]which corresponds to the diagram:FÂE 'B̂FE F E ' FEFE FE [""name=0, anchor=center, inner sep=0, "θ", from=1-1, to=1-2] ["v_F", from=1-2, to=2-2] ["p_F", from=2-2, to=3-2] ["Fv"', from=1-1, to=2-1] ["Fp"', from=2-1, to=3-1] [""name=1, anchor=center, inner sep=0, Rightarrow, no head, from=3-1, to=3-2] [""name=2, anchor=center, inner sep=0, "α"description, from=2-1, to=2-2] ["(θ,α;v)"description, draw=none, from=0, to=2] ["(α;p)"description, draw=none, from=1, to=2]Let's take into consideration the compatibility diagram between θ and the zero morphisms:FÂf 'B̂FfFf ["θ", from=1-1, to=1-2] ["FzÂ", from=2-1, to=1-1] ["zB̂_F"', from=2-1, to=1-2]To show that, first, consider the diagram:FÂE 'B̂FEF E ' FEFE FE [""name=0, anchor=center, inner sep=0, "θ", from=1-1, to=1-4] ["v_F"', from=1-4, to=2-3] [""name=1, anchor=center, inner sep=0, "FzÂ", from=3-1, to=1-1] ["Fv", from=1-1, to=2-2] [""name=2, anchor=center, inner sep=0, "α"', from=2-2, to=2-3] ["Fz"', from=3-1, to=2-2] [""name=3, anchor=center, inner sep=0, Rightarrow, no head, from=3-1, to=3-4] ["z_F", from=3-4, to=2-3] [""name=4, anchor=center, inner sep=0, "zB̂_F"', from=3-4, to=1-4] ["(α,θ;v)"description, draw=none, from=2, to=0] ["(z;v)"description, draw=none, from=2-3, to=4] ["(z;v)"description, draw=none, from=2-2, to=1] ["(α;z)"description, draw=none, from=2, to=3]Thus FzÂθ v_F=zB̂_Fv_F and from the universality of v_F we 
conclude that FzÂθ=zB̂_F, as expected. The next step is to prove the compatibility with the sum morphism:FÂ_2f 'B̂_2Ff FÂf 'B̂Ff["θ×θ", from=1-1, to=1-2] ["FsÂ"', from=1-1, to=2-1] ["sÂ_F", from=1-2, to=2-2] ["θ"', from=2-1, to=2-2]Thus, consider the following diagram:FÂ_2E 'B̂_2FEF_2E '_2FEF E ' FE FÂE 'B̂FE[""name=0, anchor=center, inner sep=0, "θ"', from=4-1, to=4-4] ["v_F", from=4-4, to=3-3] [""name=1, anchor=center, inner sep=0, "FsÂ"', from=1-1, to=4-1] ["Fv"', from=4-1, to=3-2] [""name=2, anchor=center, inner sep=0, "α", from=3-2, to=3-3] [""name=3, anchor=center, inner sep=0, "sB̂_F", from=1-4, to=4-4] ["v_F× v_F"', from=1-4, to=2-3] [""name=4, anchor=center, inner sep=0, "s_F", from=2-3, to=3-3] ["Fv× Fv", from=1-1, to=2-2] [""name=5, anchor=center, inner sep=0, "Fs"', from=2-2, to=3-2] [""name=6, anchor=center, inner sep=0, "α×α", from=2-2, to=2-3] [""name=7, anchor=center, inner sep=0, "θ×θ", from=1-1, to=1-4] ["(s;v)"description, draw=none, from=4, to=3] ["(α,θ;v)"description, draw=none, from=7, to=6] ["(α,θ;v)"description, draw=none, from=2, to=0] ["(α;s)"description, draw=none, from=6, to=2] ["(s;v)"description, draw=none, from=1, to=5]Thus, FsÂθ v_F=(θ×θ)sB̂_Fv_F and from the universality of v_F we conclude that FsÂθ=(θ×θ)sB̂, as expected. Let's prove the compatibility with the lift:FÂf'B̂Ff F(Â)^2f 'B̂FÂf ('B̂)^2Ff["θ", from=1-1, to=1-3] ["FlÂ"', from=1-1, to=2-1] ["lB̂_F", from=1-3, to=2-3] ["θ_Â"', from=2-1, to=2-2] ["'B̂θ"', from=2-2, to=2-3]As before, consider the following diagram:FÂE'B̂FEF E' FEF^2E ' F E '^2FEF E 'B̂F E 'B̂' FE F(Â)^2E'B̂FÂE('B̂)^2FE[""name=0, anchor=center, inner sep=0, "θ", from=1-1, to=1-5] [""name=1, anchor=center, inner sep=0, "lB̂_F", from=1-5, to=5-5] ["v_' F"', from=4-4, to=3-4] [""name=2, anchor=center, inner sep=0, "FlÂ"', from=1-1, to=5-1] ["v_F"', from=1-5, to=2-4] ["l_F", from=2-4, to=3-4] ["Fv"', from=1-1, to=2-2] [""name=3, anchor=center, inner sep=0, "α", from=2-2, to=2-4] ["Fl"', from=2-2, to=3-2] ["FÂv"pos=0.3, from=5-1, to=4-2] ["Fv_", from=4-2, to=3-2] ["α_", from=3-2, to=3-3] ["'α", from=3-3, to=3-4] [""name=4, anchor=center, inner sep=0, "θ_Â"', from=5-1, to=5-3] [""name=5, anchor=center, inner sep=0, "'B̂θ"', from=5-3, to=5-5] ["'B̂v"'pos=0.3, from=5-5, to=4-4] ["'B̂Fv"description, from=5-3, to=4-3] ["'B̂α", from=4-3, to=4-4] ["θ_", from=4-2, to=4-3] ["v_F"description, from=4-3, to=3-3] [""description, draw=none, from=3-3, to=4-4] ["(α,θ;v)"description, draw=none, from=3-3, to=4-2] ["(α,θ;v)"description, draw=none, from=0, to=3] ["(α;l)"description, draw=none, from=3, to=3-3] ["(α,θ;v)"description, draw=none, from=5, to=4-4] [""description, draw=none, from=4-2, to=4] ["(l;v)"description, draw=none, from=3-4, to=1] ["(l;v)"description, draw=none, from=3-2, to=2]Therefore, θ lB̂_F'B̂vv_' F=FlÂθ_Â'B̂θ'B̂vv_' F. By the universality of 'B̂vv_' F we conclude that θ lB̂_F=FlÂθ_Â'B̂θ, as expected. 
Finally, let's prove the compatibility with the canonical flip:F(Â)^2f 'B̂FÂf ('B̂)^2Ff F(Â)^2f 'B̂FÂf ('B̂)^2Ff["FcÂ"', from=1-1, to=2-1] ["cB̂_F", from=1-3, to=2-3] ["θ_Â"', from=2-1, to=2-2] ["'B̂θ"', from=2-2, to=2-3] ["θ_Â", from=1-1, to=1-2] ["'B̂θ", from=1-2, to=1-3]Thus:F(Â)^2E'B̂FÂE('B̂)^2FEF E 'B̂F E 'B̂' FEF E ' F E '^2FEF^2E ' F E '^2FEF E 'B̂F E 'B̂' FE F(Â)^2E'B̂FÂE('B̂)^2FE["v_' F"', from=5-4, to=4-4] [""name=0, anchor=center, inner sep=0, "FcÂ"', from=1-1, to=6-1] ["c_F", from=3-4, to=4-4] ["Fc"', from=3-2, to=4-2] ["FÂv"pos=0.3, from=6-1, to=5-2] ["Fv_", from=5-2, to=4-2] ["α_", from=4-2, to=4-3] ["'α", from=4-3, to=4-4] [""name=1, anchor=center, inner sep=0, "θ_Â"', from=6-1, to=6-3] [""name=2, anchor=center, inner sep=0, "'B̂θ"', from=6-3, to=6-5] ["'B̂v"'pos=0.3, from=6-5, to=5-4] ["'B̂Fv"description, from=6-3, to=5-3] ["'B̂α", from=5-3, to=5-4] ["θ_", from=5-2, to=5-3] ["v_F"description, from=5-3, to=4-3] [""description, draw=none, from=4-3, to=5-4] ["(α,θ;v)"description, draw=none, from=4-3, to=5-2] [""name=3, anchor=center, inner sep=0, "cB̂_F", from=1-5, to=6-5] ["'B̂v"', from=1-5, to=2-4] ["v_' F", from=2-4, to=3-4] ["FÂv"', from=1-1, to=2-2] ["Fv_"', from=2-2, to=3-2] ["v_F"description, from=2-3, to=3-3] ["α_"', from=3-2, to=3-3] ["'α"', from=3-3, to=3-4] ["θ_"', from=2-2, to=2-3] ["'B̂α"', from=2-3, to=2-4] ["'B̂Fv"description, from=1-3, to=2-3] [""name=4, anchor=center, inner sep=0, "θ_Â", from=1-1, to=1-3] [""name=5, anchor=center, inner sep=0, "'B̂θ", from=1-3, to=1-5] ["(α,θ;v)"description, draw=none, from=3-3, to=2-2] [""description, draw=none, from=3-3, to=2-4] ["(α;c)"description, draw=none, from=3-3, to=4-3] ["(α,θ;v)"description, draw=none, from=2, to=5-4] [""description, draw=none, from=5-2, to=1] ["(c;v)"description, draw=none, from=4-2, to=0] ["(c;v)"description, draw=none, from=4-4, to=3] [""description, draw=none, from=2-2, to=4] ["(α,θ;v)"description, draw=none, from=2-4, to=5]This proves that θ_Â'B̂θ cB̂_F'B̂vv_' F=FcÂθ_Â'B̂θ'B̂vv_' F. Finally, using the universality of 'B̂vv_' F we conclude that θ_Â'B̂θ cB̂_F=FcÂθ_Â'B̂θ, as expected. Proposition <ref> allows us to lift morphisms of tangent pairs to the corresponding slice tangent categories. The next step is to find sufficient conditions so that the corresponding tangent morphism over the slice categories is strong. This will play a key role in the next section. Let's introduce a definition. Given two tangent pairs (,;A) and (',';B), a morphism of tangent pairs (F,α;φ)(,;A)→(',';B) is Cartesian if the following diagrams:F E ' FE F A ' FA["F f"', from=1-1, to=2-1] ["' Ff", from=1-2, to=2-2] ["α"', from=2-1, to=2-2] ["α", from=1-1, to=1-2]FA' FAB' B["φ"', from=1-1, to=2-1] ["'φ", from=1-2, to=2-2] ["z"', from=2-1, to=2-2] ["z_F", from=1-1, to=1-2]are pullback diagrams, and moreover the functor F preserves the pullbacks of Equation (<ref>). Concretely, this last condition means that for every morphism f E→ A of , the diagram:FÂE F EFAF A["F f", from=1-2, to=2-2] ["Fz"', from=2-1, to=2-2] ["Fv_f", from=1-1, to=1-2] ["Ff^"', from=1-1, to=2-1]must be a pullback diagram.A Cartesian morphism of tangent pairs (F,α;φ)(,;A)→(',';B) lifts to the slice tangent categories as a strong tangent morphism. 
Concretely, this means that the natural transformation θ_f FÂf→'B̂Ff is invertible.Consider the following diagram:FÂE F E ' FEF AFA ' FAB ' B["Fv", from=1-1, to=1-2] ["α", from=1-2, to=1-3] ["FÂf"', from=1-1, to=3-1] ["Fz", from=3-1, to=2-2] ["F f"', from=1-2, to=2-2] ["α", from=2-2, to=3-3] [""name=0, anchor=center, inner sep=0, "' Ff", from=1-3, to=3-3] ["z_F"', from=3-1, to=3-3] ["φ"', from=3-1, to=4-1] [""name=1, anchor=center, inner sep=0, "z"', from=4-1, to=4-3] ["'φ", from=3-3, to=4-3] ["⌟"anchor=center, pos=0.125, draw=none, from=1-1, to=2-2] ["⌟"anchor=center, pos=0.125, draw=none, from=1-2, to=0] ["⌟"anchor=center, pos=0.125, draw=none, from=3-1, to=1]where we used that Fzα=z_F. Thanks to the Cartesianity of (F,α;φ) this is a pullback diagram, since it is formed by pullback diagrams. However, by definition, θ is defined by the diagram:scale=.7,center'B̂FE' FEB F E ' FAFÂE' B FAB ["Fv", from=4-2, to=3-3] ["α", from=3-3, to=2-4] ["FÂf", from=4-2, to=5-2] ["' Ff", from=2-4, to=3-4] ["φ", from=5-2, to=6-2] ["z"', from=6-2, to=4-4] ["'φ", from=3-4, to=4-4] ["v_F", from=1-1, to=2-4] ["'B̂(Ffφ)"', from=1-1, to=3-1] ["z"'pos=0.7, from=3-1, to=4-4] ["θ"', dashed, from=4-2, to=1-1] [Rightarrow, no head, from=6-2, to=3-1]However, the top and the right rectangular sides of this triangular diagram are pullbacks, so θ must be an isomorphism. The next is a key concept for our discussion. A tangent category with terminal object is a tangent category (,) equipped with a terminal objectso that the unique morphism → is an isomorphism. We also denote bythe 2-category of tangent categories with terminal objects, lax tangent morphisms and natural transformations compatible with the lax distributive laws. In the following, we denote bythe (unique up to unique isomorphism) terminal object of a category. Moreover, for any object A, the unique morphism from A tois denoted by ! A→. It is straightforward to see that a tangent category (,) with terminal object is sliceable overand that the slice tangent category (,)/∗ is isomorphic to (,) via (! A→)↦ A. This observation allows us to define the following pseudofunctor:→(,)(̄,;∗)((F,α)(,)→(','))(̄F,α;F)(,;∗)→(',';∗)Thanks to Proposition <ref>, the operation which takes a tangent pair (,;A) to its slice tangent category extends to a functor. Observe that the slice tangent category of a tangent pair (,;A) is equipped with a terminal object, the terminal object being the identity over A. With this in mind, we are able to define the following pseudofunctor:→(,;A)↦(,)/A((F,α;φ)(,;A)→(',';B))(̄F,α)/φ(,)/A→(',')/Bandare not strict functors but rather pseudofunctors. This comes from the fact that terminal objects and slice tangent structures are defined only up to unique isomorphisms. Thus, the associators and unitors are defined by these unique isomorphisms. We can finally characterize the operation which takes a tangent pair to its slice tangent category as an adjunction between pseudofunctors. The pseudofunctors ⇆ form an adjunction whose left adjoint is , the right adjoint is , the unit (U,η)(,)→((,))=(,)/∗, as a lax tangent morphism between tangent categories with terminal objects, is the isomorphism:U→/∗U(A)↦(! A→)U(f A→ B)↦(f(! A→)→(! B→))η(U( A))=(! A→)(! A→)=(U(A))and the counit (C,ϵ;φ)((,;A))=((,)/A,𝕀_A)→(,;A) is the morphism of tangent pairs:C(,)/A↦(,)C(f E→ A)↦ EC(g(f E→ A)→(f' E'→ A))↦(g E→ E')ϵ C(Â(f E→ A))=ÂE E=(C(f E→ A))φ C(𝕀_A A→ A)=AATo prove the result we need to show that the unit (U,η) and the counit (C,ϵ;φ) fulfill the triangle identities. 
Let's start by considering the following diagram:(,) (((,)))(,)["(U,η)", from=1-1, to=1-2] ["(C,ϵ;φ)_", from=1-2, to=2-2] [Rightarrow, no head, from=1-1, to=2-2]for a tangent category (,) with terminal object. However, it is straightforward to realize that the underlying tangent morphisms (C,ϵ) and (U,η) of (C,ϵ;φ)_ and (U,η) define the equivalence between (,) and (,)/ and that, by the universality of the terminal object, that the composition of the comparison morphisms φ=𝕀_ and ! U→ is the identity over the terminal object. Similarly, by considering the diagram:(,;A) (((,;A)))(,;A)["(U,η)_", from=1-1, to=1-2] ["(C,ϵ;φ)", from=1-2, to=2-2] [Rightarrow, no head, from=1-1, to=2-2]for a tangent pair (,;A), it is straightforward to show the underlying tangent morphisms of (C,ϵ;φ) and (U,η)_ define the equivalence between (,)/A and ((,)/A)/𝕀_A and that the composition of the comparison morphisms gives the identity. Finally, notice that the unit is always an isomorphism.§ THE SLICE TANGENT CATEGORIES OF THE AFFINE SCHEMES OVER AN OPERADThe previous section was dedicated to characterizing the slicing of tangent categories via the adjunction between two pseudofunctors. A similar phenomenon happens in the realm of operads: given an operadand a -algebra A the enveloping operad  ofover A is the operad whose category of algebras is equivalent to the coslice category of _ under A.The goal of this section is to prove that these two phenomena are two faces of the same coin: the geometric tangent category of the enveloping operad ofover A is equivalent to the slice tangent category of the geometric tangent category ofover A.Let's start by recalling the definition of the enveloping operad of a pair (;A). We advise the interested reader to consult <cit.>, <cit.>, or <cit.>. For this purpose, recall that since the category of algebras of an operadis cocomplete, each operad has an initial algebra, which corresponds to the R-module (0) together with structure map (m)(0)^ m→(0) defined by the operadic composition. This allows us to introduce an operation ↦(;(0)) between operads and operadic pairs. Notice that for a operadic pair we mean a pair (;A) formed by an operadand a -algebra A. Moreover, given two operadic pairs (;A) and (;B) a morphism of operadic pairs(f;φ)(;A)→(;B) is a morphism of operads f→ together with a morphism of -algebras φ A→ f^B, f^_→_ being the pullback functor induced by f. Operadic pairs together with their morphisms form a category that we denote by . So, we have:→()(̄;(0))(f↦)(̄f;!(0)→ f^(0)) ! being the unique morphism of -algebras induced by the universality of the initial algebra (0). Concretely, ! sends an element u∈(0) to f_0(u). Similarly as for tangent pairs (see Remark <ref>), also operadic pairs can be regarded as objects in the category of elements of a fibration. Consider the pseudofunctor ^→ which sends each operad to the corresponding category of algebras. Via the Grothendieck construction, this is equivalent to a fibration ∫_→ and the category of elements ∫_ is equivalent to the categoryof operadic pairs.admits a left adjoint(cf. <cit.>), which sends an operadic pair (;A) to the corresponding enveloping operad (;A)Â. Following the description provided by <cit.>,  is generated by the symbols (μ;a_1 a_k|, for every μ∈(m+k), a_1 a_k∈ A and every non-negative integer k (when k=0, (μ| are the only terms) which fulfill the following relations:(μ;a_1 ν(a_i a_i+n) a_k+n|=(μø_iν;a_1 a_k+n|for μ∈(m+k), ν∈(n) and a_1 a_k+n∈ A, where we used the notation μø_iν for μ(1_ ν 1_). 
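For orientation, here is the simplest worked instance (added as an illustration; we assume the operad is that of unital commutative algebras, with a single basic operation in each arity, a convention not fixed in the text above). The relations collapse every generator onto a single element of A:
\[
\mathrm{Com}_{A}(m) \;\cong\; A ,
\qquad
(\mu;\,a_{1}\dots a_{k}|\,(b_{1},\dots ,b_{m}) \;=\; a_{1}\cdots a_{k}\,b_{1}\cdots b_{m},
\]
\[
\text{so that an algebra over } \mathrm{Com}_{A} \text{ is precisely a commutative algebra } B \text{ together with a morphism } A\to B .
\]
In particular the arity-zero part is A itself, in line with the general computation carried out below.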
In particular, it is not hard to see that Â(0)≅ A. So, the functorsends a morphism of operadic pairs (f,φ)(;A)→(;B) to the morphism of operads (f;φ)Â→B̂ defined on generators as follows:(μ;a_1 a_k|↦(f(μ);φ(a_1) φ(a_k)|From this description of the enveloping operad, it is not hard to see that an algebra B of the enveloping operad  is precisely given by a -algebra C^B, C→ being the canonical inclusion μ↦(μ|, together with amorphism of -algebras A→ C^B induced by the structure map A=Â(0)→ C^B of B.Conversely, every morphism of -algebras f A→ B induces over B a Â-algebra structure defined as follows:(μ;a_1 a_k|(b_1 b_m)μ̄_B(f(a_1) f(a_k),b_1 b_m)for μ∈(m+k), a_1 a_k∈ A and b_1 b_m∈ B. This proves that the category of Â-algebras is equivalent to the coslice category of -algebras over A (cf. <cit.>). §.§ The geometric tangent category of the enveloping operadTheorem <ref> establishes thatandform an adjunction and from our discussion on the enveloping operad we also know that alsoandform an adjunction. We would like to comparewithandwith . However,is a left adjoint, whileis a right adjoint and similarly,is a right adjoint andis a left adjoint. To solve this issue, we transpose the adjunction ⊣ to the opposite categories. To compare these functors, notice that ^ extends to operadic pairs as follows:^^→^(;A)(̄();A)^((f,φ)(;A)→(;B))(̄^(f)=(f^,α^);φ^ A←φ^B)(();A)→(();B)Note that, since _ is cocomplete, () is sliceable. The following diagram:^ ^["^"', from=1-1, to=2-1] ["^", from=1-2, to=2-2] ["", from=1-1, to=1-2] [""', from=2-1, to=2-2]commutes.It is straightforward to see that, for an operad :^((())=(();(0))=(())=(^())and for a morphism of operads f→:^((f))=^(f;! f^(0)←(0))==(f^,α^;!(0)→ f^(0))=(f^,α^)=(^(f))So, the diagram commutes. Thanks to Lemma <ref> we can now also compare the functorsand . Crucially, to do that we are going to use that ⊣ (on the opposite categories) and that ⊣ form adjunctions. In general, given a square diagram as follows:∙ ∙ ∙ ∙["F'", shift left=2, from=1-1, to=1-2] ["U'", shift left=2, from=1-2, to=1-1] ["F", shift left=2, from=2-1, to=2-2] ["U", shift left=2, from=2-2, to=2-1] ["G"', from=1-1, to=2-1] ["H", from=1-2, to=2-2]with (η,ϵ) F⊣ U and (η',ϵ') F'⊣ U' forming adjunctions, then if the diagram:∙ ∙ ∙ ∙["F'", from=1-1, to=1-2] ["F", from=2-1, to=2-2] ["G"', from=1-1, to=2-1] ["H", from=1-2, to=2-2]commutes, then, by using mates, we can define the following natural transformation:Gø U'Uø Fø Gø U'=Uø Hø F'ø U'Uø HA priori, there is no reason to conclude that such a natural transformation is a natural isomorphism. In order to prove that the natural transformation induced by the adjunctions ⊣, ⊣, and by Lemma <ref> is an isomorphism, we need to show that the counit of ⊣ induces a Cartesian morphism of tangent pairs over the geometric tangent pairs. The counit, regarded as a morphism of , (C,ϵ)(;A)→((;A))=(^A;Â(0)) of the adjunction ⊣ induces a Cartesian morphism of tangent pairs:^(C,ϵ)^(Â;Â(0))→^(;A)Let's start by recalling the definition of the counit. C→ is the morphism of operads which includesinto  by mapping μ∈(m) into (μ|∈Â(m). Moreover, ϵ A→ C^Â(0) is the isomorphism A∋ a↦(1_;a|∈ C^Â(0), where 1_∈(1) is the unit of . To see that this is an isomorphism, notice that the generators of Â(0) are all the symbols (μ;a_1 a_m| for every μ∈(m) and a_1 a_m∈ A, but thanks to the relations (<ref>) we also have:(μ;a_1 a_m|=(1_(μ);a_1 a_m|=(1_;μ(a_1 a_m)|So, with the identification a=(1_;a| we have that Â(0) is equal to A. 
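For the operad of unital associative algebras the same construction recovers the classical enveloping algebra (again an added illustration, with μ a suitable ternary operation):
\[
\mathrm{Ass}_{A}(1) \;\cong\; A\otimes_{R} A^{\mathrm{op}},
\qquad
(\mu;\,a,a'|\,(b) \;=\; a\,b\,a' ,
\]
\[
\text{so that left modules over } \mathrm{Ass}_{A}(1) \text{ are exactly } A\text{-bimodules}.
\]
This is the operadic enveloping algebra that reappears in the discussion of differential objects further below.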
Notice also that, given a Â-algebra B, C^B is the -algebra over B with structure map defined by:μ(b_1 b_m)(̄μ|_B(b_1 b_m)To distinguish between the different tangent structures, for this proof we adopt the following convention: we denote bythe geometric tangent structure of , by B̂ the slice tangent structure over B, and by _A the geometric tangent structure of Â.The Cartesianity of ^(C,ϵ) means that for a morphism f B→ E of Â-algebras the diagrams in the category of -algebras:C^T_AB C^T_AE C^B C^(_A)B̂E["C^z_A"', from=1-1, to=2-1] ["C^v_f", from=1-2, to=2-2] ["C^_Af", from=1-1, to=1-2] ["C^f_∗"', from=2-1, to=2-2]C^B C^_ABC^E C^_AE["α^", from=1-1, to=1-2] ["α^"', from=2-1, to=2-2] [" Cf"', from=1-1, to=2-1] ["C^_Af", from=1-2, to=2-2]AA C^Â(0) C^Â(0)["z", from=1-1, to=1-2] ["ϵ", from=1-2, to=2-2] ["ϵ"', from=1-1, to=2-1] ["z_C^"', from=2-1, to=2-2]are all pushout diagrams, where f_ is the morphism defined by the pushout diagram in _Â:T_AB T_AEB(_A)B̂E["z_A"', from=1-1, to=2-1] ["v_f", from=1-2, to=2-2] ["_Af", from=1-1, to=1-2] ["f_∗"', from=2-1, to=2-2] ["⌟"anchor=center, pos=0.125, rotate=180, draw=none, from=2-2, to=1-1]Notice that the third diagram is trivially a pushout since ϵ is an isomorphism. Let's focus on the first diagram and let's consider two morphisms g C^ E→ K and h C^A→ K of -algebras satisfying the commutativity of the diagram:C^T_AB C^T_AE C^B C^(_A)B̂E K["C^z_A"', from=1-1, to=2-1] ["C^v_f", from=1-2, to=2-2] ["C^_Af", from=1-1, to=1-2] ["C^f_∗"', from=2-1, to=2-2] ["g"', bend right, from=2-1, to=3-3] ["h", bend left, from=1-2, to=3-3]So, we have that:g(b)=h(f(b))h(f̣(b))=0for every b∈ B. Recall that Â-algebras are equivalent to -algebra morphisms with A for domain. Since B is a Â-algebra, we obtain a -algebra morphism β A→ C^B. So, by post-composing by g we get a new -algebra morphism AC^BK, thus, we get a Â-algebra structure over K. Concretely, the structure map of this Â-algebra K̅ is defined by:(μ;a_1 a_k|_K̅(x_1 x_m)μ̄_K(g(β(a_1) g(β(a_k)),x_1 x_m)Moreover, we can also lift g to a morphism of Â-algebras g̅ B→K̅, defined simply by b↦ g(b). Let's now do the same for h: define a morphism of Â-algebras h̅_AE→K̅. To do so, note also that _AE is a Â-algebra, which corresponds to a morphism of -algebras γ A→ C^_AE but because f is a morphism of Â-algebras, we have that for any a∈ A:γ(a)=p_A(f(β(a)))=f(β(a))where we used that the projection E→_AE sends each element x∈ E to itself, _AE being generated by all x and all x̣. This implies that we can define h̅ as the morphism which sends each y∈_AE to h(y). To see that this is a morphism of Â-algebras note that:(μ;a_1 a_k|_K̅(h̅(y_1) h̅(y_m))= μ_K(g(β(a_1)) g(β(a_k)),h(y_1) h(y_m))= μ_K(h(f(β(a_1))) h(f(β(a_k))),h(y_1) h(y_m))= h(μ_C^_AE(f(β(a_1)) f(β(a_k)),y_1 y_m))= h(μ_C^_AE(γ(a_1) γ(a_k),y_1 y_m))= h((μ;a_1 a_k|__AE(y_1 y_m))where we used that g(b)=h(f(b)), for any b∈ B. Moreover, note that C^K̅=K, C^g̅=g and that C^h̅=h. Thus, we now have the following commutative diagram in _Â:T_AB T_AEB(_A)B̂E K̅["z_A"', from=1-1, to=2-1] ["v_f", from=1-2, to=2-2] ["_Af", from=1-1, to=1-2] ["f_∗"', from=2-1, to=2-2] ["g̅"', bend right, from=2-1, to=3-3] ["h̅", bend left, from=1-2, to=3-3] ["⌟"anchor=center, pos=0.125, rotate=180, draw=none, from=2-2, to=1-1] ["[g,h]"description, dashed, from=2-2, to=3-3]To see that recall that g(b)=h(f(b)) and that h(f̣(b))=0, which precisely implies the commutativity of this diagram. Therefore, we have a unique morphism [g,h](_A)B̂E→K̅ of Â-algebras. 
So, let's introduce:[g,h]C̄^([g,h]) C^(_A)B̂E→ C^K̅=KHowever:C^f[g,h]=C^fC^([g,h])=C^(f[g,h])=C^g̅=gC^v_f[g,h]=C^v_fC^([g,h])=C^(v_f[g,h])=C^h̅=hFinally, suppose that r C^(_A)B̂E→ K is a second morphism of -algebra such that (C^f)r=g and (C^v_f)r=h. However, in a similar fashion we can also lift r to a morphism of Â-algebras r̅(_A)B̂E→K̅ such that C^r̅=r. But this implies that r̅=[g,h] and thus, r=C^r̅=C^([g,h])=[g,h]. This proves that the first diagram is a pushout.Finally, let's prove that the diagram which expresses the naturality of α^ is also a pushout. The first step is to lift α^ to a morphism of Â-algebras α^ so that C^(α^)=α^. Secondly, we are going to show that α^ is a coequalizer morphism from direct inspection, and finally, we use that C^ preserves the universality property of α^ to conclude our result.Let's start by noticing that, since B is a Â-algebra it corresponds to a morphism of -algebras β A→ C^B. Moreover, using the projection we obtain a morphism AC^B C^B of -algebras which defines a new Â-algebra C^B. Concretely, this is the Â-algebra defined over C^B whose structure map is defined by:(μ;a_1 a_k|(x_1 x_m)μ̄_ C^B(β(a_1) β(a_k),x_1 x_m)Then, it is not hard to see that α^ can be lifted to a morphism of Â-algebras α^ C^B→_AB, which sends an element y∈ C^B to α^(y)∈_AB. Recall also that, by construction, α^ sends the generators b and ḅ of C^B to the corresponding generators b and _̣Ab of C^_AB.By direct inspection we see that the Â-algebra _AB is generated by all b∈ B and by symbols _̣Ab for b∈ B, satisfying the following properties:(μ;a_1 a_k|__AB(b_1 b_m)=(μ;a_1 a_k|_B(b_1 b_m)=μ_C^B(β(a_1) β(a_k),b_1 b_m)_̣A((μ;a_1 a_k|(b_1 b_m))=∑_j=1^m(μ;a_1 a_k|(b_1 _̣Ab_j b_m)=∑_j=1^mμ_C^_AB(β(a_1) β(a_k),b_1 _̣Ab_j b_m)Similarly, it is not hard to see that C^B is also generated by b∈ B and by symbols ḅ, for b∈ B, satisfying the following properties:(μ;a_1 a_k|_ C^B(b_1 b_m)=μ_ C^B(β(a_1) β(a_k),b_1 b_m)=μ_C^B(β(a_1) β(a_k),b_1 b_m)(̣μ(b_1 b_m))=∑_j=1^mμ(b_1 ḅ_j b_m)It is clear from this that the relations of _AB imply the ones of C^B. Since α^ sends generators to corresponding generators, this implies that _AB can be represented as a quotient algebra of C^B over a specific ideal I, that is _AB≅ C^B/I, and that α^ is the quotient map C^B→ C^B/I. Direct inspection shows that the ideal I is generated by all the _̣A(β(a)) for every a∈ A, that is in _AB, _̣A(β(a))=0.Using a similar argument as the one we used to prove that the first diagram was a pushout, we conclude also that α^ is a quotient map C^B→ C^_AB, so that C^_AB is a quotient algebra of C^B over an ideal I generated by _̣A(β(a))=0.Let's now come back to the naturality diagram and consider g C^E→ K and h C^_AB→ K as follows:C^B C^_ABC^E C^_AEK ["α^*", from=1-1, to=1-2] ["α^*"', from=2-1, to=2-2] [" C^f"', from=1-1, to=2-1] ["C^_Af", from=1-2, to=2-2] ["g"', bend right, from=2-1, to=3-3] ["h", bend left, from=1-2, to=3-3]This implies that:h(b)=g(f(b))h(_̣Ab)=g(f(ḅ))=g(f̣(b))for every b∈ B. Notice that, since E is a Â-algebra, we can also define a morphism of -algebras γ A→ E and that since f is a morphism of Â-algebras we have that f(β(a))=γ(a). So, to lift g to C^_AE we need to show that g(γ̣(a))=0, however, we have the following:g(γ̣(a))= g(f̣(β(a))= g(ḥ(_̣Aβ(a)))= 0where we used that _̣Aβ(a)=0. This finally proves that we can lift g to C^E/I=C^_AE, that is we find a morphism [g,h] C^_AB→ K. 
We leave to the reader to prove that such a morphism is the unique morphism which makes commuting the following diagram:C^B C^_ABC^E C^_AEK ["α^*", from=1-1, to=1-2] ["α^*"', from=2-1, to=2-2] [" C^f"', from=1-1, to=2-1] ["C^_Af", from=1-2, to=2-2] ["g"', bend right, from=2-1, to=3-3] ["h", bend left, from=1-2, to=3-3] ["[g,h]"description, dashed, from=2-2, to=3-3]This concludes the proof. We can finally prove the main result of this paper. Consider the tangent morphism obtained as follows:^øøø^ø≅≅ø^øøø^This defines an equivalence of pseudofunctors which makes commutative the following diagram:^ ^["^"', from=1-1, to=2-1] ["^", from=1-2, to=2-2] [""', from=1-2, to=1-1] ["", from=2-2, to=2-1]By Theorem <ref>, (U,η) is an equivalence of tangent categories. Moreover, thanks to Lemma <ref>, ^(C,ϵ) is a Cartesian morphism of tangent pairs. By Lemma <ref>,maps Cartesian morphisms into strong tangent morphisms. Thus, (^(C,ϵ)) is strong. Finally, thanks to <cit.> the functorial component of (^(C,ϵ)) is an isomorphism between the categories of Â-algebras and the coslice category of -algebras under A, i.e. the slice category _^/A. Therefore, (^(C,ϵ)) is an equivalence of tangent categories.Given an operadand a -algebra A, the geometric tangent category of the enveloping operad  ofover A is equivalent, as a tangent category, to the slice tangent category over A of the geometric tangent category of . In formulas:(Â)=()/A Thanks to this characterization, we can now understand the vector fields over a Â-algebra. For this purpose, recall that for amorphism of -algebras β A→ B and a B-module M (see Section <ref> for details) an β-relative derivation is a derivation δ B→ M, i.e. an R-linear morphism which satisfies the Leibniz rule:δ(μ(b_1 b_m))=∑_k=1^mμ(b_1 δ(b_k) b_m)and moreover δøβ=0. For an operad , a -algebra A, and a Â-algebra B, the vector fields over B in the geometric tangent category of  are in bijective correspondence with β-relative derivations, β A→ C^B being the morphism of -algebras corresponding to the Â-algebra B.Recall that in <cit.> it was proved that vector fields in a geometric tangent category of an operad correspond to derivations over the operadic algebras. Concretely, a vector field v A→ A, regarded a morphism of -algebras, corresponds to a derivation δ_v A→ A defined by:δ_v(a)v̄(ạ)Viceversa, a derivation δ defines a vector field v_δ A→ A by:v(a)āv(ạ)δ̄(a)Thanks to Theorem <ref>, we have that (Â)≅()/A, thus, given a morphism β A→ C^B, by definition of the slice tangent category, the tangent bundle functor  of ()/A is given by the coequalizer (in the category of -algebras):C^AC^B ÂB["β", shift left=2, from=1-1, to=1-2] ["β zp"', shift right=2, from=1-1, to=1-2] ["v_β", dashed, from=1-2, to=1-3]or equivalently, by the pushout diagram:AC^BAÂB["v_β", from=1-2, to=2-2] ["β", from=1-1, to=1-2] ["z"', from=1-1, to=2-1] ["β_∗"', from=2-1, to=2-2] ["⌟"anchor=center, pos=0.125, rotate=180, draw=none, from=2-2, to=1-1]This implies that ÂB is the quotient of C^B by the ideal generated by β̣(a), for every a∈ A. Therefore, a vector field vÂB→ B corresponds to a derivation δ_v B→ B defined by δ_v(b)v̄(ḅ), and satisfying the following:δ_v(β(a))=v(β̣(a))=0that is a β-relative derivation of B. 
Conversely, a β-relative derivation δ B→ B being a derivation over B, defines a vector field v_δ C^B→ C^B over C^B by v_δ(b)b̄ and v_δ(ḅ)=δ(b), but since δ is β-relative, v_δ(β̣(a))=δ(β(a))=0, thus v_δ lifts to ÂB→ B.§.§ The differential bundles of affine schemes over an operadIn <cit.>, the classification of differential objects for the geometric tangent category of an operadwas given. Roughly speaking, differential objects in a tangent category, first introduced by Cockett and Cruttwell in <cit.>, are the objects whose tangent bundle is trivial. In the tangent category of (connected) finite-dimensional smooth manifolds, differential objects correspond to the manifolds ℝ^m, for all integers m. For the geometric tangent category () of an operad , differential objects are in bijective correspondence with (1)-left modules, where we recall that (1) becomes a unital and associative ring, once equipped with the unit and the composition of .A related concept is the notion of differential bundles, introduced by Cockett and Cruttwell in <cit.>. Roughly speaking, differential bundles are bundles whose fibres are differential objects (cf. <cit.>). More precisely, a differential bundle over A∈ in a tangent category (,) consists of a morphism q E→ A which admits pullbacks along any other morphism B→ A, together with a zero morphism z_q A→ E, a sum morphism s_q E_2→ E, E_2 being the pullback of q along itself, and a vertical lift l_q E→ E satisfying a similar universality property of the vertical lift of the tangent structure . In <cit.>, MacAdam proved that in the tangent category of finite-dimensional smooth manifolds, differential bundles are precisely vector bundles.[There is a slight difference between vector bundles and differential bundles over smooth manifolds. Vector bundles are defined as fibre bundles whose typical fibre is a vector space. In general, differential bundles don't have a typical fibre and, when the manifold is not connected, they allow different connected components to have fibres with different dimensions. The two notions coincide for connected smooth manifolds.] We also recall that a linear morphism f(q E→ A)→(q' E'→ A) of differential bundles over A∈ is a morphism f E→ E' compatible with the lifts.In this section, we are going to prove an important result: differential bundles over an operadic affine scheme A in the geometric tangent category () of an operadare equivalent to A-modules in the operadic sense. We recall that a module over a -algebra A consists of an R-module M equipped with a collection of R-linear morphisms (m+1) A^ m M→ M satisfying an equivariance condition with respect to the symmetric action, and associativity and unitality with respect to the structure map of A. We invite the interested reader to consult <cit.>, <cit.> and <cit.> for a detailed definition of modules over operadic algebras.First, we prove that the correspondence between differential objects and left (1)-modules shown in <cit.> extends to a functorial equivalence between the category _() of differential objects and linear morphisms of the geometric tangent category () ofand the opposite of the category of left (1)-modules. We also prove that this equivalence is indeed an equivalence of tangent categories.To understand what is the tangent structure over _(1)^, notice that, for any associative and unital R-algebra A, _A is a semi-additive category, that is it has finite biproducts, denoted by ⊕. 
Thus, it comes with the canonical tangent structure _A, whose tangent bundle functor is the diagonal functor _AMM̄⊕ M. It is straightforward to see that _A is left-adjoint to itself, thus it also defines an adjoint tangent structure _A over the opposite category _A^.It is interesting to note that (see <cit.>) given an associative and unital R-algebra A, the geometric tangent category of the associated operad A^∙ whose only non-trivial entry is A^∙(1)=A, is precisely (_A^,_A). So, in particular, (_(1)^,_(1)) is the geometric tangent category of (1)^∙.Note also that this construction extends to operadic algebras. To see that take into account a -algebra A and let _(A) be its enveloping algebras. Concretely, the enveloping algebra _(A) of A corresponds to the associative and unital algebra Â(1) which satisfies the following property: the category of modules over A is equivalent to the category of left modules over _(A). Thus, let A^∙ be the operad whose only non-trivial entry is A^∙(1)_(A). Thus, (A^∙)≅(__(A)^,__(A))≅(_A^,_A). For an operad , the tangent category _() of differential objects and linear morphisms of the geometric tangent category () ofis equivalent to the geometric tangent category ((1)^∙)=(_(1)^,_(1)) associated with the associative and unital R-algebra (1).First, recall that (1) is the enveloping algebra of the initial -algebra (0) (cf. <cit.>), thus _(1)≅_(0). Recall also that in <cit.> it was proved the existence of a functor, for every -algebra A, _A_A→ A/_, which sends an A-module M, in the operadic sense, to a morphism of -algebras A→_AM. In particular, this defines a functor _A_A→_, which maps each M to _AM. In <cit.> it was proved that, for every (0)-module M, _(0)M comes equipped with a canonical differential structure, so that _(0)M is a differential object of (). Conversely, for a differential object A∈(), there is a canonical vertical lift l A→ A (regarded as a morphism of -algebras). In particular, l defines a derivation over A, as follows:δ_l(a)↦ l(ạ)It was shown that the image of δ_l gives a (0)-module UA and that the correspondence M↦_(0)M and A→ UA are inverses to each other, up to a canonical isomorphism. It is not hard to prove that this correspondence extends to a correspondence between linear morphisms. In particular, this means that given a (0)-linear morphism f M→ N of (0)-modules, _(0)f is again linear, in the sense that is compatible with the lifts of the corresponding differential objects. Similarly, given a linear morphism of differential objects g A→ B (regarded as a morphism of -algebras), define Ug as the morphism whose domain is the image of δ_l, l being the vertical lift of A. So, for each a∈ A, Ug(δ_l(a))ḡ(l(ạ)). However, since g is compatible with the lifts, we have that g(l(ạ))=l'(g̣(a))=δ_l'(g(a)), l' being the vertical lift of B. Thus, Ug is well-defined and also (0)-linear. Finally, this correspondence is functorial and it extends to an equivalence of categories. Finally, notice that the tangent structure over differential objects reduces to a Cartesian differential structure (cf. <cit.>), thus the tangent bundle functorsends a differential object A to A× A, being × the Cartesian product. Moreover, since all morphisms are linear, the same is true for morphisms as well, i.e. f≅ f× f. However, Cartesian products in () are coproducts in _ and _(0) preserves coproducts, thus _(0) is isomorphic to the tangent bundle functor over _(0)^. Similarly, _(0) maps all the natural transformations of the tangent structureto the ones of _(0). 
This concludes the proof. Cockett and Cruttwell in <cit.>) proved that differential bundles over an object A∈ of a tangent category (,) are equivalent to differential objects of the slice tangent category (,)/A of (,) over A. It is not hard to see that this correspondence extends to an equivalence of tangent categories:(,;A)≅((,)/A)between the tangent category (,;A) of differential bundles over A and the tangent category ((,)/A) of differential objects of the slice tangent category (,)/A. Moreover, this equivalence restricts to linear morphisms, that is:_(,;A)≅_((,)/A)whereindicates that morphisms are only linear morphisms (cf. <cit.>).Let's denote by _(;A) the tangent category of differential bundles and linear morphisms over a -affine scheme A in the geometric tangent category () of an operad . Letbe an operad and A a -affine scheme. Then the tangent category _(;A) of differential bundles over A and linear morphisms in the geometric tangent category ofis equivalent to the geometric tangent category of the operad A^∙:_(;A)≅(A^∙)≅(_A^,_A)In particular, differential bundles over A are equivalent to A-modules in the operadic sense and linear morphisms of differential bundles over A are equivalent to A-linear morphisms of A-modules (in the opposite of the category of A-modules).Take into account an operadand a -algebra A. Then, the tangent category _(;A) of differential bundles over A and linear morphisms in the geometric tangent category () ofis equivalent to the tangent category _(()/A) of differential objects and linear morphisms of the slice tangent category ()/A. Thanks to Theorem <ref>, ()/A≅(Â),  being the enveloping operad ofover A. By Proposition <ref>, differential objects over (Â) are Â(1)-left modules; in particular, _((Â))≅(Â(1)^∙), but Â(1) is the enveloping algebra of A (cf. <cit.>), thus (Â(1)^∙)≅(A^∙):_(;A)= _(();A)Diff. bundles are diff. objects in the slice tangent cat. ≅ _(()/A)) Theorem <ref> ≅ _((Â)) Proposition <ref> ≅ (Â(1)^∙)Â(1)=_(A) (cf. <cit.>) ≅ (A^∙)This concludes the proof.§ CONCLUSIONThe main results of this paper are the following:Theorem <ref> In Section <ref> we gave a new characterization for the operation which takes a tangent pair (,;A) to its associated slice tangent category (,)/A in terms of the adjunction ⊣.Theorem <ref> In Section <ref> we proved that the geometric tangent category of the enveloping operad of a -algebra A is equivalent to the slice tangent category ()/A of the geometric tangent category ofover A.Theorem <ref> In Section <ref> we classified differential bundles over operadic affine schemes in the geometric tangent category of an operad . We showed that differential bundles correspond to modules over the -algebras.We also proved some minor but striking results:Propositions <ref> and <ref> We showed that tangent categories are organized in a double category whose horizontal and vertical morphisms are respectively lax and colax tangent morphisms. 
We also classified conjunctions in this double category in terms of a colax and a lax tangent morphism whose underlying functors form an adjunction and whose distributive laws are mates along this adjunction.Propositions <ref> and <ref> We proved that the operation which takes an operad to its algebraic tangent category extends to a pair of functors ^ and _!.Proposition <ref> We proved that the operation which takes an operad to its geometric tangent category extends to a pair of functors ^ and _!.Lemma <ref> We proved that Cartesian morphisms of tangent pairs lift to the slice tangent categories as strong tangent morphisms.Corollary <ref> We classified vector fields over the geometric tangent category of the enveloping operad  as relative derivations.§.§ Future workThis paper is not just a natural continuation of <cit.> but also the beginning of a fruitful program of research dedicated to understanding the intimate relationship between operads and the geometrical features of their corresponding operadic affine schemes. The classification of differential bundles, of vector fields (already covered in <cit.>), and the classification of the geometric tangent category of the enveloping operads represent the starting point of this program. Here is a list of some possible future directions of research:*Classifications of connections. Connections, introduced in <cit.> are probably one of the most important geometrical tools available in a tangent category;*Classifications of principal bundles and principal connection. Principal bundles and principal connections were first introduced in the context of tangent categories by Cruttwell during a talk at Foundational Methods in Computer Science 2017 (General connections in tangent categories - FMCS 2017);*Study of sector forms and cohomology for operadic affine schemes (cf. <cit.>);*Study of curve objects and of differential equations for operadic affine schemes (cf. <cit.>);*An important application of this program is the study of associative affine schemes, which lead to a description of non-commutative algebraic geometry via tangent categories;*An important construction in the theory of operads is Kozsul duality (cf. <cit.>). A natural question is what is the geometric tangent category of the Kozsul dual of an operad ;*There is a notion of distributive laws between operads which allows two operads to be composed together. An example of an operad obtained via a distributive law between the operadandis the operad , whose algebras are Poisson algebras. What kind of relationship exists between the geometric tangent categories of two operadsandand the one of the operad obtained by composingandprovided a distributive law between them?These are only a few of the possible new paths of research that this paper inspires.[title=Bibliography]
http://arxiv.org/abs/2310.18174v1
{ "authors": [ "Marcello Lanfranchi" ], "categories": [ "math.AG", "math.CT", "18F40, 18M70" ], "primary_category": "math.AG", "published": "20231027144211", "title": "The differential bundles of the geometric tangent category of an operad" }
Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months
We analyze VeLO (versatile learned optimizer <cit.>), the largest scale attempt to train a general purpose “foundational” optimizer to date. VeLO was trained on thousands of machine learning tasks using over 4000 TPU months with the goal of producing an optimizer capable of generalizing to new problems while being hyperparameter free, and outperforming industry standards such as Adam. We independently evaluate VeLO on the MLCommons optimizer benchmark suite. We find that, contrary to initial claims: (1) VeLO has a critical hyperparameter that needs problem-specific tuning, (2) VeLO does not necessarily outperform competitors in quality of solution found, and (3) VeLO is not faster than competing optimizers at reducing the training loss. These observations call into question VeLO's generality and the value of the investment in training it.
§ INTRODUCTION
Meta-learning, or learning to learn, refers to the appealing vision of learning the learning algorithm itself, similarly to how deep learning replaced the tradition of handcrafted feature engineering <cit.>. Meta-learning has found compelling applications in various facets of AI. In particular, one notable application of meta-learning is to learn improved optimization strategies <cit.> that provide better or faster optimization than hand-crafted optimizers <cit.>. After initial successes in relatively small scale problems, researchers have recently focused on scaling learned optimizers <cit.>. A noteworthy example is VeLO <cit.>. Trained on a huge array of tasks with over 4000 TPU months, it aspires to be a `foundational' optimizer capable of solving any new problems more rapidly than hand-designed optimizers such as Adam. VeLO claimed multiple remarkable abilities, such as being at least 4× faster than Adam on 50% of tasks in the VeLOdrome suite. If true, VeLO would eventually pay for its up-front training cost by accelerating learning across the community. Nevertheless, evaluating optimizers – especially learned optimizers – is itself a very difficult problem with multiple facets including iteration and time-efficiency, quality and generalisation of minima discovered, hyperparameter sensitivity <cit.>, and generalisation of the learned optimiser itself. In this work, we critically analyse VeLO's performance to understand if it is as effective as claimed. Our evaluation casts doubt on its claimed efficacy, and whether scaling-up training is the silver bullet for optimizer learning that it has been in other areas of AI. Our contributions are: (1) Validation of VeLO: We conduct a rigorous, independent evaluation of VeLO's performance using an extended analysis based on the MLCommons benchmark (https://MLCommons.org/en/groups/research-algorithms/). (2) Claims Reassessment: Our empirical results challenge several key claims made in the original VeLO paper, specifically that of being hyperparameter-free, outperforming baselines in minimizing training objectives, and offering optimization speedups. (3) Introduction of Explicit Metrics: We introduce a set of carefully selected metrics that directly align with the fundamental objectives an optimizer should fulfil.
These metrics serve as a standardized framework for comparing VeLO against other optimizers.
§ PRELIMINARIES
Let f_θ be a function parameterized by θ, where θ is defined over some domain θ∈Θ. We refer to f_θ as the optimizee: the function being optimized. Performance of f_θ on a task 𝒯_i sampled from a distribution of tasks p(𝒯) can be measured by a loss function L_i(f_θ, 𝒯_i). The goal of learning is to find the minimizer θ^* = argmin_θ∈Θ L_i(f_θ,𝒯_i). Gradient descent minimizes the loss function by producing a sequence of updates of the form:
θ_t+1 = θ_t - α_t ∇_θ L_i(f_θ,𝒯_i)
Learning-to-optimize strategies reformulate gradient descent as θ_t+1 = θ_t + g(L_i(f_θ,𝒯_i)), which recovers standard gradient descent when g(·) is a simple scaling g(L_i(f_θ,𝒯_i)) = -α_t ∇_θ L_i(f_θ,𝒯_i). These approaches assume that performance can be improved by parameterizing the function g with some learnable parameters λ, e.g., defining a small MLP. Learning-to-optimize is usually formulated as a bi-level optimization problem where the goal is to learn the optimizer g so that the optimizee f achieves low loss on some task distribution after learning. More specifically:
λ^* = argmin_λ ∑_i=1^M ℒ(θ^*_i(λ),λ,𝒯_i) s.t. θ^*_i(λ) = argmin_θ L_i(θ,λ,𝒯_i)
where Eq. <ref> is solved with the learnable optimizer Eq. <ref>, and the optimizer learning objective is in Eq. <ref>. Compactly, the gradient of the loss, after t steps, on a given sampled task would then be <cit.>:
d L_t/dλ = ∂ L_t/∂λ + ∑_k=1^T ∂ L_t/∂θ_t (∏_i=k^T ∂θ_i/∂θ_i-1) ∂θ_k/∂λ,
which allows the optimizer to be learned with gradient descent.
§ BACKGROUND AND MOTIVATION
Learning Optimizers Searching for simple and symbolic update rules for training neural networks dates back to the 90's <cit.>. More recently, <cit.> parameterized the optimization algorithm as an LSTM which acts coordinate-wise on the inner-loop problem. Various work has since explored the design space of learning optimizers. The space spans a) the parameterization of the learned optimizer including its IO representation, b) the meta-training task distribution, c) meta-optimizers (the optimizer used to update the learned optimizer) and d) the outer-loop objective function for estimating the learned optimizer performance. Parameterizations included LSTMs <cit.>, hierarchical RNNs <cit.>, MLPs <cit.>, transformers <cit.> and hyper-networks <cit.>. Tree structured search spaces <cit.>, domain-specific languages <cit.> and evolutionary strategies <cit.> were also explored. Search spaces and black-box parameterizations can be learned using various techniques such as gradient-based meta-learning <cit.>, or evolutionary strategies <cit.>. Meta-loss functions also vary between inner-loop training <cit.>, validation loss <cit.>, or more complex objectives that measure resource-efficiency <cit.> and speed <cit.>.
Benchmarking optimizers Benchmarking optimizers – especially in deep learning – is extremely challenging, as there are many facets to optimizer quality including iterations and clock-time to convergence, quality of the solution found in non-convex problems, generalisation of the final solution to a validation or testing set, hyperparameter sensitivity, consistency of performance across different workloads, etc <cit.>. As discussed in <cit.>, this is the reason behind multiple apparently contradictory claims in the literature, and the lack of consensus on benchmarks and metrics compared to other areas of machine learning and AI.
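To make the bi-level formulation of the Preliminaries concrete before returning to these evaluation difficulties, the following toy sketch unrolls an inner loop whose "learned optimizer" is a single learnable log step size, and meta-trains it with an antithetic evolutionary-strategies estimate of the outer gradient. It is purely illustrative: the quadratic task family, the scalar update rule and all names are our own choices, not VeLO's.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # a task is a random convex quadratic L(theta) = 0.5 * h * (theta - c)^2
    return {"h": rng.uniform(0.5, 2.0), "c": rng.normal()}

def inner_loss(theta, task):
    return 0.5 * task["h"] * (theta - task["c"]) ** 2

def inner_grad(theta, task):
    return task["h"] * (theta - task["c"])

def unroll(lam, task, steps=20):
    # inner problem: theta_{t+1} = theta_t + g(...), here g = -exp(lam) * grad
    theta = 0.0
    for _ in range(steps):
        theta = theta - np.exp(lam) * inner_grad(theta, task)
    return inner_loss(theta, task)

def outer_objective(lam, tasks):
    # average over sampled tasks of the final loss (the outer objective)
    return float(np.mean([unroll(lam, t) for t in tasks]))

lam, sigma, lr = np.log(0.05), 0.1, 0.2
for _ in range(200):
    tasks = [sample_task() for _ in range(16)]
    eps = rng.normal()
    g_hat = (outer_objective(lam + sigma * eps, tasks)
             - outer_objective(lam - sigma * eps, tasks)) * eps / (2 * sigma)
    lam -= lr * np.clip(g_hat, -5.0, 5.0)   # clipped for numerical robustness
print("learned inner step size:", np.exp(lam))

VeLO replaces the scalar step size above with a hierarchical hypernetwork and runs a comparable evolutionary-strategies outer loop at vastly larger scale, as reviewed below.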
These many facets make it challenging to compare optimizers, as a method may excel at one facet while falling down in another. For the reasons discussed above, apples-to-oranges comparisons are common in the literature, and can lead to misleading conclusions: for example, comparing optimizer performance without controlling hyperparameter tuning or the HPO objective <cit.>. This has led to a few attempts to establish common evaluation frameworks for optimizers, notably MLCommons <cit.>, which can control for HPO.
Benchmarking learned optimizers This challenge of optimizer evaluation is further exacerbated when considering benchmarking of learned optimizers, as the cost of optimizer learning, and the robustness of the learned optimizer to diverse and out-of-distribution tasks, open up additional important criteria. As optimizer learning is a costly process, most learned optimizers justify themselves with amortization arguments: the idea that the up-front cost of optimizer learning can be paid off by the learned optimizer's improved solution to multiple subsequent tasks. However, the learned optimizer needs to be applied to new tasks for this justification to hold, as good solutions to the training tasks have already been found during optimizer training. Thus, the practical value of a learned optimizer is intrinsically intertwined with both its efficacy and how well it generalizes to new tasks. All this makes fairly benchmarking learned optimizers even harder than benchmarking handcrafted optimizers. The VeLO optimizer <cit.> aspired to achieve both efficient optimization and cross-task generalization by large scale optimizer training on a huge problem suite. It then evaluated the resulting optimizer on the VeLOdrome task suite <cit.> and an early version of the MLCommons optimizer benchmark suite <cit.>, where it claimed to provide decisive efficiency improvements over competitors, thus justifying its huge up-front training cost. This paper critically evaluates these claims.
§ VELO: VERSATILE LEARNED OPTIMIZER
VeLO Architecture VeLO <cit.> is a learned optimizer trained with the outer-objective (Eq <ref>) of minimizing the training loss. The learned optimizer is parameterized as a hierarchical hypernetwork: a per-tensor LSTM that generates the parameters for a per-parameter MLP. The per-tensor hypernetwork operates on features aggregated from each parameter tensor, i.e., each neural network layer. VeLO optimizer states and inputs include the current iteration number, momentum at different timescales, squared gradients, an adafactor-style accumulator, loss exponentially-moving-average features, and tensor rank.
VeLO Training The meta-training task distribution included MLPs, CNNs, ResNets, ViTs, auto-encoders, variational auto-encoders, RNNs, and vanilla Transformers of various sizes. The architectures included dynamic configurations such as initialization and activation functions. Standard training datasets for the image and language domains were used, such as 16×16 ImageNet, CIFAR 10 and 100, Fashion MNIST, LM1B, and English Wikipedia, among others. The meta-optimizer used was a standard evolutionary strategy with antithetic samples <cit.>. Meta-training spanned a total of 4000 TPU months with an online HPO procedure divided across 4 phases. Problem sizes and training unroll lengths were gradually increased over a curriculum which was found to improve meta-generalization.
VeLO Claims Some key VeLO claims are (a) achieving a 4× speedup over learning rate-tuned Adam on 50% of tasks while being 16× faster on 14% of the VeLOdrome suite of tasks (<cit.>, Fig. 1),
(b) out-performing hyperparameter tuned Adam on a suite of tasks from the MLCommons algorithms track (https://MLCommons.org/en/groups/research-algorithms/) in terms of the training loss (<cit.>, Sec. 4.2), and (c) out-performing hyperparameter tuned Adam's generalization (validation loss) on the same benchmark (<cit.>, App. G.7). It can be seen that VeLO's claims span the practical objectives of learned optimizer benchmarking: (a) training speedups and (b) absolute performance gains on both train and validation metrics, while (c) meta-generalizing to new task distributions including the VeLOdrome and MLCommons benchmarks, which is a key justification behind amortizing VeLO's meta-training cost.
Caveats Besides the inputs discussed in the architecture paragraph above, VeLO needs one special input: it must be prompted with the total training steps it is expected to run for in order to initialize its states. This is then used to estimate the fraction of training remaining online during learning. For an explanation of how to control for this factor of variation fairly, we refer the reader to appendix <ref>.
§ BENCHMARK DESIGN
To examine VeLO's claims, our point of departure is the most recent time-to-result benchmark by MLCommons <cit.>. Comparing training curves to measure speedups is ill-posed. Therefore, the MLCommons <cit.> protocol measures learning speed by fixing a performance target (e.g., loss), and measuring the time/steps taken for an optimizer to reach this target. Since VeLO also reported improved solution quality, we extend this protocol to the complementary perspective of fixing an optimization time/step budget and measuring the loss achieved at this point.
Baselines and Workloads The original VeLO paper mainly compared with Adam. For a more thorough evaluation, we train several GD variants, namely SGD with Heavy Ball Momentum, SGD with Nesterov Momentum, Adam, NAdam (Adam with Nesterov Momentum) and NAdamW. We train all baselines with default hyperparameters as reported in appendix <ref>. All algorithms are trained for a maximum allowed budget, either runtime or steps, on 4 workloads from the MLCommons benchmark, namely ResNet-50 on ImageNet, GNN on OGBG, DLRM on Criteo-1TB and U-Net on FastMRI.
Measuring Training Speedups The key evaluation hyperparameter in MLCommons is the notion of a performance target (e.g., in units of loss) that defines a successful optimization. We can then measure speedups in terms of the wall-clock time or number of iterations taken to reach the target. Establishing performance targets is somewhat involved in the MLCommons methodology. First one sets a maximum allowed runtime in wall-clock or step count for each workload, runs multiple trials of all algorithms for the full budget, and then measures the performance of all algorithms' trials at 75% of the maximum allowed budget. Then, for each algorithm on each workload, the median performance is selected, and the best performing algorithm defines the target for the workload. This translates to target_w=max_a{median_s{L_a, s, w}} for all w∈𝒲, where L_a, s, w is the performance metric of interest achieved by algorithm a on trial s and workload w. Subsequently we can measure the time/steps t_a,s,w that the s-th trial of any given algorithm a takes to reach the target performance level target_w on workload w. To aggregate results, we can employ performance profiles <cit.>. Denote algorithms by 𝒜={a_1,a_2,..,a_k} and workloads as 𝒲={w_1,w_2,..,w_n}.
Then, given a workload w, we record the median time/steps taken for algorithm a to achieve the performance target across all trials/seeds as t_a,w=median_s{t_a,s,w}. Then, to score an algorithm â on a workload w, the performance ratio is defined as:
r_â,w = t_â,w / min_a∈𝒜 t_a,w
The performance profile ρ_â(τ) for an algorithm â on a random workload w drawn uniformly from 𝒲 is the probability of having a performance ratio r_â,w of at most τ:
ρ_â(τ) = (1/n) × |{w: r_â,w ≤ τ}|
Following <cit.>, the final score B_a for each algorithm integrates the performance profile over a pre-defined range [1, r_max] of τ values and normalizes by r_max-1. This means that an algorithm that is consistently the fastest across all workloads would have a score of 1.
(Figure <ref>: Illustration of optimizer learning metrics: time/steps to a performance target vs. performance achieved at a fixed time budget.)
In summary, for individual benchmarks w and algorithm a, we report time-to-target t_a,w. We measure both wall-clock time to target (denoted the time-control condition), and steps to target (the step-control condition). To aggregate across benchmarks we report the aggregate MLCommons score B_a.
Measuring Training Quality While MLCommons mainly focuses on training speedup, the complementary metric is the quality of the solution found within a certain time/step budget. To this end, we also assess training and validation performance p_a,w (e.g., loss, accuracy) after algorithm a reaches a certain time/step quota for workload w (denoted the performance-control condition). For the specific workload budgets and targets found, please see Appendix <ref>. The training speedup vs training-quality metrics are illustrated schematically in Figure <ref>.
§ EXPERIMENTS
We now set out to assess whether VeLO's claims are justified, and hence whether its large up-front training cost can be justified from an amortization perspective. Specifically, we ask the following questions: (Q1) Is VeLO hyperparameter free as claimed? (Q2) Does VeLO indeed outperform existing hand-crafted optimizers on training and validation loss minimization as claimed? (Q3) Does VeLO indeed provide dramatically faster optimization than standard baselines?
Conclusion 1: VeLO is Not Hyperparameter Free but Hyperparameter Sensitive. Recall that VeLO has one user-defined input: it requires prompting with the total number of steps (Sec <ref> and <cit.>). It will accelerate (attempt to converge faster, but possibly reach a worse minimum) if prompted with fewer steps. We study the MLCommons time-to-performance-target protocol for different values of this hyperparameter. We consider prompting with steps corresponding to either 100% or 75% of the MLCommons wall-clock max runtime. From the results in Table <ref> we see that the prompt is actually a key hyperparameter. For example, the 75% prompt reaches the Criteo training target before timeout, while the 100% prompt doesn't succeed in time. Meanwhile the 100% prompt reaches the OGBG training target before timeout, while the 75% prompt does not (it behaves too greedily and converges to a poor optimum worse than the required performance target). Overall VeLO is in fact sensitive to the number-of-steps hyperparameter, often crucially so.
Conclusion 2: VeLO Does Not Outperform Baselines in Minimizing Both Training and Validation Losses. VeLO reported outperforming hand-crafted optimizers in terms of achieving lower training and validation losses, both on the VeLOdrome and MLCommons (algorithm) benchmark suites.
But the associated experiments on MLCommons compared against Adam alone <cit.>. Meanwhile, <cit.> observe that different optimizers often `win' on different benchmarks. So we directly compare VeLO against a range of off-the-shelf optimizers with default hyperparameters in terms of optimization quality after a fixed step budget on MLCommons. From the results in Table <ref> (see full details in Appendix <ref>), we see a different picture: VeLO is not a consistent winner in either train or validation loss achieved, despite the fact that we conducted no HPO at all on the baselines. We attribute this discrepancy to two factors: (1) VeLO <cit.> evaluated too few competitors in the original comparison – as we see, different competitors win on different benchmarks/metrics. (2) VeLO's evaluation primarily focused on the VeLOdrome benchmarks, which were reportedly more similar to VeLO's training distribution <cit.>, and focused less on the MLCommons suite, which was reportedly more different. To the extent that this is the explanation, it suggests that VeLO is not as general purpose as claimed, and thus undermines the amortization argument used to justify its up-front training cost.
Conclusion 3: VeLO Does Not Provide Faster Training. VeLO claims substantially faster training. It was trained for the objective of fast training loss minimisation, and empirically observed to also provide fast validation loss minimisation. However, again these original claims were largely based on the VeLOdrome benchmark (which may be unrealistically easy, as discussed in the previous section), and in terms of MLCommons they were based on comparison to Adam alone. We now compare VeLO to a range of off-the-shelf optimizers with default hyperparameters on our four MLCommons tasks using the time/steps-to-performance-target protocol of MLCommons. The MLCommons benchmark results are presented in Table <ref> in terms of the aggregate MLCommons score B_a, which integrates over the algorithms' performance profiles (see Appendix <ref> and <ref> for details). Surprisingly, VeLO is far from best in training speed (which might be expected given it is optimised for training efficacy), although it surpasses some baselines in speed of minimising the validation loss. VeLO's loss to Adam in training efficiency we attribute to (1) weak generalisation to the MLCommons task suite, and (2) Adam's default learning rate decay schedule potentially being more effective than the outcome of the amount of HPO applied with Adam in <cit.>. VeLO's comparative success in validation is potentially attributable to several MLCommons workloads being in the overfitting regime (a regime where fully minimizing the training loss ultimately worsens validation performance), so VeLO's less effective minimisation of the train loss can lead to better validation than competitors. (Note that while we measure validation performance, all optimizers are run with default parameters and not tuned on validation metrics.) This is particularly the case in the time-control condition: because VeLO is slower per iteration than the baselines, it runs fewer iterations than the baselines when using a wall-clock time budget, and thus effectively benefits from early stopping compared to the baselines. Finally, returning to the hyperparameter sensitivity issue from Experiment 1, we also compare VeLO with 75% of the total step-prompt and see a noticeable impact in the score distribution.
Table <ref>: Optimizer speed evaluation (MLCommons score B_a, higher is better; Eq. <ref>).
Time-To-Result and Steps-To-Result scores are reported when fixing wall-clock time and steps respectively, across train and validation targets.
Optimizer | Train (Time) | Train (Step) | Validation (Time) | Validation (Step)
NAdam | 0.24 | 0.25 | 0.00 | 0.23
NAdamW | 0.36 | 0.25 | 0.49 | 0.36
Adam | 1.00 | 1.00 | 0.24 | 0.50
Nesterov | 0.00 | 0.00 | 0.00 | 0.00
HeavyBall | 0.00 | 0.00 | 0.00 | 0.00
VeLO | 0.19 | 0.00 | 0.71 | 0.39
VeLO Short | 0.16 | 0.03 | 0.74 | 0.59
§ CONCLUSION
Learned optimizers have shown substantial success on narrowly defined task distributions. VeLO scaled up optimizer learning to train a foundational optimizer on a vast task distribution at huge cost. The vision was that it would then generalize to arbitrary machine learning workloads, and outperform hand-crafted optimizers, thus justifying its up-front training cost. We were initially optimistic and excited to see this in action. However, ultimately, our independent evaluation on the MLCommons optimizer benchmark called into question most of VeLO's big claims of being hyperparameter free, and providing improved and faster optimization.
We extend sincere gratitude to Frank Schneider, Zach Nado, and George Dahl for their support. During the course of this research, they provided clarifications on conceptual ideas regarding the design and implementation of the MLCommons benchmark. This work was also supported by the Edinburgh International Data Facility (EIDF) and the Data-Driven Innovation Programme at the University of Edinburgh.
§ BENCHMARK DETAILS
Setting Maximum Allowed Wall-Clock Time Across Hardware The MLCommons benchmark is based on fixing a maximum allowed wall-clock time for each workload, denoted the time-control condition. To transfer this maximum allowed wall-clock runtime across hardware, we compute the ratio between the time per step of algorithms on their hardware (8×V100) and ours (1×A100-80GB or 2×A100-80GB). Then, this ratio is used as a multiplier factor of the maximum allowed wall-clock time for each workload. To get the time per step for the original V100 hardware, we use the total number of steps each algorithm runs for and the equivalent wall-clock time for those steps as supplemented by the authors in table 28 in <cit.>. For a given workload, the bold time entry is the maximum allowed wall-clock runtime. For the algorithm with this bold entry, the steps it runs for in the wall-clock time can be found in the Steps row. For reference, we copy the numbers here as in table <ref>. To transfer the wall-clock time, we execute the implementation of the algorithm with this bold time entry, as found at https://github.com/MLCommons/algorithmic-efficiency/tree/main/reference_algorithms/target_setting_algorithms, on our hardware for 5% of its steps. Then, we compute our time-per-step on both 1× and 2× A100 GPUs. Finally, we use the ratio between the V100 and A100 time-per-step as a multiplicative factor for the wall-clock time. The time per step hardware benchmarking results for A100 GPUs are shown in table <ref>. To maximally utilize our infrastructure, we use 2 GPUs for the FastMRI and ImageNet workloads and 1 GPU for the Criteo and OGBG experiments. The final maximum allowed wall-clock runtimes are also reported in table <ref>.
Established Targets in Maximum Allowed Time/Steps We set targets and measure time/steps to reach those targets as standardized in the MLCommons benchmark. To set the self-tuning regime targets, we use the methodology introduced in section <ref> as done originally by <cit.>. We set separate targets for the time-control and step-control conditions.
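For concreteness, the target-setting and scoring procedure described in the Benchmark Design section can be sketched as follows. This is a simplified illustration with our own data layout, variable names and an arbitrary r_max; the authoritative implementation is the MLCommons algorithmic-efficiency code base.

import numpy as np

# times_to_target[alg][workload] -> list of per-trial times/steps to reach the target
# (np.inf for trials that never reach it).
def median_time(times_to_target, alg, w):
    return float(np.median(times_to_target[alg][w]))

def set_target(perf_at_75pct, algs, w):
    # target_w = max_a median_s L_{a,s,w}, evaluated at 75% of the maximum budget
    return max(float(np.median(perf_at_75pct[a][w])) for a in algs)

def performance_ratios(times_to_target, algs, workloads):
    # r_{a,w} = t_{a,w} / min_a t_{a,w}
    ratios = {}
    for w in workloads:
        best = min(median_time(times_to_target, a, w) for a in algs)
        for a in algs:
            ratios[(a, w)] = median_time(times_to_target, a, w) / best
    return ratios

def profile(ratios, a, workloads, tau):
    # rho_a(tau): fraction of workloads with performance ratio at most tau
    return float(np.mean([ratios[(a, w)] <= tau for w in workloads]))

def benchmark_score(ratios, a, workloads, r_max=4.0, grid=200):
    # B_a: performance profile integrated over [1, r_max], normalized by r_max - 1
    taus = np.linspace(1.0, r_max, grid)
    return float(np.trapz([profile(ratios, a, workloads, t) for t in taus], taus) / (r_max - 1.0))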
Targets and maximum allowed runtimes for both time-control and step-control conditions are given in table <ref>. The tables also include the maximum allowed wall-clock time or maximum allowed steps to run for.
Measuring Training Quality For measuring training quality, we measure the final performance p_a,w achieved within a certain time/step budget. The time and step budgets used are the maximum allowed wall-clock time and maximum allowed steps used for the time-to-result benchmark. These maximum runtimes are presented in table <ref> as maximum allowed steps and maximum allowed time for the step-control and time-control conditions respectively.
§ MANAGING VELO INPUTS
VeLO requires the total steps at input to initialize its optimizer states. This is used to compute the percentage of remaining training, a feature input to the LSTM hypernetwork. We can follow two different approaches to provide this input to VeLO. First, we could refactor the benchmark and VeLO implementation to provide the percentage of time remaining directly as input while the experiment is running. This would be computed as the ratio of the time remaining and the total allowed runtime. The remaining time can be computed directly from the accumulated_submission_time variable of the MLCommons benchmark (https://github.com/MLCommons/algorithmic-efficiency/blob/main/submission_runner.py), which is updated every step by the profiler (https://github.com/MLCommons/algorithmic-efficiency/blob/main/algorithmic_efficiency/profiler.py). A simpler approach is estimating the steps VeLO can take within the maximum allowed wall-clock time. We opt for the latter. In table <ref>, we provide the estimates over two runs of VeLO for 5% of the step hints discussed in appendix <ref>. The implementation is written in JAX. JAX compiles the computational graphs using XLA. We omit the compilation times from the estimates since they are an insignificant fraction of the whole training runtime but can potentially influence the estimate over 5% of the runtime. To explain the rows in table <ref>, we first run VeLO for a fixed number of steps corresponding to the row Steps Run. These are the same steps used earlier in appendix <ref>. Then, we measure the total runtime as reported in Observed Runtime (sec). The Time Per Step (sec), the ratio of the first and second rows, is used to estimate the hyperparameter, Estimated Total Steps. Subsequently, we average the total steps VeLO can fit in the runtime over the two estimates. We train the workloads using VeLO from start to finish once and then update the total steps for each workload given the actual observed steps and run for two more trials. Since we take the median over trials, the evaluation of VeLO is insensitive to any outliers produced by the estimates. For the performance-control condition, where we run for a total fixed number of steps, we run VeLO from start to finish given the maximum allowed steps in table <ref>. Meanwhile, for the VeLO Short run, denoted also as VeLO (75%) in table <ref>, which is prompted with 75% of the steps VeLO can run in the maximum allowed wall-clock time, we use 75% of the steps reported in the final row of table <ref>.
§ DEFAULT HYPERPARAMETERS
For all default hyperparameters used for Adam and SGD variants, please refer to table <ref>. The learning rate schedule consists of a linear warmup followed by cosine decay as illustrated in figure <ref>. The schedule requires a total number of steps to operate on.
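As a rough sketch of this schedule (our own illustration; the schedule length and warmup fraction actually used are specified in the next paragraph, and the base learning rates are the defaults listed in the hyperparameter table):

import math

def learning_rate(step, total_steps, base_lr, warmup_frac=0.05):
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return base_lr * step / warmup_steps                                  # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * min(1.0, progress)))     # cosine decay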
We set the total steps of the schedule to 75% of the step hint provided by the MLCommons benchmark for each workload. The step hint is approximately the total steps the SGD variants can run for given the maximum allowed wall-clock time of the benchmark. We set the warmup and cosine decay steps to the first 5% of the schedule steps and the remaining 95% respectively.§ TIME-CONTROLLED EXPERIMENTS RESULTS §.§ Time-To-Result Measurements §.§ ImageNet Name train/loss train/accuracy validation/loss validation/accuracy Adam 0.0481 ± 0.00531 98.6401 ± 0.16370 1.9405 ± 0.00915 69.8780 ± 0.03027 Heavy Ball 0.1961 ± 0.06906 94.1552 ± 2.11355 1.7592 ± 0.05631 66.1827 ± 0.22902 NAdam 0.0558 ± 0.00040 98.3976 ± 0.05396 1.9516 ± 0.00162 70.0233 ± 0.13590 NAdamW 0.0479 ± 0.00101 98.7139 ± 0.01168 1.6328 ± 0.00854 71.4420 ± 0.06245 Nesterov 0.1572 ± 0.04028 95.4427 ± 1.31832 1.7840 ± 0.00779 66.2933 ± 0.34269 VeLO 0.0862 ± 0.00715 97.4058 ± 0.27273 1.5358 ± 0.02996 72.9073 ± 0.09617 VeLO Short 0.1445 ± 0.00158 95.6785 ± 0.04413 1.3775 ± 0.00333 73.2160 ± 0.10806 HPO 0.5380 ± 0.00695 92.0088 ± 0.23923 1.1170 ± 0.00383 77.4887 ± 0.11420 §.§ FastMRI Name train/loss train/ssim validation/loss validation/ssim Adam 0.2702 ± 0.00693 74.2444 ± 0.51236 0.2850 ± 0.00002 72.6139 ± 0.01243 Heavy Ball 0.2806 ± 0.00298 72.8935 ± 0.20797 0.2897 ± 0.00014 71.9518 ± 0.05269 NAdam 0.2692 ± 0.00396 74.3033 ± 0.50080 0.2851 ± 0.00022 72.6006 ± 0.03608 NAdamW 0.2750 ± 0.00102 73.4651 ± 0.09084 0.2851 ± 0.00015 72.5916 ± 0.04572 Nesterov 0.2809 ± 0.00508 72.9652 ± 0.48978 0.2898 ± 0.00005 71.9132 ± 0.02163 VeLO 0.2737 ± 0.00324 74.0819 ± 0.41933 0.2851 ± 0.00008 72.6663 ± 0.00503 VeLO Short 0.2763 ± 0.00128 73.6923 ± 0.09914 0.2850 ± 0.00026 72.6646 ± 0.02622 HPO 0.2716 ± 0.00371 74.2704 ± 0.30959 0.2851 ± 0.00067 72.6110 ± 0.13964§.§ Criteo-1TB Name train/loss validation/loss Adam 0.1222 ± 0.00098 0.1237 ± 0.00005 Heavy Ball 0.1268 ± 0.00110 0.1279 ± 0.00094 NAdam 0.1237 ± 0.00280 0.1255 ± 0.00317 NAdamW 0.1226 ± 0.00059 0.1237 ± 0.00003 Nesterov 0.1296 ± 0.00173 0.1298 ± 0.00153 VeLO 0.1232 ± 0.00039 0.1240 ± 0.00005 VeLO Short 0.1236 ± 0.00024 0.1242 ± 0.00003 HPO 0.1219 ± 0.00085 0.1237 ± 0.00012§.§ OGBGPlease note that on HPO results, 2 trials out of 3 were unstable, hence, the missing standard deviations.Name train/loss train/mAP validation/loss validation/mAP Adam 0.0165 ± 0.00045 76.3886 ± 1.51418 0.0515 ± 0.00021 27.3651 ± 0.18156 Heavy Ball 0.0329 ± 0.00012 31.9587 ± 0.42343 0.0461 ± 0.00022 23.0116 ± 0.01945 NAdam 0.0174 ± 0.00026 74.3218 ± 1.07672 0.0509 ± 0.00027 27.2395 ± 0.63613 NAdamW 0.0196 ± 0.00181 68.3208 ± 5.45994 0.0483 ± 0.00157 27.6925 ± 0.23413 Nesterov 0.0323 ± 0.00040 33.1559 ± 0.62190 0.0458 ± 0.00024 23.6963 ± 0.38710 VeLO 0.0164 ± 0.00037 76.6425 ± 0.92113 0.0510 ± 0.00017 27.4374 ± 0.30907 VeLO Short 0.0180 ± 0.00058 72.5836 ± 0.96198 0.0491 ± 0.00064 28.2645 ± 0.35253 HPO 0.0205 ± nan 58.8443 ± nan 0.0463 ± nan 28.9687 ± nan § STEP-CONTROLLED EXPERIMENTS RESULTS §.§ Steps-To-Result Measurements §.§ ImageNet Name train/loss train/accuracy validation/loss validation/accuracy Adam 0.0470 ± 0.00587 98.6554 ± 0.22522 1.9438 ± 0.00717 69.8487 ± 0.05294 Heavy Ball 0.2582 ± 0.02502 92.2350 ± 0.80171 1.6870 ± 0.02763 66.2940 ± 0.31674 NAdam 0.0571 ± 0.00060 98.3279 ± 0.04514 1.9528 ± 0.00120 69.9993 ± 0.11780 NAdamW 0.0470 ± 0.00106 98.7786 ± 0.04481 1.6345 ± 0.01098 71.4537 ± 0.07961 Nesterov 0.2512 ± 0.02606 92.4685 ± 0.86639 1.7022 ± 0.03822 66.3390 ± 0.35794 VeLO 0.1046 ± 0.00342 96.8478 ± 0.11937 
1.5017 ± 0.02693 72.9120 ± 0.09714 HPO 0.5542 ± 0.00117 91.5016 ± 0.06893 1.1154 ± 0.00098 77.4423 ± 0.08693§.§ FastMRI Name train/loss train/ssim validation/loss validation/ssim Adam 0.2702 ± 0.00693 74.2438 ± 0.51237 0.2850 ± 0.00003 72.6131 ± 0.01279 Heavy Ball 0.2809 ± 0.00282 72.8451 ± 0.14562 0.2899 ± 0.00008 71.9194 ± 0.02081 NAdam 0.2692 ± 0.00396 74.3011 ± 0.49641 0.2851 ± 0.00022 72.5980 ± 0.04064 NAdamW 0.2750 ± 0.00102 73.4640 ± 0.09133 0.2851 ± 0.00016 72.5903 ± 0.04674 Nesterov 0.2811 ± 0.00505 72.9825 ± 0.45841 0.2899 ± 0.00004 71.9274 ± 0.00075 VeLO 0.2804 ± 0.00008 73.5760 ± 0.01010 0.2851 ± 0.00022 72.6630 ± 0.02184 HPO 0.2716 ± 0.00381 74.2566 ± 0.29555 0.2850 ± 0.00080 72.6028 ± 0.13814§.§ Criteo Name train/loss validation/loss Adam 0.1225 ± 0.00015 0.1237 ± 0.00005 Heavy Ball 0.1293 ± 0.00045 0.1299 ± 0.00062 NAdam 0.1239 ± 0.00302 0.1256 ± 0.00315 NAdamW 0.1220 ± 0.00018 0.1237 ± 0.00005 Nesterov 0.1301 ± 0.00096 0.1305 ± 0.00067 VeLO 0.1229 ± 0.00034 0.1240 ± 0.00008 HPO 0.1222 ± 0.00058 0.1238 ± 0.00022§.§ OGBG Name train/loss train/mAP validation/loss validation/mAP Adam 0.0165 ± 0.00033 76.2472 ± 0.74703 0.0515 ± 0.00020 27.3603 ± 0.18697 Heavy Ball 0.0341 ± 0.00026 29.8791 ± 0.24316 0.0466 ± 0.00038 22.3993 ± 0.03845 NAdam 0.0173 ± 0.00033 74.3650 ± 0.94344 0.0509 ± 0.00027 27.2383 ± 0.63573 NAdamW 0.0197 ± 0.00196 68.1042 ± 5.87819 0.0483 ± 0.00157 27.6927 ± 0.23119 Nesterov 0.0331 ± 0.00008 31.8606 ± 0.30560 0.0462 ± 0.00027 22.9570 ± 0.22694 VeLO 0.0153 ± 0.00088 79.4321 ± 1.70572 0.0522 ± 0.00126 27.5886 ± 0.47789 HPO 4.1961 ± 3.61664 61.3348 ± nan 4.3632 ± 3.73861 28.9854 ± nan
http://arxiv.org/abs/2310.18191v1
{ "authors": [ "Fady Rezk", "Antreas Antoniou", "Henry Gouk", "Timothy Hospedales" ], "categories": [ "cs.LG", "cs.AI", "math.OC" ], "primary_category": "cs.LG", "published": "20231027150400", "title": "Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months" }
High precision calibration setup for loss measurements in electrical steel sheets
Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany
[email protected]
manuscript version 26.10.2023
We present details on the current measurement setup at PTB used for high precision loss calibrations in the frequency range 50 Hz to 1 kHz. A combination of analog and digital feedback control is utilized in accordance with the standard. A detailed measurement uncertainty (MU) analysis based on a systematic model equation is presented and inter-dependencies of model parameters are discussed. Experimental results obtained at 50 Hz on NO and GO Epstein samples show excellent agreement between statistical and systematic MU estimation and confirm the MU model analysis. Furthermore, we investigate the influence of external parameters on the loss measurements, like the sample loading scheme and the value of maximum demagnetization polarization.
Keywords: metrology, traceability, power loss, electrical steel sheets, Epstein frame, single sheet tester, measurement uncertainty
§ INTRODUCTION
§.§ General introduction to loss measurements
Electrical steel sheets are utilized in generators, transformers and engines. The need to save energy at all levels implies measuring the loss figures of electrical steel sheets with the highest precision. In addition, the design of electrical machines is done by computer simulations with finite element methods (FEMs). Important input parameters for FEM are high precision characteristics of magnetic steel sheets, including the power loss under varying temperature and frequency. Since energy conversion is a billion-dollar business, even small improvements have a large positive impact on the economy and they help to mitigate climate change.
§.§ PTB standards and beyond
The national metrology standard for loss calibrations in Germany is realized at PTB with record low measurement uncertainties (MUs) and traceability to SI units. Experimental setups contain customized electronics and procedures, and MU contributions are rigorously analyzed by statistical and systematic methods. On a scientific level, high precision experimental data open the opportunity to identify contributions to MUs in loss measurements that don't originate within the experimental method, but are caused by the preconditions of magnetic steel sheet samples. Recent round robin comparisons on SST and Epstein samples carried out among national metrology institutes (NMIs) reach similar conclusions <cit.>. Systematic investigations, as presented here, carry the following benefits: i) increased understanding of influence factors on loss measurements besides those already mentioned in normative standards and their addenda <cit.>, ii) the ability to reduce and eliminate those influences by setting more restrictive boundary conditions during measurements, and iii) the achievement of higher precision, reproducibility, and comparability of loss data. Similar studies have been reported before <cit.>, demonstrating the importance of the topic. Updated calibration routines further increased the quality of the experimental data <cit.>. Beneficiaries of the improvements are all parties using electrical steel sheets, from steel producers to manufacturers of electrical engines and generators. Ultimately, end-users of electrical energy benefit from lower consumption costs and a reduced impact on the environment. The article is organized as follows.
The experimental setup for traceable calibrations of the power loss at PTB is described in section <ref>. Section <ref> discusses in detail the estimation of MUs of power loss, followed by systematic investigations of sample effects and preconditions in section <ref>. The manuscript is finalized with Section <ref>: Summary and conclusions. § MEASUREMENT SETUP AT PTB §.§ Normative standardsCurrently, there exist two standard setups for the magnetic circuit in power loss measurements: Epstein frame and single sheet tester (SST)<cit.>. Both circuits mimic an unloaded transformer and they have advantages depending on the type of material characterized<cit.>. In the Epstein frame, strips of normalized length are arranged in primary and secondary coils forming a square with overlapping strip edges<cit.>. The design goes back to Epstein's work<cit.> and the method is well established within the community of steel producers and their customers that mostly characterize non grain oriented (NO) electrical steel sheets. The 2^nd setup to measure power losses was introduced in 1992 <cit.> for grain-oriented (GO) steel sheets. Here the magnetic circuit consists of a low loss yoke with a 50 cm by 50 cm large steel sheet placed within. The magnetic length l_m is considered to be better defined in an SST compared to the traditional Epstein frame, because magnetic flux paths in the four corners of the Epstein arrangement largely depend on the domain structure of the material. Although the SST and Epstein method are described in detail in the IEC standards 60404 <cit.>, the practical realization of measurement electronics for input and output quantities can vary between laboratories <cit.>. §.§ Technical realization at PTB At PTB, a hybrid control setup is currently used that was developed by Lüdke and Ahlers to originally measure amorphous materials at f = 50 Hz <cit.>. It consists of combined analog and digital feedback control loops that produce wave forms close to sinusoidal shape for the secondary voltage U_2 as required by standards<cit.>. Later, the system was adapted to NO and GO electrical steel sheet measurements and extended to measure at higher frequencies. The schematics of the setup is shown in Fig. <ref>. The initial voltage U_ini is provided by a 16 bit wave form generator (WFG) and amplified by a high power amplifier (A) working up to 70 V and 20 A with bandwidth 2.8 kHz. At polarization values J well below saturation, analog feedback control is sufficient for maintaining total harmonic distortion THD = √(U^2-U_1^2)/U of the secondary voltage below 0.1 %. U denotes the root-mean-square (RMS) value of the voltage and U_i the RMS of the i-th harmonic contribution. The secondary voltage U_2nd is collected with an ADC-card that is clock and sampling rate synchronized with the WFG. Synchronization keeps full knowledge of the phase shift φ between U_1st and U_2nd, because the loss P is phase corrected and φ increases significantly at high J values. At polarizations above 0.55 T, the digital feedback is activated and Fourier analysis reveals fundamental and even and odd-order higher harmonics U_i of U_2nd. Scaled with a damping factor, all odd-order harmonic contributions of U_2nd are added under 180 to U_ini. According to standard, U_2nd is considered suitable during the loss measurement, if the corresponding form factorF = U_2nd/U of the RMS value to the average absolute value U is within 1.10 < F < 1.12. A perfect sine wave gives π/√(8). 
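As a small numerical illustration of these waveform criteria (the synthetic signal, sampling parameters and names below are our own, not part of the PTB setup):

import numpy as np

f0, fs, periods = 50.0, 100_000.0, 10
t = np.arange(int(periods * fs / f0)) / fs
u2 = np.sin(2 * np.pi * f0 * t) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t)  # small 3rd harmonic

u_rms = np.sqrt(np.mean(u2 ** 2))              # RMS value U
u_mean = np.mean(np.abs(u2))                   # average absolute value
form_factor = u_rms / u_mean                   # ideal sine: pi/sqrt(8) ~ 1.1107

spectrum = np.fft.rfft(u2) / len(u2)
freqs = np.fft.rfftfreq(len(u2), d=1.0 / fs)
u1_rms = np.sqrt(2) * np.abs(spectrum[np.argmin(np.abs(freqs - f0))])   # RMS of fundamental
thd = np.sqrt(max(u_rms ** 2 - u1_rms ** 2, 0.0)) / u_rms               # THD as defined above
print(form_factor, thd)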
During measurements at PTB, the form factor F deviation is significantly smaller than 0.1 %, and it only exceeds 1.2 % for J values close to magnetic saturation. Data are collected while sweeping the excitation from small to high J in about 1 mT steps. Since contributions of higher harmonics change only gradually, U_2nd of the previous data point is used to generate U_ini of the next one. In recent years, the original setup <cit.> was adapted to routine calibrations of losses in NO and GO materials. Minor modifications included the extension to a higher frequency range, up to 200 Hz for the SST and up to 1 kHz for Epstein frame measurements. Note that attention must be paid to how the measurement frequency f is generated. Its uncertainty u(f) depends on the properties of the WFG, specifically the combination of internal clock rate, clock divider, and sampling rate. It leads to a varying number of data points per one fundamental sine wave.
§.§ Model equation for loss
The specific loss is estimated according to the equations given in the standards <cit.>. Data analysis by fast Fourier transform (FFT) gives the Fourier coefficients in the frequency domain. The specific loss
P_s = (N_1/N_2) · 1/(2 m_eff) · ∑_{i=1}^{n} (1/2)[Re(U_i)·Re(I_i) − Im(U_i)·Im(I_i)]
is calculated by summation over all odd harmonic contributions i. N_1 and N_2 denote the turns of the primary and secondary circuit, and m_eff is the effective magnetic mass. As outlined in <cit.>, values of P_s are obtained for varying J close to, but not exactly at, the target value J_tar. However, interpolation is carried out by a polynomial fit up to 3rd order. Equation <ref> shows that the loss P_s includes all higher harmonic contributions that are not removed by the hybrid control. This is in accordance with the standard.
§.§ Discussion of equivalency of form factor and harmonic distortion
The standard requires a form factor (F) of π/√(8) within 1.2 %. This regulation is based on the measurement capabilities at the time of introduction. Today, better physical properties, like values for the higher harmonic content, are more appropriate. Fig. <ref> shows the form factor as a function of the THD value for only odd harmonic contributions to the secondary signal, i.e. a perfect magnetic hysteresis without DC offset. THD values smaller than 2 % are equivalent to a 1 % form factor deviation as required by the standard <cit.>. During calibrations, a total harmonic distortion factor of less than 0.5 % is considered a sufficiently good value.
§ MEASUREMENT UNCERTAINTIES
§.§ Extended model equation
To estimate measurement uncertainties, equation (<ref>) is modified <cit.> with a correction factor 1/x with
x = β (f/f_tar)^2 (F/F_tar)^2 + (1−β) f/f_tar.
The factor x takes into account the contribution of a non-ideal form factor F and the frequency f to the MU. It is assumed that the loss P_s = P_dyn + P_hyst consists of a dynamic (eddy current) contribution and magnetic domain contributions. The ratio β is P_dyn/P_s. In a first approximation, the dynamic loss P_dyn shows a quadratic and the hysteretic loss P_hyst a linear frequency dependence. In addition, P_dyn depends quadratically on the form factor. Due to a slight deviation of the measured J_meas ≠ J_tar from the target polarization, the loss value P_s(J_tar) is never obtained exactly in one measurement. That adds a correction term P_s(J_tar) = P_s(J_meas)·(J_tar/J_meas)^α to the equation, with α being the generic exponent for the P_s(J) dependence that usually ranges between 1.5 and 2 for NO and GO material <cit.>.
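As a rough numerical illustration, the correction factor x and the (J_tar/J_meas)^α polarization correction introduced above can be combined as in the sketch below. The function, its argument names and the example values are our own; this is not the exact evaluation routine used at PTB.

def corrected_specific_loss(P_meas, J_meas, J_tar, alpha, f, f_tar, F, F_tar, beta):
    x = beta * (f / f_tar) ** 2 * (F / F_tar) ** 2 + (1.0 - beta) * (f / f_tar)
    return P_meas * (J_tar / J_meas) ** alpha / x

# Example: small frequency and form factor deviations, beta = 0.5 (half dynamic loss)
print(corrected_specific_loss(P_meas=1.05, J_meas=1.498, J_tar=1.5, alpha=1.8,
                              f=50.02, f_tar=50.0, F=1.112, F_tar=1.1107, beta=0.5))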
Here, α is estimated from experimental data in a narrow polarization range around J_tar, including 6-7 data points. Non-perfect air flux compensation in the measurement is considered by introducing a small relative deviation factor γ = δM_c/M_c of the mutual inductance correction M_c. According to the standard <cit.>, M_c is compensated mechanically, but it still needs to be considered for the MU analysis. This leads to the correction term
1 − (μ_0 Ĥ · A_t)/(Ĵ · A) · γ
with A = m/(4 l ρ) being the sample cross section and A_t being the effective cross-sectional area of the secondary winding. ρ and m denote the density and mass of the sample, respectively. The magnetic path length l_m is defined in the standard for Epstein frames, SSTs and ring core measurements.
§ SYSTEMATIC INVESTIGATION OF FACTORS REDUCING DATA REPRODUCIBILITY
Next, we demonstrate for GO and NO electrical steel sheets different factors that alter loss data and lead to deviations not covered by MUs. If not mentioned otherwise, data were taken on Epstein samples using the same frame with 100 turns per leg. This way, other influences can be minimized and the data are inter-comparable. The demagnetization and measurement frequency was 50 Hz, and the maximum demagnetization polarization was 1.7 T and 1.9 T for NO and GO material, respectively.
§.§ One time Epstein frame loading
Repeated measurements on one GO Epstein sample loaded once into the Epstein frame are shown in Fig. <ref> for low polarization at 1 T and high values at 1.9 T. The data reproduce excellently, and MUs obtained according to the GUM type B method fully cover the statistical deviation indicated as the σ line in both cases. Additional data are shown in the supplementary material.
§.§ Multiple Epstein frame loading
It is expected that small changes of the position of individual sheets within the Epstein frame have an effect on the loss data, because the magnetic flux that leaves one sheet and penetrates the next at the corner of the strips uses different grain paths. As a rough assumption, all those effects average out over large surface areas; however, the grain size in GO material is up to cm size and this could have an influence on the loss estimate. Therefore, we conducted test measurements and removed the sample after each measurement and put it back for the next one. Note that a specific loading pattern is used, e.g., every fourth strip is placed in the same pile. Measured loss data are shown in Fig. <ref> for 1 T and 1.9 T polarization on NO electrical steel sheets. More data are given in the supplementary material. The scattering for low polarization is larger for reloaded Epstein samples compared to not reloading the frame (Fig. <ref>), but still within the systematic MU estimation. With increasing polarization, the scattering effect is less pronounced. In the case of GO material, we observe reduced loss values compared to Fig. <ref>, as expected, and significantly enhanced scattering.
§.§ Maximum demagnetization polarization
Next, we investigated the influence of the demagnetization process before each measurement on the loss. Fig. <ref> shows loss data at three different polarizations: 1 T, 1.3 T, and 1.9 T as a function of the maximum demagnetization polarization J_demag. Scattering of the loss data is most pronounced for the 1 T and 1.3 T data, but shows signs of saturation for J_demag higher than 1.8 T.
The loss at 1.9 T is not affected by the demagnetization process, because the magnetic domains are fully aligned in the sheet during one full hysteresis loop.
§ SUMMARY AND CONCLUSIONS
The experimental setup used at PTB for loss measurements up to 1 kHz frequency is described in detail. It uses analog and digital feedback control to obtain a sinusoidal waveform in the secondary circuit. Future improvements of the capability should include an extension to higher frequencies above 1 kHz that are requested by industrial stakeholders. Since the phase shift between the primary and secondary circuit increases with higher frequencies, the new setup should include an amplifier with larger power and current output. This cannot be accomplished by modifications of the existing setup. Another drawback of the current setup is its susceptibility to unwanted and dangerous resonances of the power amplifier. A fully digital instead of hybrid feedback control avoids this problem and covers the full catalog of requirements for loss data calibrations.
We furthermore presented a detailed MU analysis based on a systematic model equation and discussed inter-dependencies of model parameters. Experimental results obtained in 50 Hz measurements of NO and GO Epstein samples show excellent agreement between statistical and systematic MU estimation and confirm the MU model analysis.
One of the recurring problems in SST and Epstein calibrations is the conversion factor between the two sets of data, which deviates significantly depending on the material. SST data have higher reproducibility than Epstein data, because the magnetic length is better defined. However, SSTs require large (50 cm x 50 cm) sheet samples. An SST with smaller dimensions could be a reasonable alternative that allows small specimens while keeping the reproducibility of SST data. FFT analysis of the measurement signal allows one to estimate only the contribution of the fundamental to the power loss P_s. This way, the form factor could be replaced in the standard.
Further systematic investigations of loss in Epstein and SST samples should include temperature studies in the range allowed by the standard, (23 ± 5) °C, and the loss dependence on the demagnetization frequency. The latter effect is known especially for GO material as domain refinement, where higher frequencies lead to reduced domain width and smaller magnetic loss.
This research work was partially supported by the 19ENG06 HEFMAG project, which was funded by the EMPIR program, and co-financed by the Participating States and the European Union’s Horizon 2020 research and innovation program.
http://arxiv.org/abs/2311.00716v1
{ "authors": [ "K. Pfnuer", "J. Luedke", "K. Hoffmann", "F. Weickert" ], "categories": [ "physics.ins-det" ], "primary_category": "physics.ins-det", "published": "20231026202443", "title": "High precision calibration setup for loss measurements in electrical steel sheets" }
Jacob R. Goodman and Leonardo J. Colombo
J. Goodman is with Antonio de Nebrija University, Departamento de Informática, Escuela Politécnica Superior, C. de Sta. Cruz de Marcenado, 27, 28015, Madrid, Spain. email: [email protected]
L. Colombo is with the Centre for Automation and Robotics (CSIC-UPM), Ctra. M300 Campo Real, Km 0,200, Arganda del Rey - 28500 Madrid, Spain. email: [email protected]
The authors acknowledge financial support from Grant PID2022-137909NB-C21 funded by MCIN/AEI/10.13039/501100011033.
In this work we study the reduction by a Lie group of symmetries of variational collision avoidance problems of multiple agents evolving on a Riemannian manifold and derive necessary conditions for the reduced extremals. The problem consists of finding non-intersecting trajectories of a given number of agents, among a set of admissible curves, to reach a specified configuration, based on minimizing an energy functional that depends on the velocity, covariant acceleration and an artificial potential function used to prevent collision among the agents.
Keywords: Variational problems on Riemannian manifolds, Collision avoidance, Potential functions, Reduction by symmetries.
§ INTRODUCTION
Dimensionality reduction for large scale systems has become an active problem of interest within the automatic control and robotics communities. In multi-agent systems, guidance and trajectory planning algorithms for coordination, while optimizing qualitative features for the system of multiple robots, are determined by solutions of nonlinear equations which demand high computational costs for their integration. The construction of methods for the reduction of dimensionality permits fast computations for the generation of optimal trajectories in the collision avoidance motion of multi-agent systems. Methods for trajectory tracking and estimation algorithms for the pose and attitude of mechanical systems evolving on Lie groups are commonly employed for improving accuracy in simulations, as well as to avoid singularities, by working with coordinate-free expressions in the associated Lie algebra of the Lie group to describe behaviors in multi-agent systems (i.e., a set of equations depending on an arbitrary choice of the basis for the Lie algebra). More recently, this framework has been used for cooperative transportation <cit.>, <cit.>. Optimization problems on Lie groups have a long history <cit.> and have been applied to many problems in control engineering. In practice, many robotic systems exhibit symmetries that can be exploited to reduce some of the complexities of system models, for instance degrees of freedom. Symmetries in optimal control for systems on Lie groups have been studied in <cit.>, <cit.>, <cit.>, <cit.> among many others, mainly for applications in robotic and aerospace engineering, and in particular, for spacecraft attitude control and underwater vehicles <cit.>.
While most of the applications of symmetry reduction provided in the literature focus on the single agent situation, only a few works studied the relation between multi-agent systems and symmetry reduction (see for instance the early work on the topic <cit.>), in this work we consider symmetry reduction of multi-agent systems in the necessary conditions for optimality obtained via a variational problem on Lie groups, with a decentralized communication topology determined by an undirected graph, i.e., the information between the agents is only shared between nearest neighbors.Riemannian polynomials are smooth and optimal in the sense that they minimize the average square magnitude of some higher-order derivative along the curve. This quantity is often related to the magnitude of the controller in control engineering applications (which itself is related to energy consumption). Moreover, Riemannian polynomials carry a rich geometry with them, which has been studied extensively in the literature (see <cit.> for a detailed account of Riemannian cubics and <cit.> for some results with higher-order Riemannian polynomials).It is often the case that—in addition to interpolating points—there are obstacles or regions in space that need to be avoided. In this case, a typical strategy is to augment the action functional with an artificial potential term that grows large near the obstacles and small away from them (in that sense, the trajectories that minimize the action are expected to avoid the obstacles). This was done for instance in <cit.>, <cit.>, <cit.>, <cit.> where necessary conditions for extrema in obstacle avoidance problems on Riemannian manifolds were derived. In addition to applications to interpolation problems on manifolds and to energy-minimum problems on Lie groups and symmetric spaces endowed with a bi-invariant metric <cit.>, and extended in <cit.>, <cit.> and <cit.> for the collision avoidance task and hybrid systems in <cit.>. Reduction of necessary conditions for the obstacle avoidance problem were studied in <cit.> and sufficient conditions for the problem were studied in <cit.>. In this paper, we build on the previous studies by considering the problem of reduction by a Lie group of symmetries necessary conditions for optimality in the variational collision avoidance problem on Lie groups endowed with a left-invariant metric. Finally, a brief study of the reduction by symmetries of the collision avoidance problem in the case of bi-invariant metrics is considered.§ BACKGROUND ON RIEMANNIAN MANIFOLDS AND GLOBAL ANALYSIS §.§ Background on Riemannian manifoldsLet (Q, < ·, ·>) be an n-dimensional Riemannian manifold, where Q is an n-dimensional smooth manifold and < ·, ·> is a positive-definite symmetric covariant 2-tensor field called the Riemannian metric. That is, to each point q∈ Q we assign a positive-definite inner product <·, ·>_q:T_qQ× T_qQ→ℝ, where T_qQ is the tangent space of Q at q and <·, ·>_q varies smoothly with respect to q. The length of a tangent vector is determined by its norm, defined by v_q=<v_q,v_q>^1/2 with v_q∈ T_qQ. For any p ∈ Q, the Riemannian metric induces an invertible map ·^♭: T_p Q → T_p^∗ Q, called the flat map, defined by X^♭(Y) = <X, Y> for all X, Y ∈ T_p Q. The inverse map ·^♯: T_p^∗ Q → T_p Q, called the sharp map, is similarly defined implicitly by the relation <α^♯, Y> = α(Y) for all α∈ T_p^∗ Q. Let C^∞(Q) and Γ(TQ) denote the spaces of smooth scalar fields and smooth vector fields on Q, respectively. 
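In local coordinates, these musical isomorphisms are simply multiplication by the coordinate matrix of the metric and by its inverse. The following small numerical sketch (the metric matrix is an arbitrary example of ours) verifies the defining relation <α^♯, Y> = α(Y):

import numpy as np

G = np.array([[2.0, 0.3],
              [0.3, 1.0]])          # coordinate matrix of <.,.> at a point p

def flat(X):                        # X^flat = G X (a covector)
    return G @ X

def sharp(alpha):                   # alpha^sharp = G^{-1} alpha (a tangent vector)
    return np.linalg.solve(G, alpha)

X = np.array([1.0, -2.0])
Y = np.array([0.5, 1.5])
alpha = flat(X)
print(sharp(alpha) @ (G @ Y), alpha @ Y)   # both equal <X, Y>_p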
The sharp map provides a map from C^∞(Q) →Γ(TQ) via f(p) = df_p^♯ for all p ∈ Q, where f is called the gradient vector field of f ∈ C^∞(Q). More generally, given a map V: Q ×⋯× Q → (with m copies of Q), we may consider the gradient vector field of V with respect to i^th component as _i V(q_1, …, q_m) =U(q_i), where U(q) = V(q_1, …, q_i-1, q, q_i+1, …, q_m) for all q, q_1, …, q_m ∈ Q.Vector fields are a special case of smooth sections of vector bundles. In particular, given a vector bundle (E, Q, π) with total space E, base space Q, and projection π: E → Q, where E and Q are smooth manifolds, a smooth section is a smooth map X: Q → E such that π∘ X = id_Q, the identity function on Q. We similarly denote the space of smooth sections on (E, Q, π) by Γ(E). A connection on (E, Q, π) is a map ∇: Γ(TQ) ×Γ(E) →Γ(TQ) which is C^∞(Q)-linear in the first argument, -linear in the second argument, and satisfies the product rule ∇_X (fY) = X(f) Y + f ∇_X Y for all f ∈ C^∞(Q),X ∈Γ(TQ),Y ∈Γ(E). The connection plays a role similar to that of the directional derivative in classical real analysis. The operator ∇_X which assigns to every smooth section Y the vector field ∇_XY is called the covariant derivative (of Y) with respect to X.Connections induces a number of important structures on Q, a particularly ubiquitous such structure is the curvature endomorphism, which is a map R: Γ(TQ) ×Γ(TQ) ×Γ(E) →Γ(TQ) defined by R(X,Y)Z := ∇_X∇_YZ-∇_Y∇_XZ-∇_[X,Y]Z for all X, Y ∈Γ(TQ),Z ∈Γ(E).The curvature endomorphism measures the extent to which covariant derivatives commute with one another. We now specialize our attention to affine connections, which are connections on TQ. Let q: I → Q be a smooth curve parameterized by t ∈ I ⊂, and denote the set of smooth vector fields along q by Γ(q). Then for any affine connection ∇ on Q, there exists a unique operator D_t: Γ(q) →Γ(q) (called the covariant derivative along q) which agrees with the covariant derivative ∇_q̇W̃ for any extension W̃ of W to Q. A vector field X ∈Γ(q) is said to be parallel along q if D_t X≡ 0. The covariant derivative allows to define a particularly important family of smooth curves on Q called geodesics, which are defined as the smooth curves γ satisfying D_t γ̇ = 0. Moreover, geodesics induce a map exp_q:T_qQ→ Q called the exponential map defined by exp_q(v) = γ(1), where γ is the unique geodesic verifying γ(0) = q and γ̇(0) = v. In particular, exp_q is a diffeomorphism from some star-shaped neighborhood of 0 ∈ T_q Q to a convex open neighborhood ℬ (called a goedesically convex neighborhood) of q ∈ Q. It is well-known that the Riemannian metric induces a unique torsion-free and metric compatible connection called the Riemannian connection, or the Levi-Civita connection. Along the remainder of this paper, we will assume that ∇ is the Riemannian connection. For additional information on connections and curvature, we refer the reader to <cit.>. When the covariant derivative D_t corresponds to the Levi-Civita connection, geodesics can also be characterized as the critical points of the length functional L(γ) = ∫_0^1 γ̇dt among all unit-speed piece-wise regular curves γ: [a, b] → Q (that is, where there exists a subdivision of [a, b] such that γ is smooth and satisfies γ̇ 0 on each subdivision). If we assume that Q is complete (that is, (Q, d) is a complete metric space), then by the Hopf-Rinow theorem, any two points x and y in Q can be connected by a (not necessarily unique) minimal-length geodesic γ_x,y. 
In this case, the Riemannian distance between x and y can be defined by d(x,y)=∫_0^1d γ_x,y/d s(s)ds.Moreover, if y is contained in a geodesically convex neighborhood of x, we can write the Riemannian distance by means of the Riemannian exponential as d(x,y)=_x^-1y. §.§ Sobolev Spaces of CurvesOne often views finite-dimensional smooth manifolds as spaces which are locally diffeomorphic to ℝ^n for some n ∈. Infinite-dimensional manifolds are defined in much the same way, with ℝ^n being replaced by some infinite-dimensional topological vector space equipped with some additional structure that allows for the notion of smoothness. Common choices include locally convex topological vector spaces, Fréchet spaces, Banach spaces, and Hilbert spaces (in decreasing order of generality), which are known as the model spaces for the manifold. Each type of model space comes with its own advantages and disadvantages, and is often determined by the problem of interest. In this thesis, the most natural choice in model space turns out to be Hilbert spaces (particularly Sobolev spaces).Let I ⊂ℝ be a closed interval and L^2(I, ℝ^n) denote the space of square integrable functions f: I →ℝ^n. That is, f ∈ L^2(I, ℝ^n) if and only if ∫_I f(x)^2 dx < +∞, where · denotes the Euclidean norm on ℝ^n. Equivalently, we could say that f = (f^1, …, f^n) is of class L^2(I, ℝ^n) if and only if f^i is of class L^2(I, ℝ) for all 1 ≤ i ≤ n. L^2 becomes a Hilbert space when equipped with the inner product < f, g> = ∫_I ( f(t) · g(t))dt, where where f · g denotes the "dot product" on ℝ^n.Let k ≥ 1 and consider functions f, φ: [a, b] →ℝ^n such that d^j/dt^jφ(a) = d^j/dt^jφ(b) = 0 for all 0 ≤ j ≤ k. It follows via integration by parts that∫_a^b (f(t) ·d^k/dt^kφ(t))dt = (-1)^k ∫_a^b (d^k/dt^k f(t) ·φ(t) )dt,The left-hand side of the above equation still makes sense if we assume f only to be integrable on [a, b]. If there exists an integrable function g: [a, b] →ℝ^n such that ∫_a^b (f(t) ·d^k/dt^kφ(t))dt = (-1)^k ∫_a^b (g(t) ·φ(t) )dtfor all φ:[a, b] →ℝ^n which vanish at the endpoints along with its first k derivatives, we refer to g as the k^th weak derivative of f, and often denote g = d^k/dt^k f when there is no confusion. We define the Sobolev spaceH^k(I, ℝ^n) := {f: I →ℝ^n| fisC^k-1 and hask^th weak derivative inL^2(I, ℝ^n) }.It is well-known that H^k(I, ℝ^n) becomes a Hilbert space when equipped with the inner product <f, g>_H^k = ∑_j=0^k∫_I (d^j/dt^jf(t) ·d^j/dt^jg(t))dt = ∑_j=0^k < d^j/dt^jf, d^j/dt^jg>_L^2.The inner product < ·, ·>_H^k induces the norm f _H^k = [∑_j=0^k∫_I d^j/dt^jf(t)_ℝ^n^2 dt]^1/2,where ·_ℝ^n is the Euclidean norm on ℝ^n. An alternative way to construct H^k(I, ℝ^n) is to define it as the completion of the space of smooth functions C^∞(I, ℝ^n) with respect to the norm ·_H^k. It can be shown that the two characterizations of H^k(I, ℝ^n) are indeed equivalent. Let Q be a finite-dimensional smooth manifold. We define H^k(I, Q) as the set of curves q: I → H such that for all coordinate charts (U, φ) on Q such that q(I') ⊂ U for some I' ⊂ I, the chart representation φ∘ q: I' →ℝ^(Q) is of Sobolev class H^k(I', ℝ^(Q)). It can be seen that H^k(I, Q) is an infinite-dimensional smooth manifold modelled on H^k(I, ℝ^(Q)) (<cit.>). It should be noted that H^k(I, Q) is not in general a Hilbert space, as it not generally a vector space and thus has no well-defined inner product structure. 
However, the tangent space T_q H^k(I, Q) of Sobolev class H^k vector fields along a curve q on Q may be identified with H^k(I, ℝR^(Q)) and hence is a Hilbert space.Suppose that (Q, g) is a finite-dimensional complete Riemannian manifold and q ∈ H^k(I, Q). We equip the vector space Γ(q) of smooth vector fields along q with the normX_H^k_g := [∑_j=0^k ∫_I D_t^j X ^2_g dt ]^1/2where D_t^j denotes the j^th covariant derivative along q with respect to the Levi-Civita connection (by convention, we take D_t^0 X = X), and ·_g denotes the norm induced by the Riemannian metric g. Denote the completion of Γ(q) under ·_H^k_g by H^k_g(q). Consider an orthonormal basis of parallel vector fields {ξ_i}∈Γ(q) with respect to g. It follows that if X = X^i ξ_i, then D_t^j X = X^i (j)ξ_i for all j ∈. If we let A · B := A^i B^i for all A, B ∈Γ(q), where A = A^i ξ_i and B = B^i ξ_i, then we haveX_H^k_g := [∑_j=0^k ∫_I (X^(j)· X^(j)) dt ]^1/2,from which it follows that H^k_g(I, Q) can be identified with H^k(I, ℝ^n). Hence any complete Riemannian metric on Q induces a Hilbert structure associated to Γ(q) that coincides with that of T_q H^k(I, Q). Moreover, it follows that the inner product <X, Y>_H^k_g := ∑_j=0^k ∫_I g(D_t^j X, D_t^j Y) dt, varies smoothly across the tangent spaces, and hence is a Riemannian metric on H^k(I, Q)[For any Hilbert manifold M modelled on a Hilbert space H, any Riemannian structure placed on M is locally equivalent to the Hilbert structure on H. In particular, for any local coordinate chart (φ, U) on M and point x ∈ U, there exists a unique bounded, positive-definite, self-adjoint operator g(x) on H such that <X, Y>_M = <g(x)φ_∗(X), φ_∗(Y)>_H for all X, Y ∈ T_x M. Moreover, the map x ↦ g(x) is smooth on U.].In applications, especially when ultimately interested in solutions to some order 2k ODE, it is often the case that you wish to consider curves of some specified regularity which satisfy a set of boundary values. For example, curves of Sobolev class H^2k regularity whose first k (covariant) derivatives (including positions as k=0) satisfy some specified boundary conditions. For that reason, we define the path space:Ω^(2k) = {q ∈ H^2k([a, b], Q) | q(a) = q_a,q(b) = q_b,D_t^j q̇(a) = ξ^j_a andD_t^j q̇(b) = ξ^j_bforj=0,1,…, k-1 }where q_a, q_b ∈ Q and ξ^j_a ∈ T_q_a Q and ξ^j_b ∈ T_q_b Q for all 1 ≤ j ≤ k-1. It is easy to see that Ω^(2k) is the inverse image of ((q_a, q_b), (ξ^0_a, ξ^0_b), … (ξ^k-1_a, ξ^k-1_b) ) under the mapF: H^2k([a, b], Q) → TQ^k given by F(q) = ((q(a), q(b)), (q̇(a), q̇(b)), …, (D^k-1q̇(a), D^k-1q̇(b)) ). Moreover, it can be shown that F is a smooth submersion, from which it follows by the implicit function theorem that Ω^(2k) is a closed submanifold of H^2k([0, 1], Q), and hence inherits its Hilbert structure. The tangent space T_q Ω^(2k) can be indentified with the space X ∈H^2k_g(q) of vector fields in H^2k_g(q) which vanish at the endpoints along with their first k covariant derivatives. Hence we may equip Ω^(2k) with the Riemannian structure (<ref>). We also consider the special case Ω^(1) = {q ∈ H^1([a, b], Q)| q(a) = q_a,q(b) = q_b}which is itself a closed submanifold of H^1([a, b], Q), and is of particular importance for geodesics. We will occasionally use the notation Ω^(1), [a, b]_q_a, q_b(Q) (and similar for higher-order path spaces) when it is necessary to refer to the boundary conditions, underlying manifold, and interval of integration. 
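For intuition, in the Euclidean case Q = ℝ^n the norm ·_H^k above can be approximated by replacing the (weak) derivatives with finite differences and the integrals with Riemann sums. The sketch below does this for a sampled curve; the discretisation scheme and the example curve are our own illustrative choices and are not part of the construction above.

```python
import numpy as np

def sobolev_norm(samples, dt, k):
    """Approximate ||f||_{H^k} of a curve f: [a, b] -> R^n sampled on a uniform grid.

    samples: array of shape (N, n) whose rows are f(t_i); dt: grid spacing; k: Sobolev order.
    """
    total = 0.0
    deriv = samples
    for _ in range(k + 1):
        total += np.sum(np.linalg.norm(deriv, axis=1) ** 2) * dt  # Riemann sum for the integral of ||d^j f / dt^j||^2
        deriv = np.gradient(deriv, dt, axis=0)                    # next finite-difference derivative
    return np.sqrt(total)

# Example: the unit circle f(t) = (cos t, sin t) on [0, 2*pi]; each of the three
# terms of the H^2 norm contributes 2*pi, so the output is close to sqrt(6*pi).
t = np.linspace(0.0, 2.0 * np.pi, 2001)
f = np.stack([np.cos(t), np.sin(t)], axis=1)
print(sobolev_norm(f, t[1] - t[0], k=2))
```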
§.§ Riemannian geometry on Lie Groups Let G be a Lie group with Lie algebra := T_e G, where e is the identity element of G. The left-translation map L: G × G → G provides a group action of G on itself under the relation L_gh := gh for all g, h ∈ G. Given any inner-product < ·, ·>_ on , left-translation provides us with a Riemannian metric < ·, ·> on G via the relation:< X_g, Y_g > := < g^-1 X_g, g^-1 Y_g >_,for all g ∈ G, X_g, Y_g ∈ T_g G. Such a Riemannian metric is called left-invariant, and it follows immediately that there is a one-to-one correspondence between left-invariant Riemannian metrics on G and inner products on , and that L_g: G → G is an isometry for all g ∈ G by construction. Any Lie group equipped with a left-invariant metric is complete as a Riemannian manifold. In the remainder of the section, we assume that G is equipped with a left-invariant Riemannian metric.In the following L_g^∗ stands for the push-forward of L_g, which is well-defined because L_g: G → G is a diffeomorphism for all g ∈ G. We call a vector field X on G left-invariant if L_g∗ X = X for all g ∈ G, and we denote the set of all left-invariant vector fields on G by 𝔛_L(G). It is well-known that the map ϕ: →𝔛_L(G) defined by ϕ(ξ)(g) = L_g∗ξ for all ξ∈, g ∈ G is an isomorphism between vector spaces. This isomorphism allows us to construct an operator ∇^: ×→ defined by:∇^_ξη := ∇_ϕ(ξ)ϕ(η)(e),for all ξ, η∈, where ∇ is the Levi-Civita connection on G corresponding to the left-invariant Riemannian metric < ·, ·>. Although ∇^ is not a connection, we shall refer to it as the Riemannian -connection corresponding to ∇ because of the similar properties that it satisfies:∇^: ×→ is -bilinear, and for all ξ, η, σ∈, the following relations hold:(1) ∇_ξ^η - ∇_η^ξ = [ ξ, η]_, (2) < ∇_σ^ξ, η> + <ξ, ∇_σ^η> = 0. We may consider the Riemannian -connection as an operator ∇^: C^∞([a, b], )× C^∞([a, b], ) → C^∞([a, b], ) in a natural way,namely, if ξ, η∈ C^∞([a, b], ), we can write (∇^_ξη)(t) := ∇^_ξ(t)η(t) for all t ∈ [a, b]. With this notation, Lemma <ref> works identically if we replace ξ, η, σ∈ with ξ, η, σ∈ C^∞([a, b], ).Given a basis {A_i} of , we may write any vector field X on G as X = X^i ϕ(A_i), where X^i: G →, where we have adopted the Einstein sum convention. If X is a vector field along some smooth curve g: [a, b] → G, then we may equivalently write X = X^i g A_i, where now X^i: [a, b] → and g A_i =: L_g A_i. We denote Ẋ = Ẋ^i A_i, which may be written in a coordinate-free fashion via Ẋ(t) = d/dt(L_g(t)^-1 ∗X(t) ). We now wish to understand how the Levi-Civita connection ∇ along a curve is related to the Riemannian -connection ∇^. This relation is summarized in the following result <cit.>. Consider a Lie group G with Lie algebraand left-invariant Levi-Civita connection ∇. Let g: [a,b] → G be a smooth curve and X a smooth vector field along g. Then the following relation holds for all t ∈ [a, b]:D_t X(t) = g(t)(Ẋ(t) + ∇_ξ^η(t) ).The Riemannian -connection satisfies:∇_ξ^η = 1/2( [ξ, η]_ - ^†_ξη - ^†_ηξ), for all ξ, η∈.§ THE COLLISION AVOIDANCE TASK We now switch our attention to multi-agent systems and the collision avoidance task. Consider a set 𝒱 consisting of s≥ 2 agents on Q, a complete and connected Riemannian manifold. The configuration of each agent at any given time is determined by the element q_i(t)∈ Q, i=1,…,s. The neighboring relationships are described by an undirected time-invariant graph 𝔾 = (𝒱, ℰ) with edge set ℰ⊆𝒱×𝒱. The set of neighbors 𝒩_i for the agent i∈𝒱 is given by 𝒩_i={j∈𝒱:(i,j)∈ℰ}. 
An agent i∈𝒱 can measure its Riemannian distance from other agents in the subset 𝒩_i ⊆𝒱. The assumptions that Q is complete and connected and 𝔾 is undirected and time-invariant will remain for the remainder of the paper.For i=1,...,s, and points ξ_i, η_i ∈ TQ, consider the sets Ω_i := Ω_ξ_i, η_i^(2), [a, b] and the functional J_ on Ω := Ω_1×···×Ω_s defined by:J_(q_1,q_2,…,q_s)= 1/2∑_i=1^s ∫_a^b (|| D q̇_i/dt(t)||^2 + ∑_(i, j) ∈ℰV_ij(q_i(t),q_j(t)))dt.where V_ij:Q× Q→ℝ is a smooth non-negative function called an artificial potential satisfying the symmetry relations V_ij = V_ji and V_ij(p, x) = V_ij(x, p) for all (i,j)∈ℰ and (p,x) ∈ Q× Q. The sets Q^s := Q × Q ×… Q (s times) and Ω are (complete) Riemannian manifolds when equipped with the respective product metrics. Moreover, Ω is an infinite-dimensional Hilbert manifold with model space (H^2([a, b], ℝ^n))^s, which may be identified with the path space Ω^(2) of Q^s. With this identification, it is clear that J_ may be identified with J: Ω^(2)→, where V: Q^s → is given by V(q) = V(q^1, …, q^s) := ∑_j∈𝒩_iV_ij(q_i(t),q_j(t)). It follows that the analysis done in <cit.> also applies to J_ and its minimizers. In particular, the necessary conditions for optimality take the following form: A curve q = (q_1,...,q_s) ∈Ω is a critical point of J_ if and only if for each i ∈𝒱, q_i ∈Ω_i is smooth and for all t ∈ [0, T] satisfiesD^3_t q̇_i + R(D_t q̇_i,q̇_i)q̇_i=-∑_j∈𝒩_i_1 V_ij(q_i(t),q_j(t)),where R denotes the curvature endomorphism on Q and _1 V_ij denotes the gradient vector field of V_ij with respect to the first argument.§ REDUCTION BY SYMMETRY ON LIE GROUPS In this section, we reduce the necessary conditions (<ref>) by symmetry on a Lie group G equipped with a left-invariant Riemannian metric. This process amounts to left-translating the necessary conditions on G to some equivalent set of equations on the Lie algebra , together with a reconstruction equation. In Section <ref>, we will additionally consider the special case that G admits a bi-invariant metric, and show how the gradient vector field of the artificial potential can be calculated explicitly in the collision avoidance problem. Reduced collision avoidance extremals for rigid body motions in (3) are considered as an example.§.§ Reduction of Necessary Conditions We wish to obtain Euler-Poincaré equations corresponding to (<ref>) under the following assumption:G1: Q = G is a connected Lie group endowed with a left-invariant Riemannian metric and corresponding Levi-Civita connection ∇.To do this, we first must understand the forms that D_t^3 ġ and R(D_t ġ, ġ)ġ take when left-translated to curves in the Lie algebra . This is summarized in the following lemma: Let g: [a, b] → G be a smooth curve and set ξ^(0) := g^-1ġ. Recursively define ξ^(i) = ξ̇^(i-1) + ∇^_ξ^(0)ξ^(i-1) for i = 1,2. Then,D^3_t ġ = g(ξ̇^(2) + ∇^_ξ^(0)ξ^(2)),R(D_t ġ, ġ)ġ = g R(ξ^(1), ξ^(0))ξ^(0).It's clear from Lemma <ref> and Lemma <ref> that ξ^(i) = g^-1D_t^i ġ for i = 0,1,2. One more application of Lemma <ref> to D_t^2 ġ = g ξ^(2) yields equation (<ref>). Equation (<ref>) follows immediately by the fact that R is a left-invariant tensor field and observing that R(D_t ġ, ġ)ġ = R(gξ^(1), gξ^(0))gξ^(0) = g R(ξ^(1), ξ^(0))ξ^(0).The quantities calculated in Lemma <ref> may be substituted directly into equation (<ref>). If g were a Riemannian cubic polynomial (i.e., in the case that V ≡ 0), we would immediately obtain reduced equations on the Lie algebra . However, we still must handle the artificial potential. 
We may left-translate the gradient potential directly to the Lie algebra. That is, we write V(g) = L_g^∗∘ L_g^-1 ∗ V(g), and substitute this directly into (<ref>) along with equations (<ref>) and (<ref>) to obtain the reduced equations. First, we show that the Riemannian distance d is invariant under left-translation, which follows immediately by the left-invariance of the metric: d(gq, gp) = d(q, p) for all g,q, p ∈ G. Since G is complete as a Riemannian manifold, there exists a geodesic γ: [0, 1] → G which minimizes the length functional L(c) = ∫_0^1 ċ(t) dt among all smooth curves c:[0, 1] → G satisfying c(0) = p,c(1) = q. Moreover, we have d(p, q) = L(γ) by equation (<ref>). By left-invariance of the metric, we then have that d(p, q) = L(γ) = L(gγ) ≥ d(gp, gq), since in particular g γ is a smooth curve such that gγ(0) = gp,gγ(1) = gq. On the other hand, there exists some geodesic γ^∗ such that L(γ^∗) = d(gp, gq), and so d(gp, gq) = L(γ^∗) = L(g^-1γ^∗) ≥ d(p, q). It follows that d(p, q) = d(gp, gq).For the purposes of collision avoidance, the particular sub-potentials V_jk will take the form V_jk(g_j, g_k) = f(d^2(g_j, g_k)), where d^2: G × G → is the square of the Riemannian distance on G. That is, we have V_jk(hg_j, hg_k) = V_jk(g_j, g_k) for all h ∈ G, (j,k) ∈ℰ, from which it is clear that V is left-invariant with respect to left-translation on G^s. Observe that g_j^-1_1 V_jk(g_j, g_k) = _1 V_jk(e, g_j^-1 g_k). This motivates the definition h_jk = g_j^-1 g_k, from which we find that ḣ_jk = -g_j^-1ġ_j g_j^-1 g_k + g_j^-1ġ_k = -ξ_j h_jk + h_jkξ_k, where ξ_i := g_i^-1ġ_i for all i = 1, …, s. This leads to the following result: Suppose that Q = G^s, where G satisfies assumption G1. Then g = (g_1, …, g_s) ∈ C^∞([a,b], G^s) satisfies (<ref>) if and only if ξ^(0)_j := g^-1_j ġ_j and h_jk := g^-1_j g_k solve:ḣ_jk = -ξ^(0)_j h_jk + h_jkξ_k^(0),ξ̇^(i)_j= ξ^(i+1)_j - ∇^_ξ^(0)_jξ^(i)_j,ξ̇^(2)_j + ∇_ξ^(0)_j^ξ^(2)_j + R(ξ^(1)_j, ξ^(0)_j )ξ^(0)_j= -∑_r ∈𝒩_j_1 V_jr(e, h_jr),for i=0,1, and for all j = 1, …, s and k ∈𝒩_j. Proposition <ref> can be considered as a special case of Euler-Poincaré reduction for second order Lagrangians (that is, Lagrangians defined on the second order tangent bundle T^2 G). This was studied on Lie groups in <cit.>, where the corresponding higher order Euler-Poincaré equations were obtained. Using the Riemannian formalism, we bypass the necessity to work with higher-order tangent bundles, and obtain equations evolving Lie algebrarather than its dual ^∗. Also Proposition <ref>can be seen as the second-order extension of the collision avoidance problem on Lie groups considered in <cit.>.§ REDUCTION ON LIE GROUPS WITH BI-INVARIANT METRICS §.§ Bi-invariant metrics Now we wish to discuss another important class of Riemannian metrics on a Lie group, the so-called bi-invariant (or -invariant) metrics. These are the Riemannian metrics < ·, ·> on G which are both left- and right-invariant. Unlike left- and right-invariant metrics, not every Lie group G admits a bi-invariant metric. The following result from <cit.> provides necessary and sufficient conditions for the existence of a bi-invariant metric. A connected Lie group admits a bi-invariant metric if and only if it is isomorphic to the Cartesian product of a compact Lie group and a finite-dimensional vector space. Moreover, such a metric is unique up to scalar multiplication. Despite this limitation, many important examples of Lie groups satisfy the conditions. 
In particular, (3) is a compact Lie group, and ℝ^3 is a finite-dimensional vector space. Bi-invariant metrics have many nice properties that greatly simplify calculations in practice. First, it is clear that for all g ∈ G,X, Y ∈ T_gG, we have <X, Y> = <gXg^-1, gYg^-1> = <_g X, _g Y> where :G×𝔤→𝔤 is the adjoint operator (this is why such metrics are also called -invariant). Let ξ, η, σ∈. Then, <η, σ> = <_(tξ)η, _(tξ)σ>. Differentiating at t =0, we see that 0 = <_ξη, σ> + <η, _ξσ>, which implies that <^†_ξη, σ> = <-_ξη, σ>. Hence, ^†_ξη = -_ξη = [η, ξ] for all ξ, η∈.Consider a Lie group G equipped with a bi-invariant metric. Let ∇ be the Levi-Civita connection and ∇^ be the corresponding Riemannian -connection. Then: * ∇^_ξη = 1/2 [ξ, η],* R(ξ, η)σ = 1/4[[ξ, η],σ],for all ξ, η, σ∈. §.§ Reduction on Lie Groups with Bi-invariant Metrics Necessary conditions for optimality can be simplified dramatically in the case that the metric <·, ·> is bi-invariant. In particular, we view equipping G with such a bi-invariant metric as a strengthening of assumption G1:G2 : G is a connected Lie group equipped with a bi-invariant Riemannian metric and corresponding Levi-Civita connection ∇. Using Lemma <ref>, we obtain the following corollary to Proposition <ref>.Suppose that G^s satisfies assumption G2. Then g_j ∈Ω satisfies that ξ^(0)_j := g^-1_j ġ_j and h_jk := g^-1_j g_k solve:ḣ_jk = -ξ^(0)_j h_jk + h_jkξ_k^(0) ⃛ξ_j + [ξ_j, ξ̈_j] = -∑_r ∈𝒩_j_1 V_jr(e, h_jr). From Proposition <ref>, it follows that if we take ξ^(0) = ξ, then g ∈Ω solves (<ref>) if and only if:ḣ_jk = -ξ^(0)_j h_jk + h_jkξ_k^(0),fori=0,1,ξ̇^(i)_j= ξ^(i+1)_j - 1/2[ξ_j, ξ_j^(i)] ξ̇_̇j̇^(2) + 1/2[ξ_j, ξ_j^(2)] - 1/4[[ξ_j^(1), ξ_j ], ξ_j ] =-∑_r ∈𝒩_j_1 V_jr(e, h_jr).Now observe that, from Lemmas <ref> and <ref>:ξ̇_̇j̇ = ξ_j^(1) - 1/2[ξ_j, ξ_j ] = ξ_j^(1),ξ̈_̈j̈ = ξ̇_̇j̇^(1) = ξ_j^(2) - 1/2[ξ_j, ξ̇_j ],⃛ξ_j = ξ̇_j^(2) - 1/2[ξ_j, ξ̈_j].Substituting these into the necessary conditions, we obtain (<ref>) and (<ref>).
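As a usage sketch of the reduced equations in the bi-invariant case, the code below integrates the third-order equation of the corollary above for a single agent on 𝔰𝔬(3) ≅ ℝ³, where the Lie bracket becomes the cross product, together with the reconstruction equation ġ = g ξ̂, using an explicit Euler step. This is not an implementation from the paper: the step size, the initial data, and the zero placeholder standing in for the coupling term -∑_r ∇_1 V_jr(e, h_jr) are hypothetical choices; with the placeholder set to zero the scheme simply traces out a Riemannian cubic on SO(3).

```python
import numpy as np

def hat(v):
    """Identify xi in R^3 with hat(xi) in so(3); the bracket becomes the cross product."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(w, dt):
    """Rodrigues' formula for the matrix exponential exp(dt * hat(w)) in SO(3)."""
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = hat(w / np.linalg.norm(w))
    return np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)

def potential_term(xi, xi1, xi2, g):
    """Placeholder for -sum_r grad_1 V_jr(e, h_jr); returning zero recovers a Riemannian cubic."""
    return np.zeros(3)

# Explicit Euler integration of xi''' = -[xi, xi''] + potential_term together with
# the reconstruction g' = g hat(xi). Step size and initial data are illustrative.
dt, steps = 1e-3, 5000
xi  = np.array([0.0, 0.0, 1.0])   # xi(0) = g(0)^{-1} g'(0)
xi1 = np.array([0.1, 0.0, 0.0])   # first derivative of xi at t = 0
xi2 = np.zeros(3)                 # second derivative of xi at t = 0
g = np.eye(3)
for _ in range(steps):
    xi3 = -np.cross(xi, xi2) + potential_term(xi, xi1, xi2, g)
    g = g @ exp_so3(xi, dt)
    xi, xi1, xi2 = xi + dt * xi1, xi1 + dt * xi2, xi2 + dt * xi3
print(g)                          # reconstructed configuration g(T) in SO(3)
```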
http://arxiv.org/abs/2310.18389v1
{ "authors": [ "Jacob R. Goodman", "Leonardo J. Colombo" ], "categories": [ "math.OC", "cs.SY", "eess.SY", "math.DG", "math.DS" ], "primary_category": "math.OC", "published": "20231027132220", "title": "Reduction of Necessary Conditions for the Variational Collision Avoidance Problem" }
Extending finite free actions of surfaces Rubén A. Hidalgo January 14, 2024 ========================================= This paper provides statistical sample complexity bounds for score-matching and its applications in causal discovery. We demonstrate that accurate estimation of the score function is achievable by training a standard deep ReLU neural network using stochastic gradient descent. We establish bounds on the error rate of recovering causal relationships using the score-matching-based causal discovery method of <cit.>, assuming a sufficiently good estimation of the score function. Finally, we analyze the upper bound of score-matching estimation within the score-based generative modeling, which has been applied for causal discovery but is also of independent interest within the domain of generative models. § INTRODUCTIONScore matching <cit.>, an alternative to the maximum likelihood principle for unnormalized probability density models with intractable partition functions, has recently emerged as a new state-of-the-art approach that leverages machine learning for scalable and accurate causal discovery from observational data <cit.>. However, the theoretical analysis and guarantees in the finite sample regime are underexplored for causal discovery even beyond score-matching approaches.Contributions:In this work, we give the first sample complexity error bounds for score-matching using deep ReLU neural networks. With this, we obtain the first upper bound on the error rate of the method proposed by <cit.> to learn the topological ordering of a causal model from observational data. Thanks to the wide applicability of score-matching in machine learning, we also discuss applications to the setting of score-based generative modeling. Our main contributions are:* We provide the analysis of sample complexity bound for the problem of score function estimation in causal discovery for non-linear additive Gaussian noise models which has a convergence rate of log n/n with respect to the number of data. Importantly, our results require only mild additional assumptions, namely that the non-linear relationships among the causal variables are bounded and that the score function is Lipschitz. To the best of our knowledge, this is the first work to provide sampling complexity bounds for this problem.* We provide the first analysis of the state-of-the-art topological ordering-based causal discovery method SCORE <cit.> and provide a correctness guarantee for the obtained topological order. Our results demonstrate that the algorithm's error rate converges linearly with respect to the number of training data. Additionally, we establish a connection between the algorithm's error rate and the average second derivative (curvature) of the non-linear relationships among the causal variables, discussing the impact of the causal model's inherent characteristics on the algorithm's error rate in identification.* We present sample complexity bounds for the score function estimation problem in the standard score-based generative modeling method, ScoreSDE <cit.>. In contrast to previous results <cit.>, our bounds do not rely on the assumption of low-dimensional input data, and we extend the applicability of the model from a specific encoder-decoder network architecture to a general deep ReLU neural network. =-1High-level motivation and background:Causal discovery and causal inference refer to the process of inferring causation from data and reasoning about the effect of interventions. 
They are highly relevant in fields such as economics <cit.>, biology <cit.>, and healthcare <cit.>. In particular, some causal discovery methods aim to recover the causal structure of a problem solely based on observational data. The causal structure is typically represented as a directed acyclic graph (DAG), where each node is associated with a random variable, and each edge represents a causal mechanism between two variables. Learning such a model from data is known to be NP-hard <cit.>. Traditional approaches involve testing for conditional independence between variables or optimizing goodness-of-fit measures to search the space of possible DAGs. However, these greedy combinatorial optimization methods are computationally expensive and difficult to extend to high-dimensional settings.=-1An alternative approach is to reframe the combinatorial search problem as a topological ordering task <cit.>, where nodes are ordered from leaf to root. This can significantly speed up the search process in the DAG space. Once a topological ordering is found, a feature selection algorithm can be used to prune potential causal relations between variables, resulting in a DAG. Recently, <cit.> proposed the SCORE algorithm, which utilizes the Jacobian of the score function to perform topological ordering. By identifying which elements of the Jacobian matrix of the score function remain constant across all data points, leaf nodes can be iteratively identified and removed. This approach provides a systematic way to obtain the topological ordering and infer the causal relations within the entire model. This method has achieved state-of-the-art results on multiple tasks <cit.> and has been extended to improve scalability <cit.> also using diffusion models <cit.> and to non-Gaussian noise <cit.>. Interestingly, these approaches separate the concerns of statistical estimation of the score function from the causal assumption used to infer the graph (e.g., non-linear mechanisms and additive Gaussian noise). This opens an opportunity to study the convergence properties of these algorithms in the finite data regime, which is generally under-explored in the causal discovery literature. In fact, if we had error bounds on the score estimate without additional complications from causal considerations, we could study their downstream effect when the score is used for causal discovery.Unfortunately, this is far from trivial as the theoretical research on score matching lags behind its empirical success and progress would have far-reaching implications. Even beyond causal discovery, error bounds on the estimation of the score functions would be useful for score-based generative modeling (SGM) <cit.>. These have achieved state-of-the-art performance in various tasks, including image generation <cit.> and audio synthesis <cit.>. There has been significant research investigating whether accurate score estimation implies that score-based generative modeling provably converges to the true data distribution in realistic settings <cit.>. However, the error bound of score function estimation in the context of score-based generative modeling remains an unresolved issue due to the non-convex training dynamics of neural network optimization. Notations:We use the shorthand [n]:= {1,2,…, n } for a positive integer n. We denote by a(n) ≲ b(n): there exists a positive constant c independent of n such that a(n) ⩽ c b(n). The Gaussian distribution is 𝒩(μ, σ^2) with the μ mean and the σ^2 variance. 
We follow the standard Bachmann–Landau notation in complexity theory e.g., 𝒪, o, Ω, and Θ for order notation. Due to space constraints, a detailed notation is deferred to <ref>. § PRELIMINARIESAs this paper concerns topics in score matching estimation, diffusion models, neural network theory, and causal discovery, we first introduce the background and problem setting of our work.§.§ Score matching For a probability density function p(x), we call the score function the gradient of the log density with respect to the data x. To estimate the score function ∇log p(x), we can minimize the ℓ_2 loss over the function space 𝒮.min_s∈𝒮𝔼_p [s(x) - ∇log p(x) ^2] ,ŝ= min_s∈𝒮𝔼_p [s(x) - ∇log p(x) ^2] . The corresponding objective function to be minimized is the expected squared error between the true score function and the neural network:J_ESM(s, p(x)) = 𝔼_p(x)[1/2s(x) - ∂log p(x)/∂x^2] ,We refer to this formulation as explicit score matching (ESM).Denoising score matching (DSM) is proposed by  <cit.> to convert the inference of the score function in ESM into the inference of the random noise and avoid the computing of the second derivative. For the sampled data x, x̂ is obtained by adding unit Gaussian noise to x. i.e. x̂ = x + ϵ,ϵ∼𝒩(0, σ^2 I).We can derive the conditional probability distribution and its score function:p(x̂|x) = 1/(2π)^d/2σ^dexp(-x - x̂^2/2σ^2) ,∂log p(x̂|x)/∂x̂ = x - x̂/σ^2 .Then the DSM is defined by:J_DSM(s, p(x,x̂)) = 𝔼_p(x,x̂)[1/2s(x̂ - ∂log p(x̂|x)/∂x̂) ^2] = 𝔼_p(x,x̂)[1/2s(x̂) - x - x̂/σ^2^2] . <cit.> have proven that minimizing DSM is equivalent to minimizing ESM and does not depend on the particular form of p(x̂|x) or p(x).§.§ Neural network and function space In this work, we consider a standard depth-L width-m fully connected ReLU neural network. Formally, we define a DNN with the output s_l(x) in each layers_l(x) ={xl=0 , ϕ(⟨W_l, s_l-1(x) ⟩) 1≤ l ≤ L-1,⟨W_L, s_L-1(x) ⟩l=L , .where the input is x∈ℝ^d, the output is s_L( x) ∈ℝ^d, the weights of the neural networks are W_1 ∈ℝ^m × d, W_l ∈ℝ^m× m, l = 2,… ,L-1 and W_L ∈ℝ^d × m. The neural network parameters formulate the tuple of weight matrices W := {W_i }_i=1^L ∈{ℝ^m× d× (ℝ^m× m)^L-2×ℝ^d× m}. The 𝒮 denotes the function space of <ref>.The ϕ = max(0,x) is the ReLU activation function. According to the property ϕ(x) = xϕ^'(x) of ReLU, we have s_l = D_lW_ls_l-1, where D_l is a diagonal matrix defined as below.For l∈ [L-1] and k ∈ [m], the diagonal sign matrix D_l is defined as: (D_l)_k,k = 1{ (W_ls_l-1)_k ≥ 0 }.Initialization: We make the standard random Gaussian initialization [W_l]_i,j∼𝒩(0,2/m) for l ∈ [L-1] and [W_L]_i,j∼𝒩(0,1/d). §.§ Causal discoveryIn this paper, we follow the setting in <cit.> and consider the following causal model, a random variable x∈ℝ^d is generated by:x^(i) = f_i(PA_i(x)) + ϵ_i ,i ∈ [d],where f_i is a non-linear function, ϵ_i ∼𝒩(0, σ_i^2) and PA_i(x) represent the set of parents of x^(i) in x. 
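As a concrete illustration of this data-generating process, the sketch below draws observational samples from a small non-linear additive Gaussian noise model of the form above. The three-node graph, the bounded mechanisms (tanh and sin), and the noise scales are hypothetical choices made only for illustration.

```python
import numpy as np

# A minimal sketch of the data-generating process x^(i) = f_i(PA_i(x)) + eps_i with
# Gaussian noise. The three-node graph 1 -> 2, 1 -> 3, 2 -> 3, the bounded
# non-linearities and the noise scales are illustrative choices only.
rng = np.random.default_rng(0)

def sample_scm(n):
    x = np.zeros((n, 3))
    x[:, 0] = rng.normal(0.0, 1.0, size=n)                              # root node: x^(1) = eps_1
    x[:, 1] = np.tanh(2.0 * x[:, 0]) + rng.normal(0.0, 0.5, size=n)     # f_2 bounded, PA_2 = {1}
    x[:, 2] = np.sin(x[:, 0] + x[:, 1]) + rng.normal(0.0, 0.5, size=n)  # f_3 bounded, PA_3 = {1, 2}
    return x

data = sample_scm(n=10_000)   # observational samples on which the score estimator is trained
```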
Then we can write the probability distribution function of x as:p(x) = ∏_i=1^d p(x^(i)|PA_i(x)) .For such non-linear additive Gaussian noise models <ref>, <cit.> provides <ref> to learn the topological order by score matching as follows: §.§ Score-based generative modeling (SGM) In this section, we give a brief overview of SGM following <cit.>.§.§.§ Score-based generative modeling with SDEsForward process: The success of previous score-based generative modeling methods relies on perturbing data using multiple noise scales, and the proposal of the diffusion model is to expand upon this concept by incorporating an infinite number of noise scales. This will result in the evolution of perturbed data distributions as the noise intensity increases, which will be modeled through a stochastic differential equation (SDE).dx_t = f(x_t, t)d t + g_t dw,x_0∼ p_0 .The expression describes x_t, where the standard Wiener process (also known as Brownian motion) is denoted as w, the drift coefficient of x_t is represented by a vector-valued function called f, and the diffusion coefficient of x_t is denoted as g_t, a scalar function. In this context, we will refer to the probability density of x_t as p_t, and the transition kernel from x_s to x_t as p_st(x_t|x_s), where 0 ≤ s < t ≤ T. The Ornstein–Uhlenbeck (OU) process is a Gaussian process that is both time-homogeneous and a Markov process. It is distinct in that its stationary distribution is equivalent to the standard Gaussian distribution γ^d on ℝ^d. Reverse process: We can obtain samples of x_0∼ p_0^SDE by reversing the process starting from samples of x_T∼ p_T^SDE. An important finding is that the reversal of a diffusion process is a diffusion process as well. It operates in reverse time and is described by the reverse-time SDE:dx_t = (f(x_t, t) - g_t^2∇_xlog p_t(x_t))d t + g_t dw .When time is reversed from T to 0, w is a standard Wiener process with an infinitesimal negative timestep of d t. The reverse diffusion process can be derived from <ref> once the score of each marginal distribution, ∇log p_t(x_t), is known for all t. By simulating the reverse diffusion process, we can obtain samples from p_0^SDE.Some special settings: In order to simplify the writing of symbols and proofs, in this work we choose that f(x_t, t) = -1/2x_t and g(t) = 1 which has been widely employed in prior research <cit.> for theoretical analysis in Ornstein–Uhlenbeck process in score-based generative modeling.§.§.§ Score matching in diffusion model We aim to minimize the equivalent objective for score matching:min_s∈𝒮∫_0^T w(t) 𝔼_x_0∼ p_0[𝔼_x_t∼ p_0t (x_t|x_0)[∇_x_tlog p_0t (x_t|x_0) - s(x_t,t)_2^2 ]]dt . The transition kernel has an analytical form ∇_x_tlog p_0t (x_t|x_0) = -x_t-α(t)x_0/h(t), where α(t) = e^-t/2 and h(t) = 1- α(t)^2 = 1 - e^-t.The empirical score matching loss is:min_s∈𝒮ℒ̂(s) = 1/n∑_i=1^nℓ(x_(i);s) ,where the loss function ℓ(x_(i);s) is defined as:ℓ(x_(i);s) = 1/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x_(i))[∇_x_tlog p_0t (x_t|x_0 = x_(i)) - s(x_t,t)_2^2 ]dt .Here we choose w(t) = 1/T-t_0, and we define the expected loss ℒ(·) = 𝔼_x∼ p_0[ℒ̂(·)].§ THEORETICAL RESULTS FOR CAUSAL DISCOVERYIn this section, we state the main theoretical results of this work. We present the assumptions on non-linear additive Gaussian noise causal models in <ref>. Then, we present the sample complexity bound for score matching in causal discovery in <ref>. In <ref> we provide the upper bound on the error rate for causal discovery using the <ref>. 
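For concreteness, below is a minimal sketch of the main loop of the score-matching-based ordering procedure analysed in this section. The score-Jacobian estimator is left as a placeholder (in our analysis it is a deep ReLU network trained by denoising score matching; the original method uses a kernel-based estimator); the decision rule is the one discussed above: among the remaining variables, the node whose diagonal score-Jacobian entry has the smallest empirical variance across samples is treated as a leaf and removed. After the full order is obtained, spurious edges can be pruned, for instance by sparse regression.

```python
import numpy as np

def estimate_score_jacobian_diag(data):
    """Placeholder: return an (n, d') array whose (k, j) entry estimates the
    diagonal Jacobian element d s_j(x_k) / d x_k^(j) over the d' remaining
    variables, e.g. from a score model fitted on `data`."""
    raise NotImplementedError

def topological_order(data):
    """Iteratively peel off leaves: a leaf satisfies Var_x[d s_j(x)/d x^(j)] = 0,
    so at each step the node with the smallest empirical variance is removed."""
    remaining = list(range(data.shape[1]))
    order = []                                                        # filled leaf-first
    while remaining:
        jac_diag = estimate_score_jacobian_diag(data[:, remaining])   # re-estimate on remaining nodes
        leaf_idx = int(np.argmin(jac_diag.var(axis=0)))               # variance over samples, per node
        order.append(remaining.pop(leaf_idx))
    return order[::-1]                                                # reverse so parents precede children
```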
The full proofs of <ref> and <ref> are deferred to <ref> and <ref>, respectively. §.§ Assumptions[Lipschitz property of score function]The score function ∇log p(·) is 1-Lipschitz.Remark: The Lipschitz property of the score function is a standard assumption commonly used in the existing literature <cit.>. However, for causal discovery, this assumption limits the family of mechanisms that we can cover.[Structural assumptions of causal model]Let p be the probability density function of a random variable x defined via a non-linear additive Gaussian noise model <ref>. Then, ∀ i ∈ [d] the non-linear function is bounded, | f_i| ≤ C_i. And ∀ i,j∈ [d], if j is one of the parents of i, i.e. x^(j)⇒ x^(i), then there exist a constant C_m that satisfy:𝔼_p(x)( ∂^2 f_i(PA_i(x))/∂ x^(j) 2^2 ) ≥ C_m σ_i^2 .Remark: This is a novel assumption that we introduce, relating the average second derivative of a mechanism (related to its curvature) to the noise variance of the child variable. This will play a crucial yet intuitive role in our error bound: identifiability is easier when there is sufficient non-linearity of a mechanism with respect to the noise of the child variable. Consider the example of a quadratic mechanism, where the second derivative is the leading constant of the polynomial. If this constant is small (e.g., close to zero), the mechanism is almost linear and we may expect that the causal model should be harder to identify. Similarly, if the child variable has a very large variance, one may expect it to be more difficult to distinguish cause from effect, as the causal effect of the parent is small compared to the noise of the child. According to <ref>, we can derive the identified ability margin for leaf nodes and parent nodes. If a non-linear additive Gaussian noise model <ref> satisfies <ref>. Then, ∀ i,j∈ [d], we have:i is a leaf⇒Var(∂ s_i(x)/∂ x^(i)) = 0, j is not a leaf⇒Var(∂ s_j(x)/∂ x^(j))≥ C_m.This lemma intuitively relates our identifiability margin with the decision rule of SCORE <cit.> to identify leaves. Non-leaf nodes should have the variance of their score Jacobian sufficiently far from zero. As one may expect, we will see in Theorem <ref> that the closer C_m is to zero, the more likely it is that the result of the algorithm will be incorrect given finite samples. §.§ Error bound for score matching in causal discovery We are now ready to state the main result of the score matching in causal discovery. We provide the sample complexity bounds of the explicit score matching <ref> that using denoising score matching <ref> in <ref> for non-linear additive Gaussian noise models <ref>.Given a DNN defined by <ref> trained by SGD for minimizing empirical denoising score matching objective. Suppose <ref> and <ref>are satisfied. For any ε∈ (0,1) and δ∈ (0,1), if σ_i σ and C_i/σ_i 1 , ∀ i ∈ [d]. Then with probability at least 1- 2δ - 4exp(-d/32) - 2Lexp(-Ω(m)) - 1/nd over the randomness of initialization W, noise ϵ and ϵ_i, it holds that:J_ESM(ŝ, p(x)) ≲σ^2 dlog nd/nε^2log𝒩_c(1/n, 𝒮)/δ+1/n+dε^2 ,where the 𝒩_c(1/n, 𝒮) is the covering number of the function space 𝒮 for deep ReLU neural network. Remark: 1): To the best of our knowledge, our results present the first upper bound on the explicit sampling complexity of score matching for topological ordering <ref> in non-linear additive Gaussian noise causal models. 
This novel contribution provides valuable insights into the efficiency and effectiveness of utilizing score matching for topological ordering in non-linear additive Gaussian noise causal models.2): By choosing ε^2 = 1/√(n), the bound is modified to J_ESM(ŝ, p(x)) ≲σ^2 dlog nd/√(n)log𝒩_c(1/n, 𝒮)/δ. This expression demonstrates that the ℓ_2 estimation error converges at a rate of log n/√(n) when the sample size n is significantly larger than the number of nodes d.3): The bound is also related to the number of nodes d, the variance of the noise in denoising score matching σ and causal model σ_i, the covering number of the function space 𝒩_c(1/n, 𝒮), and the upper bound of the data C_d. If these quantities increase, it is expected that the error of explicit score matching will also increase. This is due to the increased difficulty in accurately estimating the score function. 4): <ref> is rooted in the generalization by sampling complexity bound. It is independent of the specific training algorithm used. The results are broadly applicable and can be seamlessly extended to encompass larger batch GD.Next, we will establish a connection between score matching and the precise identification of the topological ordering. §.§ Error bound for topological order in causal discoveryBased on the previously mentioned sample complexity bound of score matching, we establish an upper bound on the error rate of the topological ordering of the causal model obtained through <ref>.Given a DNN defined by <ref> trained by SGD with a step size η = 𝒪(1/poly(n,L)m log^2 m) for minimizing empirical score matching objective. Then under <ref>, for m ≥poly(n,L), with probability at least: 1-exp(-Θ(d))-(L+1)exp(-Θ(m))- 2nexp(-nC_m^2 d^2/2^4L+5(log m)^2 (m^2+d^2) ) ,over the randomness of initialization W and training data that <ref> can completely recover the correct topological order of the non-linear additive Gaussian noise model. Remark: 1): The foundation of <ref> rests upon <ref>, it can be seen as an embodiment of applying the upper bound of score matching for causal discovery. To the best of our knowledge, our results provide the first upper bound on the error rate of topological ordering in non-linear additive Gaussian noise causal models using <ref>.2): Considering that when md and L1 the probability degenerates to:1-Θ(e^-m)- 2nexp(-Θ(nC_m^2 /(log m)^2) ) .The first term of the error arises due to the initialization of the neural network. As for the second term of the error, if the number of training data n satisfies n/log n≳ (log m)^2, then it will have that 2nexp(-Θ(nC_m^2 /(log m)^2) ) ≲ 1. This implies that the second term of the error probability exhibits linear convergence towards 0 when n is sufficiently large. Therefore, when the sample size n/log n≳ (log m)^2, the contribution of the second term to the full error becomes negligible. 3): The theorem reveals that a smaller value of the constant C_m increases the probability of algorithm failure. This observation further confirms our previous statement that a smaller average second derivative of the nonlinear function makes it more challenging to identify the causal relationship in the model. Additionally, when the causal relationship is linear, our theorem does not provide any guarantee for the performance of <ref>.4): Consider the two variables case. If a child node is almost a deterministic function of its parents, the constant C_m can take on arbitrarily large values, according to <ref>. 
Consequently, the second term of the error probability, 2nexp(-Θ(nC_m^2 /(log m)^2) ), tends to zero. This implies that the errors in <ref> are primarily caused by the random initialization of the neural network. The identifiability of this setting is consistent with classical results <cit.>. Intuitively, as long as the non-linearity is chosen independently of the noise of the parent variable[ <cit.> have formalized independence of distribution and function via an information geometric orthogonality condition that refers to a reference distribution (e.g., Gaussian)], the application of the non-linearity will increase the distance to the reference distribution of the parent variable (in our case Gaussian). Note that for the derivative in Assumption <ref> to be defined, the parent node cannot be fully deterministic. 5): Instead of focusing on the kernel regime, we directly cover the more general neural network training. The kernel approach of <cit.> is a special case of our analysis. The basis of Theorem 2 lies in the proof of SGD/GD convergence of the neural network, These convergence outcomes also apply to BatchGD, as demonstrated in <cit.>. Hence, Theorem 2 can naturally be expanded to incorporate Batch GD as well.Proof sketch: The proof of <ref> can be divided into three steps. The first and most important step is to derive the upper bound of ∂ s_i(x)/∂ x^(i). Here, we utilize the properties of deep ReLU neural networks to derive the distribution relationship between features of adjacent layers, then accumulate them and combine it with the properties of Gaussian initialization, yielding the upper bound for ∂ s_i(x)/∂ x^(i). The second step is to use the upper bound of ∂ s_i(x)/∂ x^(i) obtained in the first step combined with the concentration inequality to derive the upper bound of the error of Var(∂ s_i(x)/∂ x^(i)). The third step is to compare the upper bound in the second step with <ref> to obtain the probability of successfully selecting leaf nodes in each step. After accumulation, we can obtain the probability that <ref> can completely recover the correct topological order of the non-linear additive Gaussian noise model.§ THEORETICAL RESULTS FOR SCORE-BASED GENERATIVE MODELING (SGM) In this section, we present the additional assumption required for the theoretical analysis of score matching in score-based generative modeling. Then, we provide the sample complexity bound associated with score matching in this framework. The full proof in this section is deferred to <ref>.[Bounded data]We assume that the input data satisfy x_2≤ C_d,x∼ p_0.Remark: Bounded data is standard in deep learning theory and also commonly used in practice <cit.>. Given a DNN defined by <ref> trained by SGD for minimizing empirical denoising score matching loss <ref>. Suppose <ref> and <ref> are satisfied. For any ε∈ (0,1) and δ∈ (0,1). Then with probability at least 1- 2δ - 2Lexp(-Ω(m)) over the randomness of initialization W and noise ϵ in denoising score matching, it holds:1/T-t_0∫_t_0^T∇log p_t(·) - ŝ(·,t)_ℓ^2(p_t)^2 dt ≲1/nε^2(d(T-log(t_0))/T-t_0+C_d^2)log𝒩_c(1/n, 𝒮)/δ+1/n + dε^2 ,where the 𝒩_c(1/n, 𝒮) is the covering number of the function space 𝒮 for deep ReLU neural network. Remark: 1): <ref> and <ref> study similar problems between causal discovery and score-based generative modeling and share similar techniques drawn from statistical learning theory and deep learning theory. 
These two domains are connected by a common theoretical foundation centered on the upper bound of score matching.2): Our result extends the results for score matching in diffusion models presented in <cit.> which rested on the assumption of low-dimensional data structures, employing this to decompose the score function and engineer specialized network architectures for the derivation of the upper bound. Our work takes a distinct route. Our conclusions are based on the general deep ReLU neural network instead of a specific encoder-decoder network and do not rely on the assumptions of low-dimensional data used in <cit.>. We harness the inherent traits and conventional techniques of standard deep ReLU networks to directly deduce the upper error bound. This broader scope allows for a more comprehensive understanding of the implications and applicability of score-based generative modeling in a wider range of scenarios.3): Similar to <ref>, by choosing ε^2 = 1/√(n), we can obtain the best bound 1/T-t_0∫_t_0^T∇log p_t(·) - ŝ(·,t)_ℓ^2(p_t)^2 dt ≲1/√(n)(d(T-log(t_0))/T-t_0+C_d^2)log𝒩_c(1/n, 𝒮)/δ. This expression demonstrates that the ℓ_2 estimation error converges at a rate of 1/√(n) when the sample size n is significantly larger than the dimensionality d and time steps T.4): The bound is also related to the data dimension d, the variance of the noise in denoising score matching σ, the covering number of the function space 𝒩_c(1/n, 𝒮), and the upper bound of the data C_d. If these quantities increase, it is expected that the error of explicit score matching will also increase. This is due to the increased difficulty in accurately estimating the score function. 5): When t_0=0, the theorem lacks meaning. However, when T ≫ t_01, the bound simplifies to d+C_d^2/√(n)log𝒩_c(1/n, 𝒮)/δ. This indicates that when T is sufficiently large, the loss estimated by the score function in the diffusion model becomes independent of time steps T.6): Similar to <ref>, the result of <ref> is also broadly applicable and can be seamlessly extended to encompass larger batch GD.§ NUMERICAL EVIDENCEWe conducted a series of experiments to validate the theoretical findings presented in the paper. We took inspiration from the code provided in<cit.> and employed the structural Hamming distance (SHD) between the generated output and the actual causal graph to assess the outcomes. The ensuing experimental outcomes for SHD, vary across causal model sizes d, sample sizes n, and C_m. The experimental results are shown in <ref>Analyzing the experimental outcomes, we find a notable pattern: higher values of C_m, augmented sample sizes n, and reduced model size d all contribute to the performance of <ref> which is consistent with the insights from <ref>. § RELATED WORK Score matching: Score Matching was initially introduced by <cit.> and extended to energy-based models by <cit.>. Subsequently, <cit.> proposed denoising score matching, which transforms the estimation of the score function for the original distribution into an estimation for the noise distribution, effectively avoiding the need for second derivative computations. Other methods, such as sliced score matching <cit.>, denoising likelihood score matching <cit.>, and kernel-based estimators, have also been proposed for score matching. The relationship between score matching and Fisher information <cit.>, as well as Langevin dynamics <cit.>, has been explored. 
On the theoretical side, <cit.> introduced the concept of "blindness" in score matching, while <cit.> compared the efficiency of maximum likelihood and score matching, although their results primarily focus on exponential family distributions. Our paper, for the first time, analyzes the sample complexity bounds of the score function estimating in causal inference. Causal discovery: The application of score methods for causal inference for linear additive models began with <cit.>, which proposed a causal structure recovery method based on topological ordering from the precision matrix (equivalent to the score in that setting). Under certain noise variance assumptions, their method can reliably recover the DAG in polynomial time and sample complexity. =-1In recent years, there have been numerous algorithms developed for causal inference in non-linear additive models. GraNDAG <cit.> aims to maximize the likelihood of the observed data under this model and enforces a continuous constraint to ensure the acyclicity of the causal graph <cit.> proposed a novel approach for causal inference which utilize score matching algorithms as a foundation for topological ordering and then employ sparse regression techniques to prune the DAG. Subsequently, <cit.> extended the method to non-Gaussian noise, <cit.> proposed to use diffusion models to fit the score function, and <cit.> proposed a new scalable score-based preliminary neighbor search techniques. Although advances have been achieved in leveraging machine learning for causal discovery, there is generally a lack of further research on error bounds. Other studies concentrate on broader non-parametric models but depend on various assumptions like faithfulness, restricted faithfulness, or the sparsest Markov representation <cit.>. These approaches employ conditional independence tests and construct a graph that aligns with the identified conditional independence relations <cit.>.=-1 Theoretical analysis of score-based generative modeling: Existing work mainly focuses on two fundamental questions: "How do diffusion models utilize the learned score functions to estimate the data distribution?" <cit.> and "Can neural networks effectively approximate and learn score functions? What are the convergence rate and bounds on the sample complexity?" <cit.>.Specifically, <cit.> and <cit.> studied the convergence guarantees of diffusion models under the assumptions that the score estimator is accurate under the ℓ_1 and ℓ_2 norms. Concurrently <cit.> and <cit.> extended previous results to distributions with bounded moments. <cit.> studied the distribution estimation guarantees of diffusion models for low-dimensional manifold data under the assumption that the score estimator is accurate under the ℓ_1 or ℓ_2 norms.However, these theoretical results rely on the assumption that the score function is accurately estimated, while the estimation of the score function is largely untouched due to the non-convex training dynamics. Recently, <cit.> provided the first sample complexity bounds for score function estimation in diffusion models. However, their result is based on the assumption that the data distribution is supported on a low-dimensional linear subspace and they use a specialized Encoder-Decoder network instead of a general deep neural network. As a result, a complete theoretical picture of score-based generative modeling is still lacking. 
§ CONCLUSION AND LIMITATIONS In this work, we investigate the sample complexity error bounds of Score Matching using deep ReLU neural networks under two different problem settings: causal discovery and score-based generative modeling. We provide a sample complexity analysis for the estimation of the score function in the context of causal discovery for nonlinear additive Gaussian noise models, with a convergence rate of log n/n. Furthermore, we extend the sample complexity bounds for the estimation of the score function in the ScoreSDE method to general data and achieve a convergence rate of 1/n. Additionally, we provide an upper bound on the error rate of the state-of-the-art causal discovery method SCORE <cit.>, showing that the error rate of this algorithm converges linearly with respect to the number of training data.A core limitation of this work is limiting our results to the Gaussian noise assumption. In fact, non-linear mechanisms with additive non-gaussian noise are also identifiable under mild additional assumptions <cit.> and <cit.> already extended the score-matching approach of <cit.> to that setting. Relaxing this assumption would also allow us to apply our bounds to interesting corner cases, such as linear non-gaussian <cit.>, and non-gaussian deterministic causal relations <cit.>. It may be possible for this assumption to be relaxed in future work, but we argue that the added challenge, the significant difference in algorithms, and the standalone importance of the non-linear Gaussian case justify our focus.In addition, we make other assumptions that limit the general applicability of our bounds. In particular, the assumption of the Lipschitz property for the score function imposes a strong constraint on the model space. Further investigating the relationship between the noise, the properties of the nonlinear functions in the causal model <ref>, and the resulting Lipschitz continuity of the score function would be an interesting extension of this work.§ ACKNOWLEDGEMENTSWe are thankful to the reviewers for providing constructive feedback and Kun Zhang and Dominik Janzing for helpful discussion on the special case of deterministic children. This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043). This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. Francesco Locatello did not contribute to this work at Amazon. Corresponding author: Zhenyu Zhu.abbrvnat§ APPENDIX INTRODUCTIONThe Appendix is organized as follows: * In  <ref>, we provide a summary of the symbols and notations used throughout this paper.* In  <ref>, we provide some background to some of the content covered in this paper.* In <ref>, we present several relevant lemmas that are essential to the proofs in this paper.* In <ref>, we provide the proof of <ref>.* In <ref>, we provide the proof of <ref>.* In <ref>, we provide the proof of <ref>.* In <ref>, we provide the proof of <ref>.* In <ref>, we discuss the <ref>, the Lipschitz property of score function.* Finally, in <ref>, we discuss the broader impacts of this paper. § SYMBOLS AND NOTATION In the paper, vectors are indicated with bold small letters, and matrices with bold capital letters. To facilitate the understanding of our work, we include some core symbols and notation in <ref>.§ MORE BACKGROUNDS §.§ Covering number The basic idea of covering number is to approximate a function space with an infinite number of elements by a finite number of elements. 
It is used to describe how many reference elements (or reference subsets) are needed to "cover" all elements (or subsets) of a given metric space. It is defined as follows: we assume there exist m = m(ϵ) elements f_1, …, f_m such that for any f ∈ ℱ, ∃ i ∈ { 1, …, m } such that d(f, f_i) ≤ ϵ. The minimal possible number m(ϵ) is the covering number of ℱ at precision ϵ. In learning theory, the covering number can be used to bound the Rademacher complexity <cit.>, which in turn controls generalization. §.§ More background on <ref> The main source of inspiration for <cit.> in designing <ref> is the following lemma: Let p be the probability density function of a random variable x defined via a non-linear additive Gaussian noise model <ref>, and let s(x) = ∇log p(x) be the associated score function. Then, ∀ j ∈ [d], we have:
* j is a leaf ⇔ ∀ x, ∂ s_j(x)/∂ x^(j) = c, with c ∈ ℝ independent of x, i.e., Var(∂ s_j(x)/∂ x^(j)) = 0.
* If j is a leaf, then i is a parent of j ⇔ s_j(x) depends on x^(i), i.e., Var(∂ s_j(x)/∂ x^(i)) ≠ 0.
<ref> reveals an important property of the non-linear additive Gaussian noise model: leaf nodes (and only leaf nodes) have the property that the associated diagonal element of the score's Jacobian is a constant. Therefore, by repeating this test and always removing the identified leaves, we can estimate a full topological order. This procedure is summarized in <ref>. § RELEVANT LEMMAS For any ε ∈ (0,1) and any target 1-Lipschitz function s̃ defined on [0,1]^d with s̃(0) = 0, the architecture yields an approximation s ∈ 𝒮 satisfying ‖s - s̃‖_∞ ≤ ε. The configuration of the network architecture is: ‖s_l‖_∞ ≤ √(d), l ∈ [L]; ‖W_l‖_∞ ≤ 𝒪(1), l ∈ [L]; L = 𝒪(log(1/ε) + d); m = 𝒪(1/ε^d); ∑_l=1^L ‖W_l‖_0 ≤ 𝒪((1/ε^d)(log(1/ε) + d)). Let W be an N × n matrix whose entries are independent standard normal random variables. Then for every t ≥ 0, with probability at least 1 - 2exp(-t^2/2), one has: s(W)_max ≤ √(N) + √(n) + t, where s(W)_max represents the largest singular value of W. If a causal model <ref> satisfies <ref>, then with probability at least 1 - 1/(n^2 d) we have: ‖x‖_2^2 ≤ ∑_i=1^d (C_i + 2σ_i√(log nd))^2. First, we can derive that: ‖x‖_2^2 = ∑_i=1^d (x^(i))^2 = ∑_i=1^d (f_i + ϵ_i)^2 ≤ ∑_i=1^d (C_i + |ϵ_i|)^2. Since ϵ_i ∼ 𝒩(0, σ_i^2), according to the tail bound of the Gaussian distribution, with probability at least 1 - exp(-t_i^2/2σ_i^2) we have |ϵ_i| ≤ t_i. Thus: ‖x‖_2^2 ≤ ∑_i=1^d (C_i + t_i)^2, with probability at least 1 - ∑_i=1^d exp(-t_i^2/2σ_i^2). Choosing t_i = 2σ_i√(log nd), we have: ‖x‖_2^2 ≤ ∑_i=1^d (C_i + 2σ_i√(log nd))^2, with probability at least 1 - 1/(n^2 d). § PROOF OF <REF> According to <ref>, we can derive that: log p(x) = ∑_i=1^d log p(x^(i)|PA_i(x)) = -1/2 ∑_i=1^d ((x^(i) - f_i(PA_i(x)))/σ_i)^2 - 1/2 ∑_i=1^d log(2πσ_i^2). Then: s_j(x) = (f_j(PA_j(x)) - x^(j))/σ_j^2 + ∑_i ∈ CH_j(x) (∂ f_i(PA_i(x))/∂ x^(j)) ϵ_i/σ_i^2. If j is a leaf: ∂ s_j(x)/∂ x^(j) = -1/σ_j^2, and Var(∂ s_j(x)/∂ x^(j)) = 0. If j is not a leaf: ∂ s_j(x)/∂ x^(j) = -1/σ_j^2 + ∑_i ∈ CH_j(x) (∂^2 f_i(PA_i(x))/∂ (x^(j))^2) ϵ_i/σ_i^2, where PA_i(x) represents the set of parents of x^(i) in x. Then, according to the independence of the ϵ_i: Var(∂ s_j(x)/∂ x^(j)) = ∑_i ∈ CH_j(x) Var((∂^2 f_i(PA_i(x))/∂ (x^(j))^2) ϵ_i/σ_i^2) ≥ Var((∂^2 f_i(PA_i(x))/∂ (x^(j))^2) ϵ_i/σ_i^2) ∀ i ∈ CH_j(x) = 𝔼_p(x)[(∂^2 f_i(PA_i(x))/∂ (x^(j))^2)^2] Var(ϵ_i/σ_i^2) ∀ i ∈ CH_j(x) ≥ C_m .
Combine <ref>, which concludes the proof.§ PROOF OF THE ERROR BOUND OF SCORE FUNCTION ESTIMATE FOR THE CAUSAL MODEL (<REF>)Firstly, we use oracle inequality to decompose J_DSM(ŝ, p(x)), for any a ∈ (0,1) and a fixed function s, we have: J_DSM(ŝ, p(x)) = J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)) + (1+a)Ĵ_DSM(ŝ, p(x)) =J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)) + (1+a)inf_s∈𝒮Ĵ_DSM(s, p(x))≤ J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x))+ (1+a) (Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x))+(1+a)J_DSM(s, p(x))) = (J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)))+ (1+a)(Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x))) + (1+a)^2 J_DSM(s, p(x)) .First termFirstly, we define that: j_DSM(s, x, p(x)) = 𝔼_x̂∼ p(x̂|x)s(x̂) - ∂log p(x̂|x)/∂x̂_2^2 . For any s∈𝒮, we have:j_DSM(s, x, p(x)) = 𝔼_x̂∼ p(x̂|x)s(x̂) - ∂log p(x̂|x)/∂x̂_2^2≤ 2 𝔼_x̂∼ p(x̂|x)(s(x̂)_2^2+∂log p(x̂|x)/∂x̂_2^2)= 2 𝔼_x̂∼ p(x̂|x)(s(x̂)_2^2+x - x̂/σ^2_2^2). For the first part, recall that:x̂ = x + ϵ,ϵ∼𝒩(0, σ^2 I) . Then we have: x̂ - x/σ_2^2 ∼χ^2(d) . According to the Bernstein's inequality <cit.> and choose t = 1/2, we have: ℙ(| x̂ - x/σ_2^2/d-1| ≥1/2)≤ 2exp(-d/32) . Then we have: ℙ( x̂ - x_2 ≥σ√(3d/2)) = ℙ( x̂ - x_2^2 ≥3σ^2 d/2)= ℙ( x̂ - x/σ_2^2 /d≥3 /2) ≤ℙ(| x̂ - x/σ_2^2/d-1| ≥1/2)≤ 2exp(-d/32) . By <ref>, we have: x̂_2≤x̂ - x_2 + x_2≤σ√(3d/2) + √(∑_i=1^d (C_i + 2σ_i√(log nd))^2) , with probability at least 1- 2exp(-d/32)-1/n^2 d over the randomness of noise ϵ and ϵ_i.Then by <ref> and <cit.>[Lemma C.1]: 2 𝔼_x̂∼ p(x̂|x)s(x̂)_2^2 ≲σ^2 d + ∑_i=1^d (C_i + 2σ_i√(log nd))^2 . with probability at least 1-2exp(-d/32) - Lexp(-Ω(m))-1/n^2 d over the randomness of initialization W, noise ϵ and ϵ_i.For the second part: 2 𝔼_x̂∼ p(x̂|x)x - x̂/σ^2_2^2 = 2 𝔼_ϵ∼𝒩(0, σ^2 I)ϵ/σ^2_2^2 = 2 𝔼_ϵ' ∼𝒩(0, I)ϵ' _2^2 = 2 𝔼_ϵ”∼χ^2(d)ϵ” = 2d . Combine <ref>, we have: j_DSM(s, x, p(x))≤ 2 𝔼_x̂∼ p(x̂|x)(s(x̂)_2^2+x - x̂/σ^2_2^2) ≲ (σ^2+2)d + ∑_i=1^d (C_i + 2σ_i√(log nd))^2 , with probability at least 1-2exp(-d/32) - Lexp(-Ω(m))-1/n^2 d over the randomness of initialization W, noise ϵ and ϵ_i.According to the Bernstein-type concentration inequality <cit.>[Lemma 15], for δ∈ (0,1), a≤ 1 and τ >0, we have: J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)) ≲1+3/a/2n((σ^2+2)d + ∑_i=1^d (C_i + 2σ_i√(log nd))^2)log𝒩_c(τ, 𝒮)/δ+(2+a)τ , with probability at least 1- δ - 2exp(-d/32) - Lexp(-Ω(m))-1/n d over the randomness of initialization W, noise ϵ and ϵ_i. Second termAccording to the Bernstein-type concentration inequality <cit.>[Lemma 15] and <ref>, for δ∈ (0,1), a ≤ 1, τ >0 and a fixed function s, , we have: Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x)) ≲1+3/a/2n((σ^2+2)d + ∑_i=1^d (C_i + 2σ_i√(log nd))^2)log1/δ+(2+a)τ , with probability at least 1- δ - 2exp(-d/32) - Lexp(-Ω(m))-1/n d over the randomness of initialization W, noise ϵ and ϵ_i. Third termWe can derive that: J_DSM(s, p(x)) = J_ESM(s, p(x)) + J_DSM(s, p(x)) - J_ESM(s, p(x)) . According to <ref>, since the error term is invariant with respect to translations on ∇log p(·) and the homogeneity of the ReLU neural network, we can omit ∇log p(0) = 0 and rescale bound for the input data, for any ε∈ (0,1), there exists an approximation function s satisfying ∇log p(·) - s(·) _∞≤ε, then we have: J_ESM(s, p(x)) ≤dε^2/2 , with probability at least 1 -1/n d over the randomness of noise ϵ_i and satisfy the configuration of network architecture in <ref>. According to <cit.>, we have: J_DSM(s, p(x)) - J_ESM(s, p(x)) = 1/2𝔼_x𝔼_x̂∼ϕ (x|x )[∇_x̂logϕ (x|x )_2^2 ] - 1/2∇log p(·)_ℓ^2(p)^2. which is an absolute value that does not depend on s. 
So we can define that: E_1 1/2𝔼_x𝔼_x̂∼ϕ (x|x )[∇_x̂logϕ (x|x )_2^2 ] - 1/2∇log p(·)_ℓ^2(p)^2. So if we choose s is the approximation function that <ref> provide, then we have: J_DSM(s, p(x)) ≤dε^2/2 +E_1.Putting things togetherCombine all three terms, we have: J_DSM(ŝ, p(x))≤(J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)))+ (1+a)(Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x))) + (1+a)^2 J_DSM(s, p(x))≲(J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)))+ (1+a)(Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x))) + (1+a)^2 (dε^2/2 +E_1) = (J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)))+ (1+a)(Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x))) + (1+a)^2 dε^2/2 +(2a+a^2) E_1 + E_1 . Then: J_ESM(ŝ, p(x)) = J_DSM(ŝ, p(x)) - E_1≲(J_DSM(ŝ, p(x)) - (1+a)Ĵ_DSM(ŝ, p(x)))+ (1+a)(Ĵ_DSM(s, p(x)) - (1+a)J_DSM(s, p(x))) + (1+a)^2 dε^2/2 +(2a+a^2) E_1≲1+3/a/2n((σ^2+2)d + ∑_i=1^d (C_i + 2σ_i√(log nd))^2)log𝒩_c(τ, 𝒮)/δ+(2+a)τ +(1+a)(1+3/a/2n((σ^2+2)d + ∑_i=1^d (C_i + 2σ_i√(log nd))^2)log1/δ+(2+a)τ) + (1+a)^2 dε^2/2 +(2a+a^2) E_1 , with probability at least 1- 2δ - 4exp(-d/32) - 2Lexp(-Ω(m))-1/nd over the randomness of initialization W, noise ϵ and ϵ_i.Let a = ε^2, τ = 1/n, σ_i σ and C_i/σ_i 1 , ∀ i ∈ [d]. Then we have: J_ESM(ŝ, p(x)) ≲σ^2 dlog nd/nε^2log𝒩_c(1/n, 𝒮)/δ+1/n+dε^2 , with probability at least 1- 2δ - 4exp(-d/32) - 2Lexp(-Ω(m)) - 1/nd over the randomness of initialization W, noise ϵ and ϵ_i. § PROOF OF THE ERROR BOUND OF TOPOLOGICAL ORDERING USING THE SCORE ALGORITHM IN A CAUSAL MODEL (<REF>) We set the weights of the neural network after training are W. i.e. s(x) = W_Lϕ(W_L-1⋯ϕ(W_1x)⋯) . According to the standard chain rule and <cit.>[Lemma 3], we have: ∇_xs(x)^⊤=W_LD_L-1W_L-1⋯D_1W_1 . Let v_i be a one-hot vector with length d, with the i-th element is 1 and the rest of the elements are 0, then we have: ∂ s_i(x)/∂ x^(i)= v_i W_LD_L-1W_L-1⋯D_1W_1 v_i≤v_i _2 W_LD_L-1W_L-1⋯D_1 _2 W_1 _2 v_i _2= W_LD_L-1W_L-1⋯D_1 _2 W_1_2 = (W_LD_L-1W_L-1⋯D_1 _2+W_LD_L-1W_L-1⋯D_1 -W_LD_L-1W_L-1⋯D_1 _2)×(W_1_2 + W_1 - W_1 _2) (T_1+T_2)×(T_3+T_4) .Firstly, we focus on T_1. Define t_l(v) = D_lW_l ⋯D_1v, then for any vector v that satisfy v_2 = 1: W_LD_L-1W_L-1⋯D_1 v_2 = W_L t_L-1(v)_2= √(W_L t_L-1(v)_2^2) = √(W_L t_L-1(v)_2^2/t_L-1(v)_2^2t_L-1(v)_2^2/t_L-2(v)_2^2⋯t_2(v)_2^2/t_1(v)_2^2t_1(v)_2^2) .According to <cit.>[Lemma 2], we have:t_l(v)_2^2/t_l-1(v)_2^2∼2/mχ^2(ϱ ),∀ l = 2, ⋯ ,L-1, where ϱ∼Ber(m,1/2).According to <cit.>, with probability at least 1-exp(-Θ(m)) over the randomness of initialization W_l, we have: t_l(v)_2^2/t_l-1(v)_2^2≤ 4,∀ l = 2, ⋯ ,L-1. By the definition of chi-square distribution, we have: W_Lt_L-1_2^2 /t_L-1_2^2∼χ^2(d)/d , Similar, according to <cit.>, with probability at least 1-exp(-Θ(d)) over the randomness of initialization W_L, we have: W_Lt_L-1_2^2 /t_L-1_2^2≤ 2 . And we can derive that:t_1(v)_2^2 = D_1v_2^2 ≤( D_1 _2 v_2)^2 ≤ 1 . Combine <ref>, we have: W_LD_L-1W_L-1⋯D_1 v_2 = √(W_L t_L-1(v)_2^2/t_L-1(v)_2^2t_L-1(v)_2^2/t_L-2(v)_2^2⋯t_2(v)_2^2/t_1(v)_2^2t_1(v)_2^2)≤ 2^2L-1/2 , with probability at least 1-exp(-Θ(d))-(L-2)exp(-Θ(m)) over the randomness of initialization W.i.e. 
T_1 = W_LD_L-1W_L-1⋯D_1_2 ≤ 2^2L-1/2 , with probability at least 1-exp(-Θ(d))-(L-2)exp(-Θ(m)) over the randomness of initialization W.For a perturbation matrices satisfy T_4 = W_l - W_l _2 ≤ω = 𝒪(1/L^3/2) ,∀ l ∈ [L], by <cit.>, we obtain that for any integer s = 𝒪(mω^2/3L) and d ≤𝒪(m/Llogm), with probability at least 1-exp(-Ω(mlog m ω^2/3L )) over the randomness of initialization W, it holds that: T_2 = W_LD_L-1W_L-1⋯D_1 -W_LD_L-1W_L-1⋯D_1 _2 ≤𝒪(ω^1/3L^2√(mlog m)/√(d)) .For T_3, according to <ref>, we have that for every t ≥ 0, with probability at least 1-2exp(-t^2/2) over the randomness of initialization W_1, one has:T_3 = W_1_2 ≤√(2/m)(√(m) + √(d) +t) . Combine <ref>, choose t = √(m) we have: ∂ s_i(x)/∂ x^(i) ≤ (T_1+T_2)×(T_3+T_4) ≲(2^2L-1/2 + ω^1/3L^2√(mlog m)/√(d))×(1/L^3/2+√(2/m)(2√(m) + √(d)))≲2^L√(log m)(√(m) + √(d))/√(d) , with probability at least 1-exp(-Θ(d))-Lexp(-Θ(m)) -exp(-Ω(mlog m)) over the randomness of initialization W.Then, for (∂ s_i(x)/∂ x^(i)), we have that: (∂ s_i(x)/∂ x^(i))^2 ≲2^2Llog m(m + d)/d ,with probability at least 1-exp(-Θ(d))-Lexp(-Θ(m)) -exp(-Ω(mlog m)) over the randomness of initialization W.According to Hoeffding's inequality for bounded random variables <cit.>[Thmorem 2.2.6], we have that: | 1/n∑_i=1^n∂ s_i(x)/∂ x^(i) - 𝔼∂ s_i(x)/∂ x^(i)|≤C_m/12𝔼∂ s_i(x)/∂ x^(i) , with probability at least 1-exp(-Θ(d))-Lexp(-Θ(m)) -exp(-Ω(mlog m)) - 2exp(-Ω(nC_m^2 d^2/2^4L+5(log m)^2 (m^2+d^2) )), and | 1/n∑_i=1^n(∂ s_i(x)/∂ x^(i))^2 - 𝔼(∂ s_i(x)/∂ x^(i))^2 |≤C_m/4 , with probability at least 1-exp(-Θ(d))-Lexp(-Θ(m)) -exp(-Ω(mlog m)) - 2exp(-nC_m^2 d^2/2^4L+5(log m)^2 (m^2+d^2) ). Then we have: | Var(∂ s_i(x)/∂ x^(i)) - V̂âr̂(∂ s_i(x)/∂ x^(i)) | =| 𝔼(∂ s_i(x)/∂ x^(i))^2 - (𝔼∂ s_i(x)/∂ x^(i))^2 - ∑_i=1^n(∂ s_i(x)/∂ x^(i))^2 + (1/n∑_i=1^n∂ s_i(x)/∂ x^(i))^2|≤ | 𝔼(∂ s_i(x)/∂ x^(i))^2 -∑_i=1^n(∂ s_i(x)/∂ x^(i))^2 |+|- (𝔼∂ s_i(x)/∂ x^(i))^2+ (1/n∑_i=1^n∂ s_i(x)/∂ x^(i))^2|≤C_m/4 +|1/n∑_i=1^n∂ s_i(x)/∂ x^(i) - 𝔼∂ s_i(x)/∂ x^(i) | | 𝔼∂ s_i(x)/∂ x^(i)+ 1/n∑_i=1^n∂ s_i(x)/∂ x^(i) |≤C_m/2 ,with probability at least 1-exp(-Θ(d))-Lexp(-Θ(m)) -exp(-Θ(mlog m)) - 2exp(-nC_m^2 d^2/2^4L+5(log m)^2 (m^2+d^2) ).Thus, for i is a leaf and j is not a leaf, according to <ref> and <ref>, we have: Var(∂ s_j(x)/∂ x^(j)) - Var(∂ s_i(x)/∂ x^(i)) ≥ C_m . Then: V̂âr̂(∂ s_i(x)/∂ x^(i)) = V̂âr̂(∂ s_i(x)/∂ x^(i)) - Var(∂ s_i(x)/∂ x^(i)) + Var(∂ s_i(x)/∂ x^(i)) ≤C_m/2 + Var(∂ s_i(x)/∂ x^(i))≤Var(∂ s_j(x)/∂ x^(j)) - C_m/2 = Var(∂ s_j(x)/∂ x^(j)) - V̂âr̂(∂ s_j(x)/∂ x^(j)) + V̂âr̂(∂ s_j(x)/∂ x^(j)) -C_m/2≤C_m/2 + V̂âr̂(∂ s_j(x)/∂ x^(j)) -C_m/2 = V̂âr̂(∂ s_j(x)/∂ x^(j)) . with probability at least 1-exp(-Θ(d))-Lexp(-Θ(m)) -exp(-Θ(mlog m)) - 2exp(-nC_m^2 d^2/2^4L+5(log m)^2 (m^2+d^2) ).Considering all variables, then with probability at least: 1-exp(-Θ(d))-(L+1)exp(-Θ(m))- 2nexp(-nC_m^2 d^2/2^4L+5(log m)^2 (m^2+d^2) ) , that <ref> can completely recover the correct topological order of the non-linear additive Gaussian noise model. 
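The proof above guarantees that, with high probability, the empirical variances of the diagonal entries of the estimated score's Jacobian place leaves below non-leaves, so that iteratively removing the minimum-variance node recovers a valid topological order. The following numpy sketch illustrates this leaf-removal loop; it is a simplified illustration rather than the exact algorithm of <cit.>, the finite-difference Jacobian is our own shortcut, and score_fn_factory is a placeholder for whatever score estimator (e.g., the score-matching network analyzed above) is used on the remaining variables.

# Minimal sketch (with stated assumptions) of leaf identification by the variance
# of the score Jacobian's diagonal: for non-linear additive Gaussian noise models,
# the leaf j minimizes Var_x[ d s_j(x) / d x^(j) ].  `score_fn` is assumed to map an
# (n, d) array of samples to an (n, d) array of score values; its Jacobian diagonal
# is approximated here with central finite differences.
import numpy as np

def jacobian_diag(score_fn, X, eps=1e-3):
    """Estimate diag(ds/dx) at each sample in X; returns an array of shape (n, d)."""
    n, d = X.shape
    diag = np.zeros((n, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        diag[:, j] = (score_fn(X + e)[:, j] - score_fn(X - e)[:, j]) / (2 * eps)
    return diag

def topological_order(score_fn_factory, X):
    """Return an estimated causal order (roots first, leaves last)."""
    remaining = list(range(X.shape[1]))
    order = []
    X_sub = X.copy()
    while remaining:
        score_fn = score_fn_factory(X_sub)        # re-estimate the score on remaining variables
        var_diag = jacobian_diag(score_fn, X_sub).var(axis=0)
        leaf_pos = int(np.argmin(var_diag))       # smallest variance <-> leaf node
        order.append(remaining.pop(leaf_pos))
        X_sub = np.delete(X_sub, leaf_pos, axis=1)
    return order[::-1]

In practice the per-node variances would be computed from a trained score model such as the one whose estimation error is bounded earlier in this appendix; here score_fn_factory simply abstracts that training step away.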
§ PROOF OF THE ERROR BOUND OF SCORE FUNCTION ESTIMATE FOR THE SCORE-BASED GENERATIVE MODELING (<REF>) Firstly, we use oracle inequality to decompose ℒ(ŝ), for any a ∈ (0,1) and a fixed function s, we have: ℒ(ŝ) = ℒ(ŝ) - (1+a)ℒ̂(ŝ) + (1+a)ℒ̂(ŝ) =ℒ(ŝ) - (1+a)ℒ̂(ŝ) + (1+a)inf_s∈𝒮ℒ̂(s) ≤ℒ(ŝ) - (1+a)ℒ̂(ŝ) + (1+a) (ℒ̂(s) - (1+a)ℒ(s)+(1+a)ℒ(s)) = (ℒ(ŝ) - (1+a)ℒ̂(ŝ)) + (1+a)(ℒ̂(s) - (1+a)ℒ(s)) + (1+a)^2 ℒ(s) .First termFor any s∈𝒮, we have:ℓ(x;s) = 1/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)[∇_x_tlog p_0t (x_t|x_0 = x) - s(x_t,t)_2^2 ]dt = 1/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)(x_t-α(t)x/h(t) + s(x_t,t)_2^2 )dt≤3/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)[(x_t/h(t)_2^2 + α(t)x/h(t)_2^2 + s(x_t,t)_2^2 ) ]dt = 3/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)( x_t/h(t)_2^2 )dt + 3/T-t_0∫_t_0^T(α(t)x/h(t)_2^2 )dt + 3/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)(s(x_t,t)_2^2 )dt . For the first part, for forward process SDE <ref> we can easily derive that p_0t (x_t|x_0) ∼𝒩(α(t)x_0, h(t)I_d), where α(t) = e^-t/2 and h(t) = 1- α(t)^2.∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)( x_t/h(t)_2^2 )dt =∫_t_0^T𝔼_x_t∼𝒩(α(t)x_0, h(t)I_d)( x_t/h(t)_2^2 )dt =∫_t_0^T(∑_i=1^d[𝔼_x_t^(i)∼𝒩(α(t)x_0^(i), h(t))(x_t^(i)/h(t))^2 ] )dt =∫_t_0^T(∑_i=1^d[α(t)^2/h(t)^2(x_0^(i))^2 + 1/h(t)] )dt =∑_i=1^d(x_0^(i))^2 ∫_t_0^Tα(t)^2/h(t)^2dt + ∫_t_0^Td/h(t)dt ≤ T-t_0/Tt_0 C_d^2 + d(T-log(t_0)) . For the second part: ∫_t_0^Tα(t)x/h(t)_2^2 dt≤ C_d^2 ∫_t_0^Tα(t)^2/h(t)^2dt = C_d^2 T-t_0/Tt_0 . For the third part, by <cit.>[Lemma C.1]: ∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)(s(x_t,t)_2^2 )dtC_d^2(T-t_0) , with probability at least 1- Lexp(-Ω(m)) over the randomness of initialization W.Combine <ref>, we have: ℓ(x;s) = 3/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)( x_t/h(t)_2^2 )dt + 3/T-t_0∫_t_0^T(α(t)x/h(t)_2^2 )dt + 3/T-t_0∫_t_0^T𝔼_x_t∼ p_0t (x_t|x_0 = x)(s(x_t,t)_2^2 )dt≲3/T-t_0(T-t_0/Tt_0C_d^2 + d(T-log(t_0))+C_d^2T-t_0/Tt_0+C_d^2(T-t_0)) = 3d(T-log(t_0))/T-t_0+3C_d^2+6C_d^2/Tt_0 , with probability at least 1- Lexp(-Ω(m)) over the randomness of initialization W.According to the Bernstein-type concentration inequality <cit.>[Lemma 15], for δ∈ (0,1), a ≤ 1 and τ >0, we have: ℒ(ŝ) - (1+a)ℒ̂(ŝ) ≲1+3/a/n(d(T-log(t_0))/T-t_0+C_d^2)log𝒩_c(τ, 𝒮)/δ+(2+a)τ , with probability at least 1- δ- Lexp(-Ω(m)) over the randomness of initialization W. Second termAccording to the Bernstein-type concentration inequality <cit.>[Lemma 15] and <ref>, for δ∈ (0,1), τ >0 and a fixed function s, , we have: ℒ̂(s) - (1+a)ℒ(s) ≲1+3/a/n(d(T-log(t_0))/T-t_0+C_d^2)log1/δ+(2+a)τ , with probability at least 1-δ- Lexp(-Ω(m)) over the randomness of initialization W. Third termWe can derive that: ℒ(s) = 1/T-t_0∫_t_0^T∇log p_t(·) - s(·,t)_ℓ^2(p_t)^2 dt+ ℒ(s) - 1/T-t_0∫_t_0^T∇log p_t(·) - s(·,t)_ℓ^2(p_t)^2 dt For the first part, according to <ref>, since the error term is invariant with respect to translations on ∇log p_t(·) and the homogeneity of the ReLU neural network, we can omit ∇log p_t(0) = 0 and rescale bound for the input data, for any ε∈ (0,1), there exist an approximation function s satisfying ∇log p_t(·) - s(·,t) _∞≤ε, then we have: 1/T-t_0∫_t_0^T∇log p_t(·) - s(·,t)_ℓ^2(p_t)^2 dt ≤ dε^2 , that satisfy the configuration of network architecture in <ref>.For the second part:ℒ(s) - 1/T-t_0∫_t_0^T∇log p_t(·) - s(·,t)_ℓ^2(p_t)^2 dt = 1/T-t_0∫_t_0^T(𝔼_x_0∼ p_0𝔼_x_t∼ p_0t (x_t|x_0 )[∇_x_tlog p_0t (x_t|x_0 ) - s(x_t,t)_2^2 ]- ∇log p_t(·) - s(·,t)_ℓ^2(p_t)^2 )dt . 
According to <cit.>, we have: 𝔼_x_0∼ p_0𝔼_x_t∼ p_0t (x_t|x_0 = x_(i))[∇_x_tlog p_0t (x_t|x_0 = x_(i)) - s(x_t,t)_2^2 ]- ∇log p_t(·) - s(·,t)_ℓ^2(p_t)^2=𝔼_x_0∼ p_0𝔼_x_t∼ p_0t (x_t|x_0 )[∇_x_tlog p_0t (x_t|x_0 )_2^2 ] - ∇log p_t(·)_ℓ^2(p_t)^2 , which is an absolute value that does not depend on s. So we can define that: E_2 𝔼_x_0∼ p_0𝔼_x_t∼ p_0t (x_t|x_0 )[∇_x_tlog p_0t (x_t|x_0 )_2^2 ] - ∇log p_t(·)_ℓ^2(p_t)^2 . So if we choose s is the approximation function that <ref> provide, then we have: ℒ(s) ≤ dε^2 +E_2.Putting things togetherCombine all three terms, we have: ℒ(ŝ)≤(ℒ(ŝ) - (1+a)ℒ̂(ŝ)) + (1+a)(ℒ̂(s) - (1+a)ℒ(s)) + (1+a)^2 ℒ(s)≤(ℒ(ŝ) - (1+a)ℒ̂(ŝ)) + (1+a)(ℒ̂(s) - (1+a)ℒ(s)) + (1+a)^2 (dε^2 +E_2) = (ℒ(ŝ) - (1+a)ℒ̂(ŝ)) + (1+a)(ℒ̂(s) - (1+a)ℒ(s)) + (1+a)^2 dε^2 + (2a+a^2)E_2 + E_2 Then: 1/T-t_0∫_t_0^T∇log p_t(·) - ŝ(·,t)_ℓ^2(p_t)^2 dt =ℒ(ŝ) - E_2=(ℒ(ŝ) - (1+a)ℒ̂(ŝ)) + (1+a)(ℒ̂(s) - (1+a)ℒ(s)) + (1+a)^2 dε^2 + (2a+a^2)E_2 ≲ (1+3/a/n(d(T-log(t_0))/T-t_0+C_d^2)log𝒩_c(τ, 𝒮)/δ+(2+a)τ)+ (1+a)(1+3/a/n(d(T-log(t_0))/T-t_0+C_d^2)log1/δ+(2+a)τ) + (1+a)^2 dε^2 + (2a+a^2)E_2 , with probability at least 1- 2δ- 2Lexp(-Ω(m)) over the randomness of initialization W.Let a = ε^2 and τ = 1/n, then we have:1/T-t_0∫_t_0^T∇log p_t(·) - ŝ(·,t)_ℓ^2(p_t)^2 dt ≲1/nε^2(d(T-log(t_0))/T-t_0+C_d^2)log𝒩_c(1/n, 𝒮)/δ+1/n + dε^2 , with probability at least 1- 2δ- 2Lexp(-Ω(m)) over the randomness of initialization W.§ DISCUSSION OF LIPSCHITZ PROPERTY OF SCORE FUNCTIONHere we provide an example to illustrate how the Lipschitz constant of the score function in a causal model is related to the model's nonlinear functions.Here we give an example with d=3, the causality is x^(1)⇒ x^(2)⇒ x^(3).According to <ref>, we have that: s_1(x) = -x^(1)/σ_1^2 +∂ f_2(x^(1))/∂ x^(1)ϵ_2/σ_2^2,s_2(x) = f_2(x^(1)) - x^(2)/σ_2^2 +∂ f_3(x^(2))/∂ x^(2)ϵ_3/σ_3^2,s_3(x) = f_3(x^(2)) - x^(3)/σ_3^2 . Then we can derive that: ∂ s_1(x)/∂ x^(2) = ∂ s_1(x)/∂ x^(3) = ∂ s_2(x)/∂ x^(3) = ∂ s_3(x)/∂ x^(1) = ∂ s_3(x)/∂ x^(2) = 0 ,∂ s_1(x)/∂ x^(1) = -1/σ_1^2 + ∂^2 f_2(x^(1))/∂ x^(1)2ϵ_2/σ_2^2 ,∂ s_2(x)/∂ x^(2) = -1/σ_2^2 + ∂^2 f_3(x^(2))/∂ x^(2)2ϵ_3/σ_3^2 ,∂ s_2(x)/∂ x^(1) = ∂^2 f_3(x^(2))/∂ x^(2)∂ x^(1)ϵ_3/σ_3^2 ,∂ s_3(x)/∂ x^(3) = -1/σ_3^2 . We denote J as the Jacobian of the score function. Then we can derive: J_ℓ_∞= max( | -1/σ_1^2 + ∂^2 f_2(x^(1))/∂ x^(1)2ϵ_2/σ_2^2 |, | -1/σ_2^2 + ∂^2 f_3(x^(2))/∂ x^(2)2ϵ_3/σ_3^2 | +| ∂^2 f_3(x^(2))/∂ x^(2)∂ x^(1)ϵ_3/σ_3^2 | ,1/σ_3^2)≤max(1/σ_1^2 +|sup∂^2 f_2(x^(1))/∂ x^(1)2ϵ_2/σ_2^2 |, 1/σ_2^2 +| sup∂^2 f_3(x^(2))/∂ x^(2)2ϵ_3/σ_3^2 | +| sup∂^2 f_3(x^(2))/∂ x^(2)∂ x^(1)ϵ_3/σ_3^2 | ,1/σ_3^2) . Then we have that for any L satisfy: L≥max(1/σ_1^2 +|sup∂^2 f_2(x^(1))/∂ x^(1)2ϵ_2/σ_2^2 |, 1/σ_2^2 +| sup∂^2 f_3(x^(2))/∂ x^(2)2ϵ_3/σ_3^2 | +| sup∂^2 f_3(x^(2))/∂ x^(2)∂ x^(1)ϵ_3/σ_3^2 | ,1/σ_3^2) , then the L is one of the Lipschitz constants of the score function.According to the previous analysis, we can obtain the Lipschitz property of the score function by imposing some assumptions on the nonlinear function and noise of the model. For causal models with more complex DAG, the relationship between Lipshcitz and the model will be more complicated, but as long as the second derivatives of nonlinear functions are bounded and the variance of the noise is non-zero, the score function has Lipschitz property, and the value of the Lipschitz constant depends on the nonlinear function and noise of the model.§ BROADER IMPACTSThis is a theoretical work that provides theoretical analysis for causal inference based on score matching. 
As such, we do not expect our work to have a negative societal impact, as we do not focus on obtaining state-of-the-art results on a particular task. On the contrary, our work can have various benefits for the community:
* Causal inference is crucial in fields such as medicine, the social sciences, and economics for understanding the essence of phenomena and formulating effective intervention measures. The outcomes of this work provide researchers in these fields with more reliable and interpretable theoretical insights into causal inference, driving scientific advancement and societal development. In addition, score matching-based causal inference methods can help uncover hidden causal effects and mechanisms, providing a scientific foundation for decision-making in areas such as social equity, educational policy, and medical interventions.
* The theoretical framework and methods developed in this work can inspire and inform other causal inference approaches, fostering interdisciplinary research and collaboration, and expanding the application scope of causal inference in different domains.
http://arxiv.org/abs/2310.18123v1
{ "authors": [ "Zhenyu Zhu", "Francesco Locatello", "Volkan Cevher" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20231027130956", "title": "Sample Complexity Bounds for Score-Matching: Causal Discovery and Generative Modeling" }
Mandy C. Chen (ORCID 0000-0002-8739-3163), Department of Astronomy and Astrophysics, The University of Chicago, Chicago, IL 60637, USA
Hsiao-Wen Chen (ORCID 0000-0001-8813-4182), Department of Astronomy and Astrophysics, The University of Chicago, Chicago, IL 60637, USA
Michael Rauch (ORCID 0000-0002-1690-3488), The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA
Zhijie Qu (ORCID 0000-0002-2941-646X), Department of Astronomy and Astrophysics, The University of Chicago, Chicago, IL 60637, USA
Sean D. Johnson (ORCID 0000-0001-9487-8583), Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA
Joop Schaye (ORCID 0000-0002-0668-5560), Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, the Netherlands
Gwen C. Rudie (ORCID 0000-0002-8459-5413), The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA
Jennifer I-Hsiu Li (ORCID 0000-0002-0311-2812), Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA; Michigan Institute for Data Science, University of Michigan, Ann Arbor, MI 48109, USA
Zhuoqi (Will) Liu (ORCID 0000-0002-2662-9363), Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA
Fakhri S. Zahedy (ORCID 0000-0001-7869-2551), The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, Pasadena, CA 91101, USA
Sebastiano Cantalupo (ORCID 0000-0001-5804-1428), Department of Physics, University of Milan Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy
Erin Boettcher (ORCID 0000-0003-3244-0409), Department of Astronomy, University of Maryland, College Park, MD 20742, USA; X-ray Astrophysics Laboratory, NASA/GSFC, Greenbelt, MD 20771, USA; Center for Research and Exploration in Space Science and Technology, NASA/GSFC, Greenbelt, MD 20771, USA

Turbulent motions in the circumgalactic medium (CGM) play a critical role in regulating the evolution of galaxies, yet their detailed characterization remains elusive. Using two-dimensional velocity maps constructed from spatially-extended [O2] and [O3] emission, <cit.> measured the velocity structure functions (VSFs) of four quasar nebulae at z≈0.5–1.1. One of these exhibits a spectacular Kolmogorov relation. Here we carry out an ensemble study using an expanded sample incorporating four new nebulae from three additional QSO fields. The VSFs measured for all eight nebulae are best explained by subsonic turbulence in the cool gas (T∼10^4 K), which in turn strongly suggests that the gas is dynamically coupled to the hot ambient medium. Previous work demonstrates that the largest nebulae in our sample reside in group environments with clear signs of tidal interactions, suggesting that environmental effects are vital in seeding and enhancing turbulence within the gaseous halos, ultimately promoting the formation of the extended nebulae. No discernible differences are observed in the VSF properties between radio-loud and radio-quiet QSO fields. We estimate the turbulent heating rate per unit volume, Q_turb, in the QSO nebulae to be ∼ 10^-26–10^-22 erg cm^-3 s^-1 for the cool phase and ∼ 10^-28–10^-25 erg cm^-3 s^-1 for the hot phase. This range aligns with measurements in the intracluster medium and star-forming molecular clouds but is ∼10^3 times higher than the Q_turb observed inside cool gas clumps on scales ≲1 kpc using absorption-line techniques. We discuss the prospect of bridging the gap between emission and absorption studies by pushing the emission-based VSF measurements to below ≈10 kpc.
§ INTRODUCTIONThe circumgalactic medium (CGM) is the outermost, gaseous envelope of a galaxy, extending beyond the visible stellar disk and containing the majority of the baryons in the galaxy. This main gas reservoir records critical information about a galaxy’s past and ongoing interactions with the surrounding environment. Due to the tenuous nature of the CGM, absorption spectroscopy using bright background sources – predominantly quasi-stellar objects (QSOs) – has been the main probe of gaseous halos, yielding sensitive constraints on gas density, temperature, metallicity, and ionization state <cit.>. Over the past decade, the advent of wide-field, high-throughput integral field spectrographs (IFSs) has provided a spatial resolving power that complements the pencil-beam probe from QSO absorption spectroscopy, greatly aiding in the investigation of the CGM. Various dynamical processes in the CGM, such as infalls, outflows, and tidal interactions, can now be spatially and spectrally mapped by IFSs via strong nebular emission lines <cit.>. One particularly exciting prospect with these resolved kinematic measurements is the robust constraint of turbulent motions in low-density gas.With a high Reynolds number, ionized, diffuse plasma such as the CGM is expected to be turbulent <cit.>, which can manifest as large density fluctuations commonly observed in extended emission at tens of kpc scales in gaseous halos (e.g., Travascio et al, in prep). Turbulence plays a critical role in several key processes in the CGM, such as mixing/transporting metals <cit.>, facilitating multiphase structure formation <cit.>, and offsetting radiative cooling <cit.>.Until recently, observing turbulence in circumgalactic/intergalactic gas has had to rely on two approaches employing high-resolution absorption line spectra of background QSOs.One approach is to observe line widths of ions with different masses and isolate the turbulent contribution to the velocity profile along the line of sight <cit.>. Alternatively, if multiple lines of sight (e.g., to gravitationally lensed QSO images) are available, turbulence can be measured as a function of transverse separation between the lines of sight to form the structure functions for the line-of-sight velocities <cit.>. With the advent of IFS, spatially-resolved velocity maps of entire gaseous galactic halos can now be obtained in one shot, enabling the simultaneous measurement of the turbulent power spectrum over a wide range of scales, thus providing multiple independent constraints on the nature of turbulence and the turbulent energy transfer in the gas.lccrccl 1 Summary of the QSO properties. 0ptField name^a z_ QSO L_ bol (erg s^-1) N_ group^b σ_v,group^c (km s^-1) Radio mode ReferencesPKS0454-22^* 0.5335 ≈10^47.0 23 ≈ 320 Loud <cit.>PKS0405-123 0.5731 ≈10^47.3 20 ≈ 430 Loud <cit.>HE0238-1904 0.6282 ≈10^47.2 34 ≈ 400 Quiet <cit.> PKS0552-640 0.6824 ≈10^47.4 10 ≈ 335 Loud Johnson et al. (2023)J0454-6116^* 0.7861 ≈10^46.919 ≈ 300 Quiet Li et al. in prep. J2135-5316^* 0.8115 ≈10^47.3 3 – Quiet Li et al. in prep. TXS0206-048^* 1.1317 ≈10^47.3 27 ≈ 550 Loud <cit.> ^a VSF analyses for fields marked with ^* (i.e., PKS0454-22, J0454-6116, J2135-5316 and TXS0206-048) were presented in ; PKS0405-123, HE0238-1904 and PKS0552-640 are three newly included fields in this work (see Section <ref>). ^b Number of spectroscopically-identified group member galaxies, including the QSO host.^c Velocity dispersion of the group. 
Recently, <cit.> (hereafter Paper I) obtained two-dimensional line-of-sight velocity maps of line-emitting gas around four QSOs up to scales of ∼ 100 kpc using IFS data.Taking advantage of the spatially-resolved velocity maps from IFS observations, these authors constructedvelocity structure functions (VSFs), S_p, defined as S_p(r)=⟨ | v(x) - v(x+r) |^p ⟩,where x and r represent, respectively, a position in the velocity map and the distance vector between two positions separated by r. The exponent p is generally referred to as the order of the VSFs, and ⟨⟩ denotes the mean value averaged over all available velocity pairs separated by r. As can be seen from the definition, S_p quantifies the scale-dependent variance of a velocity field <cit.>, and has been commonly used to probe the dynamical state of the interstellar medium (ISM) in local H2 regions <cit.> as well as in the intracluster medium (ICM) in nearby cool-core clusters <cit.>. While the uncertainties remained large for three QSO nebulae,found that in one particular nebula, the gas dynamics can be unambiguously characterized by the Kolmogorov relation, expected for subsonic, isotropic and homogeneous turbulent flows.Building upon the sample studied in , in this follow-up paper, we include results from four nebulae discovered in three new QSO fields. Combining this new sample with the previous one establishes a sample of eight QSO nebulae that allows us to carry out an ensemble study of the empirical properties of CGM turbulence in distant QSO host halos. The QSOs are all luminous, with a bolometric luminosity of ∼ 10^47 erg s^-1, and span a range in redshift from z≈ 0.5 to z≈ 1.1.The nebulae are revealed in and/or line emission (see Figure <ref>) and are selected to have an extended, contiguous emission area ≳ 1500 kpc^2. Table <ref> summarises the properties of the QSOs in the sample. Out of the seven QSOs, four are radio-loud, and three are radio-quiet.This paper is organized as follows. In Section <ref>, we describe the observations of the ensemble sample and the subsequent velocity measurements using the emission line features.Based on the spatially-resolved velocity maps, we present the VSFs for all eight nebulae in Section <ref>.We discuss the implications of the results in Section <ref> and conclude in Section <ref>.Throughout this paper, we adopt a flat ΛCDM cosmology with H_0=70  km  s^-1  Mpc^-1, Ω_M=0.3 and Ω_Λ = 0.7 when deriving distances, masses and luminosities.All distances quoted are in physical/proper units. § OBSERVATIONS AND DATA ANALYSISTo constrain the turbulent energy spectrum, we follow the approach described into construct the VSFs of four nebulae found in three new QSO fields, PKS 0405-123, HE 0238-1904, and PKS 0552-640.In this section, we briefly summarize the IFS observations and the steps we took to construct a spatially-resolved velocity map based on a line profile analysis of [O2]λλ3727, 3729 and [O3]λ5008 emission lines in these QSO fields. §.§ IFS observationsTo measure the spatially-resolved kinematics in the plane of the sky for the QSO nebulae in our sample, we use the IFS observations obtained using the Multi-Unit Spectroscopic Explorer <cit.> on the VLT UT4.The Wide-Field-Mode (WFM) was used to observe all seven fields, offering a field-of-view (FOV) of 1×1 for a single pointing and a spatial sampling of 02 per pixel.MUSE covers a wavelength range of 4750–9350and has a spectral resolving power of R≈ 2000–4000, with a higher resolution at longer wavelengths.lccrc 2 Journal of MUSE observations. 
0pt Seeing^a Field name RA(J2000) Dec.(J2000) t_ exp (s) (arcsec) PKS0454-22 04:56:08.90 -21:59:09.1 2700 06 PKS0405-123 04:07:48.48 -12:11:36.1 3510007HE0238-1904 02:40:32.58 -18:51:51.4 3150008PKS0552-640 05:52:24.60 -64:02:10.9 600008 J0454-6116 04:54:15.95 -61:16:26.6 5100 07 J2135-5316 21:35:53.20 -53:16:55.8 6840 06 TXS0206-048 02:09:30.74 -04:38:26.5 28800 07 ^a Atmospheric seeing FWHM measured using the QSO at 7000.To improve the quality of line fitting, each combined data cube was convolved with a Gaussian kernel of FWHM=07. This yielded a total PSF FWHM of ≈ 09–10, corresponding to a projected separation of 6-9 kpc at the redshifts of these QSOs.Table <ref> lists the coordinates, exposure time, and atmospheric seeing conditions of our sample.Out of the seven QSO fields, the measurements for four fields (PKS0454-22, J0454-6116, J2135-5316, and TXS0206-048) were presented in .The three newly included fields (PKS0405-123, HE0238-1904, and PKS0552-640) are all part of the MUSE Quasar-field Blind Emitters Survey (MUSEQuBES) program, and we use the MUSE-DEEP datacubes directly downloaded from the ESO phase-3 archive with program IDs 097.A-0089(A) and 094.A-0131(B) <cit.>. §.§ Construction of velocity maps As described in, the main steps to construct a two-dimensional velocity map include removing the contamination from the QSO point spread function (PSF), subtracting continuum flux across the whole MUSE FOV, constructing optimally extracted narrow-band images for and lines using three-dimensional masks, and finally fitting Gaussian components to the emission signals and optimizing the parameters via an MCMC analysis.Readers can find the detailed descriptions and associated technical considerations of each step in . Note that to increase the signal-to-noise ratio for faint spaxels in the outskirts of a nebula, we smooth the data cubes in the spatial dimension with a two-dimensional Gaussian kernel of full-width-at-half-maximum of FWHM=07, leading to a total PSF FWHM of ≈ 09–10 (see Table <ref>), corresponding to ≈ 6–9 kpc at the QSO redshifts. A subset (≈10–20%) of spaxels in the nebulae (mostly towards the inner region in the vicinity of the QSOs) exhibit multiple velocity components, which can be identified clearly with the line.With MUSE spectral resolution and due to the doublet nature of the line,multiple velocity components are only obvious for narrow features with a velocity dispersion ≲ 50 km/s.In , we demonstrated that different ways of handling the multi-component spaxels (e.g., adopting the flux-weighted mean velocity versus using the velocity of the strongest component) do not lead to significant differences in the VSF measurements.The insensitivity of the VSFs to the treatment of multi-component spaxels can be attributed to the relatively small proportion of spaxels requiring a multi-component fit, and that the majority of such spaxels exhibit a single prominent component that dominates the kinematics.Therefore, we opt to take the simple approach of using a single Gaussian function when fitting the lines.We also treat and from the same spaxels separately when conducting the line fitting, allowing the two lines to have different velocities and line widths. 
This decision is motivated by the observation that for spaxels requiring multiple velocity components, there exists spatial variation in the [O3]/[O2] ratio across different components, resulting in a different flux-weighted mean velocity for the two lines.In addition, the two lines have different footprints within the same nebula due to different signal-to-noise ratios and emission strengths. Therefore, to keep the analyses simple without sacrificing the accuracy of the velocity measurements, we opt to measure [O2] and [O3] separately.§.§ VSF measurements For the three new QSO fields presented in this paper, we show the continuum- and QSO-subtracted narrow-band images in Figure <ref>. The narrow-band images for PKS0454-22, J0454-6116, J2135-5316 and TXS0206-048 have already been presented in Figure 1 of . As described in Section 3.5 of , to ensure the robustness of the VSF measurements, we exclude spaxels with a velocity uncertainty larger than 45 km/s.We also examine the velocity map for each nebula in tandem with the broadband images from either MUSE or HST to identify spaxels that are likely to originate from continuum sources.If such spaxels exhibit distinctly different velocities and line widths from the rest of the nebula, we exclude them because such continuum sources are likely tobe separate from the rest of the nebula, and are simply projected to be within the nebula footprint. Finally, we exclude spaxels that are outliers (≈ 2 per cent tail on both the blue and red ends) in the probability density distribution of the velocities in each field.After the above-mentioned steps, all spaxels left in the velocity maps are included in the subsequent VSF calculation, as shown in the top left panels of Figures <ref>–<ref>.Summing over all spaxels included in the VSF analyses, the total luminosity and area for each nebula are listed in Table <ref>. lcccc 3 Summary of emission properties in QSO nebulae^a. 0pt2cLuminosity (erg s^-1) 2cNebula area (kpc^2) 2-34-5 Field name[O2] [O3] [O2] [O3] PKS0454-22 1.9× 10^42 2.2× 10^431552 2202PKS0405-123 S 1.2× 10^42 2.8× 10^422765 3171PKS0405-123 E 1.6× 10^42 3.2× 10^423839 4667HE0238-1904 3.2× 10^42 4.2× 10^425081 5356PKS0552-640 4.0× 10^42 1.2× 10^434105 3533 J0454-6116 3.5× 10^42 5.3× 10^42 3821 2128 J2135-5316 2.5× 10^42 9.2× 10^42 1614 2190 TXS0206-0482.0× 10^43 – 6239 – ^a Luminosities and nebula sizes are summed over the spaxels used for the subsequent VSF analyses, which encompass a smaller area than shown in Figure <ref>. Refer to velocity maps (e.g. Figures <ref>–<ref>) for the regions included in the VSF calculation. Within the spectral coverage of MUSE, we observe both and emission for six out of seven QSO fields in our sample, and we present the results based on both lines for these fields.For TXS0206-048 at z≈ 1.1, the line is redshifted out of the MUSE spectral window, and therefore only results based on are presented.PKS0405-123 consists of three main nebulae that are cleanly separated in velocity-position space <cit.>. For the purpose of this paper, we analyze the southern and eastern nebulae of PKS0405-123 separately and refer to them as PKS0405-123 S and PKS0405-123 E, and we do not include the nebula immediately surrounding the QSO in this field due to its relatively small size. We measure the VSFs up to order p=6 for all eight nebulae following the definition of Equation <ref>. 
VSFs with p>6 become too noisy to providemeaningful constraints.Due to the spatial correlation between spaxels that are separated by distances less than the size of the total PSF, not all velocity pairs in each distance bin are independent.Therefore, to obtain a more robust estimate of the uncertainty in the VSF measurements, we adopt the modified bootstrap method described in .In addition, as shown in , the spatial correlation due to atmospheric seeing and the additional Gaussian smoothing applied to the datacubes preferentially removes power from small scales and steepens the VSFs. This smoothing effect can be explicitly accounted for by employing a Gaussian-convolved2nd-order VSF, S_2^',S^'_2(r) = 2[Γ^'(0) - Γ^'(r)],where Γ^' is a Gaussian-convolved velocity autocorrelation function, Γ^'(r)=Γ(r)∗Γ_g(r).Here Γ(r) and Γ_g(r) are the autocorrelation function of the velocity field and the smoothing kernel, respectively. A more detailed derivation for Equation <ref> can be found in Equations 2–7 of .To quantify the slopes of the 2nd-order VSFs, we adopt a single power-law model:S_2∝ r^γ_2. When fitting the observed S_2^' with a power-law model, we conduct the convolution in Equation <ref> numerically, and find the best-fitting γ_2 with theroutine for each of the 1000 modified bootstrap samples described above to obtain the mean and dispersion of γ_2.Note that we only consider non-negative slopes of γ_2, which is motivated by data and avoids divergent values at r=0. With the IFS data, observations are confined to projected quantities both in velocity and spatial separations. Therefore, we report the VSF measurements using the line-of-sight velocities and the projected spatial separation r_ proj in the plane of the sky. The potential limitations due to the projection effect will be discussed in further detail in Section <ref>.§ RESULTS Using the velocity maps constructed for individual nebulae, we proceed with the VSF analysis using the full sample of eight extended nebulae. Recall that while it is relatively straightforward to measure the VSFs using spatially-resolved velocity maps, a primary systematic uncertainty is possible contributions to the observed signal from coherent bulk motions projected in the plane of the sky. To account for this uncertainty, we follow the approach adopted byand consider a simple, unidirectional velocity gradient model parameterized as v(x,y)=ax+by+c, where x and y are coordinates of individual spaxels, and a, b, and c are the free parameters.For each nebula, we measure the VSFs with and without the best-fitting two-dimensional bulk-flow model subtracted.The amplitudes of the best-fitting gradient for the and emission lines in each field are listed in Table <ref>. We estimate the uncertainty of this velocity gradient by repeating the fitting with 1000 randomly perturbed velocity maps based on the MCMC chains for each spaxel and find that the uncertainties are small (≲ 0.1 km s^-1) for all nebulae.Therefore, we do not list the uncertainties in Table <ref>. 
To identify possible coherent motions dominant along the radial or tangential directions (for example in the case of strong outflows or inflows),also calculated the VSFs using radial and tangential velocity pairs separately and found the VSF measurements to be comparable along these two directions.For the newly analyzed nebulae in this paper, we find a similar trend where radial and tangential VSFs show no clear differences and are therefore not included in the presentation here.In this section, we first examine the general trend displayed in the second-order VSF across all eight QSO nebulae.Then we quantify and compare the best-fitting VSF slope over a finite range of spatial scale where the measurements can be characterized by a power-law function.Finally, we explore the presence or absence of extended self-similarity <cit.> in turbulent flows in QSO host halos by measuring the higher-order VSFs. §.§ The overall shape of VSFs Figure <ref> shows the observed 2nd-order VSFs, S^'_2, for the eight nebulae in our sample. Radio-loud and radio-quiet fields are shown in the top and bottom rows, respectively. The vertical dashed lines mark the FWHM of the total PSF for each field (see Table <ref>).To guide the visual comparison, we overplot the expected S_2 for Kolmogorov turbulence, with the dashed gray line showing the intrinsic 2/3 slope and the solid gray line showing the observed shape of S_2^' after convolving with an appropriate PSF.Because different fields have slightly different PSF sizes, we use the mean value of the PSF FWHM for radio-loud (-quiet) fields when constructing the expected Kolmogorov S_2^' for the top (bottom) row.We also show the power-law with a slope of 1 (e.g., Burger's turbulence), without convolving with a PSF, in dotted gray lines.The comparison between the data and the model S_2 with slopes 2/3 and 1 underlines the importance of including the PSF effect when quantifying the observed VSF slopes.In particular, if the probed distance separation, r_ proj, is ≲ 10–20 times the PSF FWHM, the PSF smoothing effect can significantly steepen the apparent slope of the VSFs and a naive visual inspection will lead to the wrong conclusion that the VSF slopes are steeper than their intrinsic values. The VSFs obtained using the gradient-removed velocity maps are also included in Figure <ref> for comparison. As shown in Figure <ref>, all nebulae in our sample exhibit an overall increasing trend of velocity fluctuations with increasing spatial scale.The values of ⟨Δ v^2⟩ range from ≈ 5000–10,000 km^2/s^2 at r_ proj≈10 kpc to ≈ 10,000–80,000 km^2/s^2 at r_ proj≈50 kpc.The results based on the and lines are consistent within the uncertainty for fields with both lines. 
In general, we do not expect the VSFs constructed from and lines to be identical, because the footprints of the two emission lines in the nebulae do not overlap completely due to the different signal-to-noise ratios of the two lines at different locations.For regions with overlapping footprints from both and emission, the line-of-sight velocities can also differ for spaxels with multiple velocity components and varying [O3]/[O2] line ratios between components, as discussed in Section <ref>.We will show below that the VSFs from [O2] and [O3] lines lead to consistent constraints on the dynamical state of the gas.In addition, the removal of a large-scale, unidirectional velocity gradient generally flattens the VSFs via preferentially reducing the power at larger distance separations.Nonetheless, the constrained slopes for a single power-law fit are consistent before and after the removal of the gradient, as we will discuss in the following section.lcccccccccc 4 Summary of the power-law slopes of the VSFs constructed using [O2] and [O3] lines^a.0pt 3c[O2] 2c[O2] grad. removed^b 3c[O3] 2c[O3] grad. removed(lr)2-4(lr)5-6(lr)7-9(lr)10-11 Field name[r_1, r_2]^c γ_2 Gradient^d [r_1, r_2]γ_2[r_1, r_2]γ_2 Gradient[r_1, r_2]γ_2 PKS0454-22 [5.8, 20] <0.782.2 [5.8, 17] <0.66 [5.8, 20] <0.67 5.0 [5.8, 14] <1.45 PKS0405-123 S [7.4, 29] 1.07^+0.20_-0.18 5.8 [7.4, 17] <1.54 [7.4, 34] 0.97^+0.15_-0.15 6.2 [7.4, 17] <1.41 PKS0405-123 E [7.4, 37] 0.76^+0.19_-0.16 6.0 [7.4, 30]0.55^+0.22_-0.21 [7.4, 46] 0.33^+0.11_-0.11 5.8 [7.4, 22] <1.04 HE0238-1904 [8, 29] 0.48^+0.17_-0.18 0.9 [8, 30]0.43^+0.18_-0.18 [8, 33] 0.75^+0.15_-0.15 2.2 [8, 33] 0.88^+0.17_-0.17 PKS0552-640 [8.3, 25] 0.55^+0.28_-0.28 5.0 [8.3, 22]<0.97 [8.3, 32] 0.88^+0.20_-0.22 8.7 [8.3, 37] <0.50 J0454-6116 [7.5, 30] <0.51 1.6 [7.5, 40] <0.45 [7.5, 25] <0.84 2.8 [7.5, 25] <0.33J2135-5316 [7.2, 25] <0.50 0.9 [7.2, 23]<0.65 [7.2, 18] <1.23 1.8 [7.2, 18] <1.12TXS0206-048 [8.5, 60] 0.72^+0.12_-0.11 3.7 [8.5, 40] 0.56^+0.16_-0.17 –– – – – ^a The best-fitting slopes are derived from 1000 modified bootstrap samples, as discussed in Section <ref>. These slopes correspond to the intrinsic power-law slopes for S_2, with our fitting process explicitly addressing the PSF smoothing effect in the measured S^'_2. The reported values are medians along with the 16^ th and 84^ th percentiles.The 3^ th and 97^ th percentiles are approximately double the uncertainty estimates listed here for all fields. 
For the unconstrained results, we present 95% upper limits for the slope, assuming the observed pair separations fall within the inertial range.If the available pair separations are close to injection scales, then no robust constraints can be obtained.We exclusively consider non-negative power-law slopes, in line with the discussion in Section <ref>.^b Measurements obtained after removing a 2D velocity gradient (see<ref>).^c Lower and upper bounds in the projected distance separation, r_ proj, in the unit of kpc, within which the power-law slopes of the VSFs are constrained (see<ref>).^d Best-fitting 2D velocity gradient, in the unit of km/s/kpc.§.§ 2nd-order VSF slopes As shown in Figure <ref>, all VSFs exhibit structures that deviate from a single power-law.In particular, at larger separations, the VSFs can show an overall decreased amplitude (e.g., TXS0206-048 at r_ proj≳ 60 kpc), an overall enhanced power (e.g., J0454-6116 at r_ proj≳ 30 kpc), or an oscillatory behavior (e.g., HE0238-1904 at r_ proj≳ 30 kpc).Such deviations may reflect different levels of velocity fluctuations in the central regions of the nebulae versus the outskirts, as velocity pairs at larger separations are predominantly constructed from spaxels in the outskirts.In addition, large-scale periodic oscillations in the velocity fields can manifest as oscillations in the VSFs at large separations <cit.>.The VSF measurements at larger separations are also more uncertain due to a combined effect of fewer pair counts and uncertain velocity centroids as a result of fainter signals in the outskirts of a nebula. Taking into account the above-mentioned factors, we restrict the fitting to be within a finite range of spatial scales, [r_1, r_2],when employing a single power-law model to quantify the slopes of the VSFs.The lower limit r_1 is chosen to be the FWHM of the total PSF for each field (see Table <ref>), while the upper limit r_2 is chosen through a series of trial and error such that we obtain the lowest reduced χ^2 for the best-fitting model within this range.We refer to r_2 as the VSF turnover scale and will discuss its correlation with the nebula size later in Section <ref>.When constraining the VSF slopes, we explicitly incorporate the smoothing effect in the 2nd-order VSF models before comparing them with the data, as described in Section <ref>. The [r_1, r_2] values as well as the best-fitting slopes for the 2nd-order VSF, γ_2, are listed in Table <ref> using both the directly measured line-of-sight velocity maps and the gradient-removed velocity maps. As mentioned above, removing a large-scale, unidirectional gradient tends to flatten the VSF, leading to a smaller r_2 and weaker constraints on the VSF slopes. The comparisons between best-fitting power-law models and the data for PKS0454-22, J0454-6116, J2135-5316, and TXS0206-048 are shown in , while the models for PKS0405-123 S, PKS0405-123 E, HE0238-1904, and PKS0552-640 are shown in Figures <ref>–<ref> in the Appendix of this paper.Based on the line-of-sight velocity maps directly measured using the and emission lines (top left panels of Figures <ref>–<ref>), the slope γ_2 for the eight nebulae in our sample shows a range of values. 
Specifically, the 16^ th–84^ th measurement percentiles of four nebulae are consistent with the Kolmogorov expectation of γ_2=2/3 (PKS 0405-123 E, HE 0238-1904, PKS 0552-640, and TXS 0206-048), while three nebulae show flatter VSFs (PKS 0454-22, J0454-6116 and J2135-5316).PKS 0405-123 S exhibits a steeper slope but this is also a system that shows a large-scale velocity gradient across the nebula.After removing a unidirectional velocity gradient, the VSF is consistent with the Kolmogorov expectation.Below we discuss these three categories individually. Nebulae with γ_2 consistent with 2/3:the VSF measurements for PKS0405-123 E, HE0238-1904, PKS0552-640 and TXS0206-048 lead to a constrained 2nd-order slope in agreement with the value 2/3.For HE0238-1904 and PKS0552-640, the measurements for both and within the 16^ th–84^ th percentiles are consistent with the Kolmogorov slope.For TXS0206-048, only measurements with are available and the result is consistent with γ_2=2/3. While the VSF slope for the nebula PKS0405-123 E based on is flatter than 2/3 within the 16^ th–84^ th percentiles, the values within the 3^ th–97^ th percentiles using both and emission are in agreement with the Kolmogorov slope and therefore we consider the VSFs of this nebula consistent with the Kolmogorov expectation.Nebulae with γ_2<2/3: for the three nebulae in PKS 0454-22, J0454-6116 and J2135-5316, only upper limits of γ_2 can be obtained and the 95% limits derived from measurements are below 2/3.While the γ_2 upper limits obtained from the measurements are larger than 2/3, the smaller γ_2 upper limits obtained using suggest that the VSF slopes for these three nebulae are likely flatter than the Kolmogorov expectation.As we discussed in , the flatter VSFs may indicate the presence of multiple energy injection scales <cit.> and/or the effect of a dynamically important magnetic field <cit.>.Nebulae with γ_2>2/3: Based on the directly measured velocity fields, PKS0405-123 S exhibits VSFs that are steeper than the expectation of Kolmogorov turbulence. The constraints are consistent using the and measurements.One possible explanation for the steepening of the VSF in this nebula is a strong effect of projection smoothing if the depth of the nebula is larger than the projected distance scales in the plane of the sky (see discussions in Section <ref>). Moreover, the line-of-sight velocity maps for both [O2] and [O3] show a possible velocity shear along the NW-SE direction (see Figures <ref> and <ref>).The best-fitting direction and amplitude for the velocity gradient are consistent between both emission lines, suggesting that the bulk flow can plausibly contribute to the VSF measurements, leading to steeper VSF slopes.Indeed, the VSFs become flatter after we remove a unidirectional velocity gradient, resulting in slope upper limits consistent with the Kolmogorov expectation albeit with larger uncertainties. In summary, using the direct measurements of the line-of-sight velocity fields based on the and/or emission lines, five out of eight nebulae exhibit a 2nd-order VSF slope that is consistent with the expected value of 2/3 for Kolmogorov turbulence while three exhibit a flatter VSF.Incidentally, the three nebulae with a flatter VSF are also the smallest in the sample (see Table <ref>).It is possible that the observations do not have a sufficiently large dynamic range for securing a robust constraint on the shape of the VSF <cit.>. 
§.§ Extended self-similarity (ESS) in turbulent flows In addition to measuring the 2nd-order VSF slope γ_2,also explored the presence of ESS, in which a simple power-law function holds between VSFs of different orderson spatial scales that are outside of the inertial range where the Kolmogorov relation applies <cit.>.This ESS is particularly useful for inferring the energy cascade rate when the inertial range is not well established. Compared with the slopes of VSFs of individual orders, the ESS slope ratios are often better constrained with a higher statistical significance thanks to the tight correlation between different orders. In addition, an enhanced level of intermittency in a velocity field will suppress the VSF slopes at higher orders compared with the slopes of lower orders <cit.>, making the ESS slope ratios a valuable diagnostic for the underlying gas dynamics. Here we explore the presence or absence of ESS in the QSO nebulae by measuring the VSFs up to order p=6.We obtain the slope ratios γ_p/γ_3 for p=1–6 by fitting a single power-law model to the S_p^' vs. S_3^' measurements.As discussed in , the smoothing effect due to the data PSF does not change the ESS slope ratios. The results are displayed in Figure <ref>, where the data points represent the median values obtained from fitting the 1000 modified bootstrap samples (see Section <ref>), and the error bars indicate the 16^ th and 84^ th percentiles.The correlation between 2nd- and 3rd-order VSFs for each nebula are displayed in the right-most panels of Figures <ref>–<ref>. Specifically, we measure γ_p/γ_3 using the and velocity maps as well as their corresponding residual maps after removing a unidirectional velocity gradient. Figure <ref> shows the ESS slope ratios, with radio-loud fields in the top row and radio-quiet fields at the bottom.We also overplot the expected γ_p/γ_3 ratios from different theoretical considerations and numerical simulations, including the Kolmogorov expectation of γ_p/γ_3=p/3 (blue dashed curve), the Kolmogorov turbulence with intermittency correction <cit.>, the expectation for supersonic magnetohydrodynamic turbulence <cit.>, and numerical predictions for hydrodynamic turbulence with Mach numbers of M=0.9 and 6.1 <cit.>.In general, the ratio γ_p/γ_3 is expected to be suppressed significantly at larger p's in supersonic flows with a high Mach number.This can be seen in Figure <ref> where the numerical simulations predict that for gas motions with M=6.1, γ_p/γ_3 does not increase significantly for p>3, showing a plateau in the γ_p/γ_3 curve (dotted lines). While the strongest distinguishing power for different scenarios comes in at higher orders, the measurements are also more uncertain. In addition, removing a large-scale gradient from the velocity field can change the γ_p/γ_3 ratios to be more consistent with predictions for lower Mach numbers (e.g., see the trend for PKS0405-123 S). Within the 16^ th and 84^ th measurement percentile range and considering the results both before and after removing the large-scale velocity gradient, seven out of eight nebulae in our sample show ESS slope ratios consistent with expectations from subsonic turbulence (black solid curve, blue dash-dotted curve, and dash-dotted curve in Figure <ref>).For the nebula surrounding HE0238-1904, the γ_p/γ_3 ratios are consistent with the predictions for supersonic magnetohydrodynamic turbulence as presented in <cit.>, suggesting that the Mach number of gas motions in this field may be higher than that in other nebulae. 
Given that this field has a constrained γ_2 value that is consistent with the Kolmogorov expectation, as discussed above, additional effects (e.g., the presence of a dynamically important magnetic field) might contribute to a relatively small γ_2 in tandem with suppressed γ_p/γ_3 ratios. A more detailed investigation into the properties of this nebula (e.g., ionization state, interactions with group member galaxies) is needed to further shed light on the possible physical causes of this difference in ESS slope ratios, and a larger sample is required to examine whether the HE0238-1904 nebula is a special case. Overall, no system in our sample exhibits ESS slope ratios that indicate gas motions with M≳6. § DISCUSSION We have shown that the 2nd-order VSFs measured for the eight QSO nebulae in our sample exhibit a range of slopes. While five of the nebulae in our sample are consistent with the expected slope of 2/3 for Kolmogorov turbulence, the remaining three exhibit a shallower slope. Despite the range of 2nd-order VSF slopes observed in these QSO nebulae, the measurements suggest that turbulent flows in the [O2] and [O3] line-emitting clouds are subsonic. The subsonic dynamical state of the gas is further corroborated by the ESS slope ratios γ_p/γ_3, which are consistent with theoretical or numerical expectations for subsonic systems with M≲1 in seven out of eight nebulae. None of the systems shows γ_p/γ_3 measurements that are indicative of highly supersonic flows with M≳6. In addition, we do not observe significant differences between radio-loud and radio-quiet QSO fields in terms of nebula size, line emission luminosity, VSF slopes, VSF amplitude, and turbulent energy heating rate. Recall that five of the nebulae in our sample occur near radio-loud QSOs, while the remaining three reside in radio-quiet halos. The main distinguishing characteristic between radio-loud and radio-quiet QSOs is the presence of powerful jets in radio-loud sources that can result in large-scale structures like radio lobes spanning from tens to thousands of kpc in size <cit.>. The mechanical energy output of the collimated jets and the associated inflated bubbles is estimated to be ∼ 10^41–10^46 erg/s <cit.>. If a significant portion of this energy can be deposited into the CGM as kinetic energy, we may expect the VSFs from radio-loud and radio-quiet fields to exhibit different properties. While previous studies have found that radio jets are the dominant mechanism for driving fast outflows in the inner ≲10 kpc regions of radio galaxies <cit.>, the lack of correlation between the observed VSFs and the radio power suggests that the effect of radio jets may be limited to the inner regions and have little influence on the gas kinematics on scales of ≳ tens of kpc. This is in agreement with simulation predictions for the ICM in cool-core clusters <cit.>. A larger sample with both radio-loud and radio-quiet sources will be helpful to draw robust conclusions regarding the difference (or lack thereof) in the CGM dynamics between these two populations. In this section, we first discuss the implications for the dynamical state of the gas in the multiphase CGM and infer the energy transfer rate in these QSO host nebulae. We then discuss potential caveats associated with observational limitations, including projection effects, finite nebula sizes, and the small number of systems in the current sample.
§.§ Implications for the multiphase CGM dynamics Based on the velocity dispersion of member galaxies in the QSO host group environment, the halo mass of the QSO hosts in our sample is estimated to be ≈10^13–10^14 M_⊙ <cit.>. This mass range suggests a virial temperature of T≈ 10^6–10^7 K for the underlying hot halo <cit.>. Meanwhile, the sound speed of the gas can be calculated as c_s=√(γ k_B T/μ m_p), where γ=5/3 is the adiabatic index for an ideal monatomic gas, k_B is the Boltzmann constant, μ is the mean atomic weight (which is 0.588 for fully ionized gas), and m_p is the proton mass. For the cool gas of T≈10^4 K, c_s,cool≈15 km/s, while for the hot medium of T≈ 10^6–10^7 K, c_s,hot≈150–500 km/s. Therefore, for the nebulae in our sample, the Mach number calculated using the sound speed of the cool gas is ℳ_cool = √(3)σ_pos/c_s,cool≈ 7–18, and ℳ_hot = √(3)σ_pos/c_s,hot≈ 0.2–1.8 using c_s,hot for the hot gas. Here σ_pos is the velocity dispersion in the plane of the sky. As we will discuss below in Section <ref>, σ_pos is typically smaller than the velocity dispersion along the line of sight, and the Mach numbers would be larger (ℳ_cool≈ 9–20 and ℳ_hot≈ 0.3–2.0) when estimated using the line-of-sight velocity dispersion. Given the contrast between the two Mach numbers, ℳ_cool and ℳ_hot, the subsonic motions revealed by the VSFs of the nebulae suggest that the [O2] and [O3] emission originates from cool gas clumps embedded in the ambient hot medium. If these cool clumps are in pressure equilibrium with the hot halo, then they can serve as tracers for the kinematics of the volume-filling plasma. The scenario of a dynamically coupled multiphase gaseous system is supported by absorption line studies of CGM kinematics of z∼2 star-forming galaxies <cit.> as well as by recent measurements in the core regions of nearby galaxy groups and clusters <cit.>. There has also been an increasing number of theoretical and numerical predictions arguing for a shared dynamical state across different gas phases <cit.>. The dynamical coupling likely happens through a combination of physical processes involving cooling, the exchange of mass and momentum between cool and hot phases, and the competition between cool clump formation and cloud crushing at different mass/length scales. Turbulence is expected to facilitate these processes, which in turn further feed into the development of turbulence in the gaseous halo. In the absence of turbulence, the condensed cool clumps would tend to settle into more organized structures such as a disk. The extended morphological features of the nebulae in our sample suggest that turbulence is significant in these gaseous halos. Phenomenologically, <cit.> proposed an empirical criterion of t_cool/t_eddy≲ 1 for the condensation and survival of cool gas in clusters and groups, where t_cool is the gas cooling time and t_eddy is the eddy turnover time. Based on the VSF measurements, we can calculate the eddy turnover time via t_eddy≈ϵ^-1/3 l^2/3, where ϵ is the energy transfer rate per unit mass at the spatial scale l (for more discussion of ϵ see Section <ref> below). For the nebulae in our sample, we estimate t_eddy≈ 60–150 Myr at l≈ 10 kpc and t_eddy≈ 150–300 Myr at l≈ 50 kpc.
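The arithmetic behind these estimates is simple enough to spell out explicitly. The sketch below evaluates the sound speed, the Mach number, and the eddy turnover time from the formulas quoted above; the numerical inputs (temperatures, σ_pos, and ϵ) are representative placeholders rather than fitted values for any particular nebula.

import numpy as np

K_B = 1.380649e-16      # Boltzmann constant, erg/K
M_P = 1.67262192e-24    # proton mass, g
KPC = 3.0857e21         # kpc in cm
MYR = 3.156e13          # Myr in s

def sound_speed_kms(T, mu=0.588, gamma=5.0 / 3.0):
    # c_s = sqrt(gamma k_B T / (mu m_p)), returned in km/s
    return np.sqrt(gamma * K_B * T / (mu * M_P)) / 1.0e5

def mach_number(sigma_pos_kms, T):
    # M = sqrt(3) sigma_pos / c_s, assuming isotropic turbulence
    return np.sqrt(3.0) * sigma_pos_kms / sound_speed_kms(T)

def t_eddy_myr(l_kpc, eps_cm2_s3):
    # eddy turnover time t_eddy ~ eps^(-1/3) l^(2/3)
    return eps_cm2_s3 ** (-1.0 / 3.0) * (l_kpc * KPC) ** (2.0 / 3.0) / MYR

# placeholders: sigma_pos = 100 km/s, eps = 0.05 cm^2 s^-3
print(sound_speed_kms(1e4), sound_speed_kms(1e6), sound_speed_kms(1e7))   # ~15, ~150, ~480 km/s
print(mach_number(100.0, 1e4), mach_number(100.0, 1e7))                   # cool vs. hot phase
print(t_eddy_myr(10.0, 0.05), t_eddy_myr(50.0, 0.05))                     # ~85 and ~250 Myr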
While we cannot obtain an estimate of t_cool due to the absence of temperature and metallicity measurements for the hot phase, our measured t_eddy is in agreement with the estimated values (t_eddy≈100–200 Myr for galaxy groups) that fulfill the gas condensation criterion in <cit.> (see their Figure 5). In addition, turbulence in the CGM can also be produced by the Kelvin-Helmholtz instability during the accretion of cool gas streams <cit.>, and motions of fragmented cool gas clumps in disrupted, turbulent mixing zones near the accreting streams are predicted to be subsonic in numerical simulations <cit.>. Among our sample, the nebula in the field of TXS0206-048 exhibits compelling signs of cool, filamentary gas accretion from large scales <cit.>, suggesting that the observed subsonic turbulence may be in part produced by the accreting streams. Finally, previous studies have identified a correlation between the presence of close companions around the QSOs and the presence of strong, extended nebular line emission <cit.>. In our sample, the morphokinematics of some nebulae (e.g., PKS0405-123, HE0238-1904, TXS0206-048) reveal that part of the line-emitting gas originates from the stripped ISM of group member galaxies, as indicated by consistent line-of-sight velocities between the galaxies and the extended nebulae <cit.>. It is natural to assume in these cases that the tidal interactions between group member galaxies disturb the gas and enhance turbulence and thermal instabilities in the hot halo, leading to more efficient cooling and cool clump condensation. The stripped ISM can also serve as massive cool gas seeds that facilitate the coagulation of smaller clumps, aiding subsequent stochastic mass growth in the cool phase <cit.>. The significance of this environmental effect for the formation of extended nebulae is supported by the fact that the nebulae in PKS0405-123, HE0238-1904, and TXS0206-048 are much larger in area than the nebulae in fields such as J0454-6116 and J2135-5316, where no massive close companions with consistent line-of-sight velocities were found within the nebula footprints. §.§ Energy transfer rate over seven decades in spatial scale As described in our previous work, the energy transfer rate per unit mass ϵ can be calculated via the “four-fifths law” <cit.>: ϵ = 5/4[|⟨Δ v (r)^3⟩|/r] ≈ 5/4[⟨ |Δ v (r)|^3⟩/r]. For Kolmogorov turbulence, ϵ is a constant at all scales within the inertial range. For VSFs flatter (steeper) than the Kolmogorov expectation, the energy transfer rate would be higher (lower) on smaller spatial scales. Across the different nebulae in our sample and on different scales between 10–60 kpc, the estimated ϵ shows a range of values between ≈0.02 cm^2 s^-3 and ≈0.2 cm^2 s^-3. For nebulae with both [O2] and [O3] measurements, the values obtained using these two lines are consistent within the uncertainty. This estimated range of ϵ for our sample is comparable to the measurements for Hα filaments in the core regions of nearby cool-core clusters <cit.> and molecular clouds in nearby H2 regions <cit.>. Much lower estimates of ϵ≈ 10^-7–10^-3 cm^2 s^-3 were obtained for CGM cool clumps probed through absorption line spectroscopy <cit.> and for a Milky Way high-velocity cloud <cit.>. To gain further insights into the differences between these dynamical systems, we convert the estimated ϵ to a turbulent heating rate per unit volume via Q_turb=ρϵ, where ρ is the gas density, which can span a wide range for gas in different phases.
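A minimal numerical version of the ϵ estimate is given below: it applies ϵ ≈ (5/4)⟨|Δv|³⟩/r to a binned third-order VSF, with the conversion from (km/s)³ per kpc to cm² s⁻³ written out explicitly. The input values are placeholders chosen only to land in a plausible range, not measurements from any of the nebulae discussed here.

import numpy as np

KPC_CM = 3.0857e21   # kpc in cm
KMS_CM = 1.0e5       # km/s in cm/s

def energy_transfer_rate(r_kpc, s3_kms3):
    # eps ~ (5/4) <|dv|^3> / r  (four-fifths law), returned in cm^2 s^-3
    s3_cgs = s3_kms3 * KMS_CM ** 3          # (km/s)^3 -> (cm/s)^3
    return 1.25 * s3_cgs / (r_kpc * KPC_CM)

# example: <|dv|^3> ~ (100 km/s)^3 at a separation of 10 kpc gives eps ~ 0.04 cm^2 s^-3
print(energy_transfer_rate(10.0, 100.0 ** 3))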
For the QSO nebulae in our sample, the [O2] doublet line ratios suggest a median upper limit on the gas density of the T∼10^4 K cool phase of ≲40 cm^-3 <cit.>, while an estimate of ≈1–5 cm^-3 is obtained assuming pressure equilibrium between typical AGN-illuminated [O2]-emitting gas and the hot halo <cit.>. Based on the [S2]λλ6716,6731 doublet ratio, observations of the spatially extended nebula illuminated by the active galactic nucleus (AGN) in the Teacup galaxy at z∼0.1 show that the gas density at distances of a few kpc from the galaxy center is ≲10 cm^-3 <cit.>. Therefore, we adopt a range of 1–40 cm^-3 for the cool phase gas when calculating Q_turb to account for this wide range of uncertainty. For the hot phase with T≈10^6–10^7 K, we adopt a density range of 0.01–1 cm^-3 <cit.>. We obtain an estimated Q_turb of ≈10^-26–10^-22 erg cm^-3 s^-1 for the cool gas and ≈10^-28–10^-25 erg cm^-3 s^-1 for the hot gas, as shown by the blue and red shaded regions in Figure <ref>. <cit.> constrained the Q_turb of the ICM in the core regions of nearby cool-core clusters to be ≈ 10^-28–10^-25 erg cm^-3 s^-1 (the gray shaded region in Figure <ref>), in agreement with our result for the hot phase. For star-forming molecular clouds, measurements across a wide range of spatial scales of ≈0.01–100 pc led to an estimate of Q_turb≈10^-27–10^-24 erg cm^-3 s^-1, as presented in <cit.> and shown by the brown shaded region in Figure <ref>. <cit.> measured the density and kinematics of a bright concentration near the edge of Complex C, an HVC in the Milky Way, which resulted in an estimated Q_turb≈10^-30–10^-28 erg cm^-3 s^-1, as shown by the green shaded region in Figure <ref>. Using non-thermal velocity widths of resolved absorption profiles and clump sizes inferred from photoionization models, <cit.> constrained Q_turb to be ≈10^-30–10^-27 erg cm^-3 s^-1 for spectrally resolved cool clumps with size scales of ≈ 10 pc – 1 kpc in the CGM. These are shown by the gray data points in Figure <ref>. It can be seen that the turbulent heating rates in the QSO nebulae, the cool-core cluster ICM, and the star-forming molecular clouds are on average ∼ 1000 times higher than those in the MW HVC and the cool gas clumps probed in absorption. Given that both Complex C and the cool absorption clumps are expected to be in relatively quiescent, undisturbed environments <cit.>, a possible explanation for this difference is that feedback due to star formation and AGN activity can significantly elevate the turbulent energy in the gaseous halos. However, caveats remain in this interpretation. As discussed in the previous section, the galaxy environments of the largest extended nebulae hint towards a scenario where tidal/merger interactions play a key role in stirring up the gas and facilitating the formation of multiphase structures, and the presence of a large amount of cool gas near the QSOs can lead to more efficient black hole accretion <cit.>. In this case, the elevated turbulent energy might be a precursor of the fueling of these luminous QSOs instead of a consequence of QSO feedback.
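The conversion from ϵ to a heating rate per unit volume involves only the mass density. A short sketch of that arithmetic is given below; the particle densities and the mean atomic weight are the representative ranges quoted above, and using ρ ≈ μ m_p n is itself a simplifying assumption for this illustration.

import numpy as np

M_P = 1.67262192e-24   # proton mass, g

def q_turb(eps_cm2_s3, n_cm3, mu=0.588):
    # Q_turb = rho * eps in erg cm^-3 s^-1, with rho ~ mu m_p n
    return mu * M_P * n_cm3 * eps_cm2_s3

eps = 0.05   # cm^2 s^-3, representative of the range quoted above
for label, n in [("cool, n=1 cm^-3", 1.0), ("cool, n=40 cm^-3", 40.0),
                 ("hot, n=0.01 cm^-3", 0.01), ("hot, n=1 cm^-3", 1.0)]:
    print(label, q_turb(eps, n), "erg cm^-3 s^-1")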
For the first time, we are able to determine the turbulent energy transfer rate in the diffuse cosmic gas over seven decades in spatial scale, from ∼ 0.01 pc to ∼ 100 kpc, but the measurements rely on two distinct approaches at different spatial scales. In particular, in the circumgalactic space, where we see a three-orders-of-magnitude difference in Q_turb from large to small scales, this distinction is also accompanied by differences in the way the turbulent energy is determined. The lack of overlapping spatial scales probed by emission and absorption prevents us from forming a consistent picture of the turbulent energy cascade in galaxy halos, while systematic uncertainties remain when comparing turbulent flows based on VSF measurements with those from absorption-line analyses. In our previous work, we discussed uncertainties associated with VSF measurements due to either projection effects <cit.> or PSF smoothing (see further discussion in <ref>). While the smallest area accessible in emission measurements is limited by the PSF size of the data, the absorption line technique averages cloud properties over a beam size that is dictated by the black hole accretion disk size (i.e., on the order of ≪1 pc). At the same time, absorption-line analyses are subject to uncertainties in the photo-ionizing background radiation field. Future observations using AO-assisted ground-based IFSs and/or space-based IFSs can extend the small scales probed in the VSFs to ≲ 10 kpc for the line-emitting gas, bridging the gap in spatial scales accessible between emission and absorption studies. A sample of systems with both extended line emission and high-resolution absorption line data will also greatly aid in the investigation of this discrepancy in Q_turb. §.§ Velocity dispersion along the line of sight versus in the plane of the sky In Figure <ref>, we show the velocity dispersion in the plane of the sky, σ_pos, versus the mean velocity dispersion along the line of sight, ⟨σ_los⟩. σ_pos is quantified as the standard deviation of the line-of-sight velocities of the spaxels included in the VSF measurements (see the discussion in Section <ref> and the velocity maps in Figures <ref>–<ref>), and ⟨σ_los⟩ is the mean line width (obtained through a single-component Gaussian fit) for the same set of spaxels. We show results for both [O2] and [O3] emission, as they can differ in σ_pos and ⟨σ_los⟩ due to the different footprints of the two lines. The statistical uncertainties of both velocity dispersions, estimated through Monte Carlo resampling, are small and are not shown in Figure <ref>. The measurements of σ_pos before and after removing the unidirectional velocity gradient in the plane of the sky are consistent with each other to within ≈ 20 km/s. Therefore, for clarity, we only show the values obtained using the directly measured [O2] and [O3] velocity maps. It can be seen that for all nebulae in our sample, σ_pos≲⟨σ_los⟩. This observation agrees with the general trend seen in spatially-resolved data for H2 regions, where the velocity dispersion along the line of sight exceeds the velocity dispersion in the plane of the sky <cit.>. One possible explanation for this trend is the smoothing effect due to multiple line-emitting clouds along the line of sight contributing to the observed velocity centroid, leading to a reduced velocity dispersion in the plane of the sky. In addition, the contribution from bulk/coherent motions along the line of sight will also result in a larger σ_los.
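For concreteness, the comparison between the two dispersions can be written in a few lines. In the sketch below, v_los_map holds the velocity centroids and sigma_los_map the single-component Gaussian line widths for the same spaxels; the toy inputs and the simple nan-masking stand in for the actual spaxel selection, and the last function anticipates the simple bulk-flow estimate discussed in the next paragraph.

import numpy as np

def dispersion_comparison(v_los_map, sigma_los_map):
    # sigma_pos = std of velocity centroids; <sigma_los> = mean Gaussian line width
    m = np.isfinite(v_los_map) & np.isfinite(sigma_los_map)
    return np.std(v_los_map[m]), np.mean(sigma_los_map[m])

def vgrad_los(sigma_pos, mean_sigma_los, l_los_kpc):
    # bulk-flow gradient from <sigma_los>^2 = sigma_pos^2 + (v_grad * L_los)^2, in km/s/kpc
    return np.sqrt(max(mean_sigma_los ** 2 - sigma_pos ** 2, 0.0)) / l_los_kpc

# toy maps in km/s; real inputs would come from the line-fitting step
rng = np.random.default_rng(2)
v_map = rng.normal(0.0, 80.0, (40, 40))
w_map = rng.normal(120.0, 15.0, (40, 40))
s_pos, s_los = dispersion_comparison(v_map, w_map)
print(s_pos, s_los, vgrad_los(s_pos, s_los, 30.0))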
To investigate this possibility, we adopt the simple assumption that ⟨σ_los⟩^2 = [σ_pos^2 + (v_grad,los× L_los)^2], where v_grad,los is the velocity gradient along the line of sight. We approximate the depth of the nebula, L_los, as the square root of the nebula size (see Table <ref>), and derive a velocity gradient of v_grad,los≈ 0.5–3 km/s/kpc for the different nebulae. The range of this derived v_grad,los is in qualitative agreement with the best-fitting velocity gradient in the plane of the sky (see Table <ref>), suggesting that bulk flows along the line of sight may be non-negligible. In contrast, the velocity dispersion across the plane of the sky provides a robust tracer of the underlying velocity variance at scales ≳10 kpc, particularly when a credible model for the coherent shear in the plane of the sky can be obtained from the spatially-resolved velocity measurements, as pointed out by previous studies <cit.>. §.§ Power-law turnover scale for the VSFs As discussed in Section <ref>, the shapes of the VSFs generally do not follow a single power-law across the entire range of scales probed. While additional structures in the VSFs may provide hints of different physical processes present in the nebulae, we caution that the limited nebula size and signal-to-noise can hinder a robust interpretation of these structures. In particular, we note that there is a moderate correlation (with a Spearman's r coefficient of 0.7) between the VSF turnover scale r_2 (see Section <ref>) and the size of the nebula for both the [O2] and [O3] emission, as shown in Figure <ref>. This correlation indicates that the deviation of the VSF from a single power-law at larger scales is in part due to the limited nebula size probed by the data given the detection limit. Previous studies have also shown that the boundaries of clouds/nebulae can artificially flatten the VSFs at large scales, mimicking the signature of energy injection and affecting the interpretation of the data <cit.>. In addition, the smooth transition between the inertial range and the energy injection scale can cause the VSF slopes to taper off at a scale as small as half of the true energy injection scale <cit.>, further complicating the interpretation of a flattening signal in the VSFs. Given the abovementioned caveats, we refrain from interpreting r_2 or the VSF flattening scales in our sample as indicative of energy injection scales. However, Figure <ref> indicates no discernible correlations between the constrained 2nd-order power-law slopes (γ_2) and the VSF turnover scale (r_2) or nebula size, underscoring the robustness of the γ_2 measurements. Measurements from local H2 regions reported by <cit.> result in larger γ_2 values on average (shown in the blue shaded region in Figure <ref>), suggesting elevated Mach numbers in local H2 regions and/or increased susceptibility to projection smoothing in their observations (for more discussion of projection effects see Section <ref> below). §.§ Limitations and caveats A notable limitation of the present study arises from the projection effect inherent in the data. Several studies have investigated how VSFs are affected by the use of projected measurements. Analytically, <cit.> derived that for volume-filling gas, the projection effect depends on the spatial scales probed: VSFs are steepened when measuring separation scales smaller than the depth of the cloud along the line of sight, while the VSF slopes recover to the intrinsic value at scales exceeding the cloud depth.
This result is sometimes referred to as the “projection smoothing” effect and was independently confirmed by <cit.> and <cit.>. On the other hand, <cit.> used numerical simulations to show that for spatially confined structures (e.g., isolated filaments), the projection effect flattens the VSFs. As we have discussed in Section <ref>, the dynamical state of the nebulae examined in this work indicates that the cool line-emitting gas is embedded in the hot ambient medium and traces the turbulent motions of the hot, volume-filling gas. Therefore, our measurements are more likely affected by the “projection smoothing” effect, suggesting that the intrinsic VSF slopes may be flatter than the values reported in Table <ref>, which still supports our interpretation of subsonic/transonic gas motions. In addition to whether the gas is volume-filling or spatially confined, in reality the projection effect will also depend on detailed properties of the system, such as density/emissivity fluctuations and the three-dimensional geometry of the gas structure. Detailed investigations using high-resolution numerical simulations are needed to robustly quantify and calibrate the projection effect in more realistic environments. Another main limitation of the current study is the restricted dynamic range of the VSF measurements, which is confined to approximately one decade or less in projected distance separation. This restriction prevented us from obtaining robust constraints on the VSF slopes for several systems in our sample. While the largest separation is determined by the nebula size given the detection threshold, the smallest separation accessible in the data is dictated by the spatial sampling (i.e., the angular size per spatial pixel) as well as the PSF size. As ground-based observations without adaptive optics (AO) are fundamentally limited by atmospheric seeing, improving the dynamic range towards small scales requires conducting AO-assisted observations on the ground (e.g., with VLT/ERIS in the infrared and using the Narrow-Field-Mode on VLT/MUSE in the optical) with longer exposure times to reach sufficient signal-to-noise. Alternatively, space-based IFSs such as JWST/NIRSpec, with unprecedented spatial resolution, have also started delivering an increasing sample of spatially-resolved observations of the CGM <cit.>. Finally, with a fixed PSF size, targeting systems at lower redshifts with a higher angular-to-physical size ratio can also help increase the VSF dynamic range. However, few extended (≳50 kpc) nebulae have been discovered at z<0.5 <cit.>, and additional effort is required to expand the sample size of low-redshift extended nebulae. § CONCLUSION This paper presents an ensemble study of the turbulent motions in eight extended nebulae surrounding seven QSOs at z≈0.5–1.1. Using the [O2] and/or [O3] emission lines, we measure the line-of-sight velocity fields and construct the velocity structure functions (VSFs), probing the dynamical state of the gas illuminated by the QSO radiation field at scales ≈10–100 kpc.
Our main conclusions are: * Five out of the eight nebulae in our sample have a constrained power-law slope of the 2nd-order VSFs, γ_2, between ≈0.3–1.1, while the other three nebulae have loose constraints corresponding to 95% upper limits of ≲ 0.5–1.5, as shown in Figures <ref> and <ref> and discussed in Section <ref>.To within the 2-σ measurement uncertainty, the slopes are either consistent with the expectation from Kolmogorov turbulence or flatter, suggesting that the gas motions are subsonic.* Removing a best-fitting unidirectional velocity gradient from the line-of-sight velocity maps flattens the VSFs in general, but also leads to larger uncertainties due to a reduced dynamic range in the VSFs that can be used for a single power-law fit.The results before and after removing a velocity gradient are consistent within the range of the uncertainty, as shown in Figure <ref>.* Complementing the measurements for the 2nd-order VSF slopes, γ_2, the ESS slope ratios γ_p/γ_3 for p=1–6 are also in agreement with the expectation of subsonic turbulence, as shown in Figure <ref> and discussed in Section <ref>.The only exception is the nebula surrounding the QSO field HE0238-1904, where the γ_p/γ_3 ratios are consistent with the supersonic MHD turbulence prediction by <cit.> both before and after removing a uni-directional gradient field.A more detailed investigation of this field and a larger sample size are required to shed light on whether this field is a special case.* The subsonic motions in the QSO nebulae suggest that the line-emitting cool clouds with T∼ 10^4 K are embedded within a hot ambient medium with T∼ 10^6–10^7 K.Adopting the sound speed of the hot medium of c_ s, hot≈500 km/s, we estimate the Mach number of the cool clouds to be ≈ 0.2–0.5, consistent with the observed VSF properties. The subsonic nature of gas motions supports a scenario where the cool clumps condense out of the hot gas, carrying the turbulent memory of the hot halo and serving as tracers of hot phase dynamics (see Section <ref>).* No discernible differences are seen in VSF properties between radio-loud and radio-quiet QSO fields, suggesting that the collimated jets and their inflated bubbles do not play a critical role in shaping the dynamical state of the gas on ∼tens of kpc scales. * Comparing the mean velocity dispersion along the line of sight, ⟨σ_ los⟩, and the velocity dispersion observed in the plane of the sky,σ_ pos, we find that ⟨σ_ los⟩≳σ_ pos for all fields (Figure <ref>).We discuss that projection effects and bulk motion along the line of sight are possible sources for the larger dispersion (see Section <ref>).* The turbulent heating rate per unit volume, Q_ turb, in the QSO nebulae is estimated to be ∼ 10^-26–10^-22 erg cm^-3 s^-1 for the cool phase and ∼ 10^-28–10^-25 erg cm^-3 s^-1 for the hot phase at scales ≈ 10–60 kpc.This range is in agreement with the measurements for intracluster medium and star-forming molecular clouds but is ∼ 1000 times higher than that estimated for Milky Way Complex C and cool circumgalactic gas clumps probed in low-ion absorption lines, as shown in Figure <ref> and discussed in Section <ref>. While the difference in Q_ turb might be a signpost for AGN/stellar feedback, a robust investigation into the systematics of the different measurements is required to shed light on this discrepancy. 
Future observations of extended nebulae using AO-assisted IFSs on the ground (e.g., the MUSE Narrow-Field-Mode) and/or space-based IFSs (e.g., the JWST/NIRSpec IFU) will help extend the small scales probed in VSFs to ≲10 kpc, improving the robustness of the VSF constraints and bridging the gap between Q_turb measured by emission and absorption techniques. The findings of this ensemble study align with the recently emerging picture of the multiphase CGM, in which different gas phases are intricately connected throughout their formation and evolution history. Turbulence plays a critical role in facilitating nonlinear interactions within the gaseous halos, which in turn promote further development of turbulence. For shaping the dynamical properties of gas traced by [O2] and [O3] at scales ≳10 kpc, environmental effects (e.g., tidal interactions, galaxy mergers, gas accretion) may dominate over QSO feedback. These findings can be directly compared with high-resolution numerical simulations to shed light on the detailed physical mechanisms that govern the driving and development of turbulence in the CGM. We thank Fausto Cattaneo and Jenny Greene for helpful discussions throughout this work. We also thank Yuan Li for constructive feedback on the discussions of this paper. HWC and MCC acknowledge partial support from NSF AST-1715692 grants. ZQ acknowledges partial support from NASA ADAP grant 80NSSC22K0481. JIL is supported by the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program. SC gratefully acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation programme grant agreement No 864361. FSZ acknowledges the support of a Carnegie Fellowship from the Observatories of the Carnegie Institution for Science. EB acknowledges support by NASA under award number 80GSFC21M0002. This research has made use of the services of the ESO Science Archive Facility and the Astrophysics Data Service (ADS)[<https://ui.adsabs.harvard.edu/classic-form>]. The analysis in this work was greatly facilitated by the following packages: <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. This work was completed with resources provided by the University of Chicago Research Computing Center. § ABSENCE OF THE LUMINOSITY–VELOCITY DISPERSION RELATION For local line-emitting regions as well as galaxies (at both low and high redshifts), an L-σ relation, corresponding to a correlation between the luminosity of the region/galaxy in a certain emission line (such as Hα or Hβ) and its velocity dispersion, is commonly observed <cit.>. In our QSO nebulae sample, however, we do not observe such a correlation, as shown in Figure <ref>. The contrast here likely arises from the different emission mechanisms of recombination lines versus collisionally excited lines, the former of which are more tightly coupled to the total mass and stellar feedback in the regions/galaxies. In addition, QSOs are variable, and the number of ionizing photons output by QSOs is subject to significant changes on timescales of ≲ tens of Myr <cit.>, further weakening any correlation between the luminosity of the surrounding nebulae and the velocity dispersion of the gas. § MEASUREMENTS FOR INDIVIDUAL NEBULAE Here we present the VSF measurements for the individual nebulae in PKS0405-123, PKS0552-640, and HE0238-1904. The measurements for PKS0454-22, J0454-6116, and J2135-5316 can be found in the Appendix of our previous work.
http://arxiv.org/abs/2310.18406v1
{ "authors": [ "Mandy C. Chen", "Hsiao-Wen Chen", "Michael Rauch", "Zhijie Qu", "Sean D. Johnson", "Joop Schaye", "Gwen C. Rudie", "Jennifer I-Hsiu Li", "Zhuoqi", "Liu", "Fakhri S. Zahedy", "Sebastiano Cantalupo", "Erin Boettcher" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231027180014", "title": "An ensemble study of turbulence in extended QSO nebulae at $z\\approx0.5$--1" }
[email protected] The Kitaev model on a honeycomb lattice may provide a robust topological quantum memory platform, but finding a material that realizes the unique spin liquid phase remains a considerable challenge. We demonstrate that an effective Kitaev Hamiltonian can arise from a half-filled Fermi-Hubbard Hamiltonian where each site can experience a magnetic field in a different direction. As such, we provide a method for realizing the Kitaev spin liquid on a single hexagonal plaquette made up of twelve quantum dots. Despite the small system size, there are clear signatures of the Kitaev spin-liquid ground state, and there is a range of parameters where these signatures are predicted, allowing a potential platform where Kitaev spin-liquid physics can be explored experimentally in quantum dot plaquettes. Engineering the Kitaev spin liquid in a quantum dot system Sankar Das Sarma January 14, 2024 ========================================================== § INTRODUCTION Quantum spin liquids are a phase of matter characterized by the absence of long-ranged order, an emergent gauge field, long-ranged entanglement, topological order, and the fractionalization of spins <cit.>. Despite several promising candidate materials coming from frustrated Kagome <cit.> and triangular <cit.> lattices, there remains a lack of consensus about the nature of their ground-state phase. Another route to spin-liquid materials comes out of the Kitaev model on the honeycomb lattice <cit.>, an exactly solvable playground for exploring the physics of spin liquids and non-Abelian anyons <cit.>. In the gapless isotropic phase, once a magnetic field introduces a small gap, the low-energy excitations behave as non-Abelian anyons, and these anyons could form the basis for perfect topological memory <cit.>. The model became physically relevant after Jackeli and Khaliullin found that certain materials, built from 4d and 5d transition-metal ions with the correct geometry, may have a significant Kitaev term <cit.>. The search for a material realization of the Kitaev spin liquid, a “Kitaev material,” has now generated an enormous amount of research on a host of compounds such as Na2IrO3 <cit.>, Li2IrO3 <cit.>, H3LiIr2O6 <cit.>, Na2Co2TeO6 <cit.>, and α-RuCl3 <cit.>. For RuCl3 in particular, the smoking-gun signature of a Kitaev spin liquid, a quantized thermal Hall effect, has been claimed to have been measured <cit.>, but convincingly reproducing the results has been difficult and questions remain <cit.>. Within the Kitaev materials, there remain formidable challenges: most materials enter a long-range ordered phase at low temperature, implying considerable non-Kitaev interactions, and the underlying effective spin Hamiltonian is never known exactly <cit.>, particularly since the fundamental Hamiltonian is an electronic and not a spin Hamiltonian. In fact, it is unclear whether naturally occurring solid-state materials can manifest the precise Hamiltonian necessary for producing the quantum spin liquids described by theoretical models, including the Kitaev model. There is, however, an alternate way of realizing spin liquids: using engineered structures containing the requisite spin Hamiltonian by design, i.e., quantum simulators. Advances in these systems allow for much more detailed probing of the proposed spin-liquid state. In two-dimensional Rydberg arrays it was theoretically proposed, and then experimentally demonstrated, that an arrangement of atoms on the bonds of a Kagome lattice can lead to some topological ordering <cit.>.
Although the long-ranged nature of the interaction, the non-exactness of the Rydberg blockade, and the non-equilibrium nature of the state complicate the interpretation of the experiment <cit.>, this result represents a definitive advance in spin-liquid experiments. In principle, within fully programmable quantum computers it is possible to build the Kitaev model <cit.> and a similar model with many of the same properties, the toric code <cit.>. There have been multiple other proposals for realizing Kitaev physics using a Floquet drive <cit.> and trapped ions <cit.>, but the former requires significant temporal coherence and the latter has stringent constraints on the relevant timescales. There is thus considerable interest in the engineered realization of the Kitaev spin liquid using quantum simulators. In this Letter, we discuss how spin-liquid physics can be explored in small quantum-dot systems by precisely creating the Kitaev model on a single hexagonal plaquette (Fig. <ref>). Quantum dot systems are a potential spin qubit quantum computing platform where full control has been demonstrated for six sites <cit.>, but where systems with more dots (with as many as sixteen having been fabricated so far <cit.>) can be considered quantum simulators of Hubbard-model physics <cit.>. In fact, semiconductor quantum dot based spin qubits are considered to be a leading quantum computing platform because of their scalability, fast all-electrical operations, and long coherence. Even though the systems are small, they have already provided experimental evidence for Nagaoka ferromagnetism <cit.> and the small-system analog of the Mott transition <cit.>, and they could provide evidence for flat-band ferromagnetism in the near future <cit.>. It is already possible to apply a magnetic field gradient using micromagnets <cit.>, and our main assumption is that, as the technology improves, it will be possible to place each site in its own effective magnetic field, which is also necessary for single-qubit operations in quantum computing. Under this assumption, we will derive an effective Hamiltonian that can be tuned to be exactly the Kitaev model on a single hexagon. The physics we propose is no more challenging than fabricating the spin qubit based quantum computing platform, which is a huge activity in more than a dozen research centers and industrial labs including Intel Corporation <cit.>. Since a “phase” is only defined in the thermodynamic limit, we cannot claim to ever create a Kitaev spin-liquid “phase” in such a small system (which is a problem intrinsic to all quantum simulator platforms). However, we find that the unique properties of the Kitaev model allow spin-liquid signatures to be manifest even in this small system for a range of parameters around the exact Kitaev point; that is, the system does not have to be perfectly fine-tuned. Our construction is not limited to a single hexagonal plaquette and can be extended straightforwardly to a many-unit-cell system. § THEORY We start by explaining the construction on a single hexagonal plaquette; see Fig. <ref>. In addition to six sites on the vertices of the hexagon, which will interact via an effective Kitaev Hamiltonian, we have an additional six sites that live on the bonds or edges of the hexagon and that will be frozen/integrated out.
We will demonstrate that this system, which can be fabricated using existing spin qubit technology, is sufficient to see Kitaev-spin-liquid-like physics. Because we are considering an application to experimental quantum dot systems, the Hamiltonian for our twelve-site system is given, by construction, by the Fermi-Hubbard model in a magnetic field, H=U∑_i n_i↑ n_i↓ + ∑_ij,σ t_ij c_iσ^† c_jσ + 1/2∑_i,σ,σ' h_i · c_iσ^†σ_σ,σ' c_iσ', where t_ij=t_ji^* are not necessarily real. We assume that the system is half-filled, which is easy to control in spin qubit quantum dot structures. We only allow for nearest-neighbor hopping t_1 and hopping between nearest-neighbor vertices t_2, since longer-distance hopping falls off exponentially. We are envisioning two different magnetic field strengths: |h_B| for the sites positioned on the bonds/edges of the hexagon and |h_V| for the sites positioned on the vertices. The direction of the magnetic field follows the pattern described in Fig. <ref>: each edge is labeled by one of three orthogonal directions, x, y, or z; the field on a bond site points in that direction (e.g., h_2 = -h_B ẑ), and the field on a vertex points along the sum of the directions of the adjacent edges [e.g., h_1 = h_V (ẑ + x̂)]. We use the hopping strength |t_1| as the energy unit. In order to create a single Kitaev plaquette, we will show self-consistently that we need the scalings U ≫ |h_B| ≫ |t_1|^2/U, |h_V|∼ |t_1|^2/U, and |t_2|∼ |t_1|^2/√(U|h_B|). Again, this is, in principle, achievable in semiconductor spin qubit platforms, where U and the hoppings are the largest and the smallest energy scales, respectively, and the magnetic field is experimentally tunable. We first perform perturbation theory in |h_B|/U, |t_ij|/U following <cit.>. The full details of the calculation are given in the Supplemental Material (SM) <cit.>, and we end up with the effective Hamiltonian of localized spins, S_i = σ_i/2, at 𝒪(U^-3): H_eff,spin = 1/2∑_i(1-2|t_1|^2/U^2) h_i ·σ_i + ∑_⟨ ij⟩|t_1|^2/U(1-4|t_1|^2/U^2+1/4|h|^2/U^2)(σ_i ·σ_j -1) + ∑_⟨⟨ i i'⟩⟩_B|t_1|^4/U^3 (σ_i ·σ_i' -1) + ∑_⟨⟨ j k⟩⟩_V( |t_2|^2/U +|t_1|^4/U^3) (σ_j ·σ_k -1) + ∑_⟨ ij⟩|t_1|^2/2U^2(h_j ·σ_i + h_i ·σ_j) + 3 ∑_(j,i,k)_B sin(ϕ_B) |t_1^2 t_2|/U^2 σ_i · (σ_j ×σ_k), where ⟨⟨ jk⟩⟩_V (⟨⟨ i i'⟩⟩_B) indicates next-nearest-neighbor pairs between vertex (bond) sites, and (j,i,k)_B indicates a sum over bonds where j,k are the vertex sites and i is the bond site. The value of ϕ_B is how much flux, in units of the flux quantum, pierces the triangle made up of those three sites; in our geometry, ϕ_B=0, but in other geometries this term may exist. However, if ϕ_B ≪ 1, this term is likely negligible. Note that we do not have the ring-exchange term because it requires a 4-cycle to exist in the hopping. We also have made use of |h_V| ∼ |t_1|^2/U and |t_2| ∼ |t_1|^2/√(U|h_B|) to ignore terms that are already higher-order in 1/U. We now integrate out the bond sites to be left with an effective Hamiltonian for just the vertex sites. We fix the magnetic field on each bond to be h_i = -|h_B|α̂, where site i is on an α bond.
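To make the field geometry of Eq. (1) explicit, the sketch below tabulates the twelve site fields for a labeling in which the odd indices 1–11 denote vertex dots and the even indices 2–12 denote bond dots. The assignment of x, y, and z to the six edges is inferred from the two examples quoted in the text (h_2 = -h_B ẑ and h_1 = h_V(ẑ + x̂)) together with the plaquette operator given below, and should be checked against Fig. 1; the numerical values are placeholders chosen only to respect the quoted scalings.

import numpy as np

AXIS = {"x": np.array([1.0, 0.0, 0.0]),
        "y": np.array([0.0, 1.0, 0.0]),
        "z": np.array([0.0, 0.0, 1.0])}

# bond dots 2,4,...,12 sit between vertex pairs (1,3),(3,5),...,(11,1);
# the x/y/z labels are an inferred assumption consistent with h_2 and h_1 above
BONDS = {2: ((1, 3), "z"), 4: ((3, 5), "y"), 6: ((5, 7), "x"),
         8: ((7, 9), "z"), 10: ((9, 11), "y"), 12: ((11, 1), "x")}

def field_pattern(h_b, h_v):
    # Return {site: 3-vector field}: bond dots get -|h_B| alpha_hat,
    # vertex dots get h_V times the sum of the adjacent edge directions.
    h = {b: -h_b * AXIS[lab] for b, (pair, lab) in BONDS.items()}
    for vtx in (1, 3, 5, 7, 9, 11):
        labs = [lab for b, (pair, lab) in BONDS.items() if vtx in pair]
        h[vtx] = h_v * sum(AXIS[lab] for lab in labs)
    return h

# placeholder parameters respecting U >> |h_B| >> t1^2/U and h_V ~ -2 t1^2/U
t1, u_hub, h_b = 1.0, 33.0, 1.0
h_v = -2.0 * t1 ** 2 / u_hub
fields = field_pattern(h_b, h_v)
print(fields[2], fields[1])                    # -h_B z_hat and h_V (z_hat + x_hat)
print("t1^2/U =", t1 ** 2 / u_hub, "|h_B| =", h_b, "U =", u_hub)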
We perform perturbation theory in |t_1|^2/(|h_B|U) again to 𝒪(U^-3) (with ϕ_B=0): H_V,eff = 1/2∑_j∈ V h_eff,j·σ_j + ∑_⟨ jk⟩_V,α[ J σ_j ·σ_k + Kσ_j^ασ_k^α] + C, with h_eff,j =∑_i_α∈ n.n.(j) α̂[ h_V(1-|t_1|^2/2U^2) + 2|t_1|^2/U(1-4 |t_1|^2/U^2 + |h_B|^2/4U^2-|h_B|/2U + 2|t_1|^2/U|h_B|)], J =|t_2|^2/U + |t_1|^4/U^3- 2 |t_1|^4/U^2 |h_B|+2|t_1|^4 h_V/U^2|h_B|^2-4|t_1|^6/U^3|h_B|^2, and K= 2|t_1|^4/U^2 |h_B|-4|t_1|^4 h_V/U^2|h_B|^2+8|t_1|^6/U^3|h_B|^2, where ⟨ jk⟩_V,α indicates nearest-neighbor pairs of vertices that are connected via an α bond, and i_α∈ n.n.(j) indicates the nearest neighbors of site j that are on an α bond. The constant C is provided in the SM <cit.>. Since we want the field strength |h_eff,j|/K ≲ 1 and the Heisenberg coupling J/K ≲ 1, we need |t_2| ≲√(2)|t_1|^2/√(U|h_B|) and h_V ≈ -2|t_1|^2/U, which justifies the scaling we used to derive Eqs. (<ref>) and (<ref>). Although we have computed expressions for these quantities to 𝒪(U^-3), we see that the Kitaev coupling is 𝒪(U^-2), implying that, even if ϕ_B=0 turns out to be a poor assumption, our construction still works for large enough U. Despite the notion of a phase being properly defined only in the thermodynamic limit, there is a clear-cut signature of a Kitaev-spin-liquid-like “phase” even in this small plaquette. First, there is an operator defined on each plaquette that commutes with the Kitaev Hamiltonian. For our single plaquette, it is given by W_P =σ_1^y σ_3^x σ_5^z σ_7^y σ_9^x σ_11^z =± 1, where the site indices are from Fig. <ref>. The value of W_P=± 1 is a signature of the emergent ℤ_2 gauge field of the Kitaev model <cit.>. Second, the spin-spin correlators are short-ranged <cit.>: the only non-zero static S^z-S^z correlators for our system at the Kitaev point are ⟨ S_1^z S_3^z⟩=⟨ S_7^z S_9^z⟩ =-1/6 and ⟨ S_j^z S_j^z⟩ = 1/4, with S = σ/2. § RESULTS In order to verify our theory and clarify possible experimental signatures, we use the Density-Matrix Renormalization Group (DMRG) <cit.> method to directly find the ground state of Eq. (<ref>) and compare with exact diagonalization (ED) on six sites given by Eq. (<ref>). For DMRG, we use <cit.> with a bond dimension of χ=4096, large enough to describe the ground state exactly. In Fig. <ref>, we plot the plaquette operator, ⟨ W_P⟩, and ⟨ S_i^z S_j^z⟩ for a z-bond (i=1,j=3), an x-bond (i=1,j=11), and the farthest spins (i=1,j=7). We set t_1=h_B=1 with t_2 and h_V given by h_V= -(A_1+A_2 h_eff/K)/(A_3 + 4 A_4 h_eff/K); t_2= ±√(U(J/K(A_2-4A_4 h_V)-2A_4 h_V-A_5)); A_1= 2 |t_1|^2/U( 1+ h_B^2-16|t_1|^2/4U^2+ 4|t_1|^2-h_B^2/2U|h_B|); A_2= 2 |t_1|^4/U^2 |h_B| + 8 |t_1|^6/U^3 h_B^2; A_3 = 1- 2|t_1|^2/U^2; A_4=|t_1|^4/U^2 h_B^2; A_5 = |t_1|^4/U^3 - 2|t_1|^4/U^2|h_B| - 4|t_1|^6/U^3 h_B^2, so as to reproduce a targeted value of J/K and |h_eff|/K with errors at higher order than 𝒪(U^-3). We are able to verify that DMRG and ED have ground-state energies that agree to 𝒪(U^-4) when the parameters are specified in this way (see SM <cit.>). We use real values of the hoppings so as to avoid additional parameters needed to compute the flux, ϕ_B.
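The six-spin model of Eq. (2) is small enough (2^6 = 64 states) to diagonalize directly, which is useful for getting a feel for the observables discussed next. The sketch below builds H_V,eff for given values of (K, J, h_eff) using the bond-type pattern implied by the quoted W_P, and evaluates ⟨W_P⟩ and the z-bond correlator ⟨S_1^z S_3^z⟩ in the ground state. The site ordering, the chosen parameter values, and the small symmetry-breaking field are illustrative assumptions and are not the DMRG/ED setup used for the figures in this paper; near the Kitaev point the printed numbers should approach the values quoted above.

import numpy as np
from functools import reduce

I2 = np.eye(2)
PAULI = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def op(paulis, n=6):
    # Tensor product acting with the given Paulis, identity elsewhere.
    return reduce(np.kron, [PAULI[paulis[i]] if i in paulis else I2 for i in range(n)])

# vertices 0..5 stand for dots 1,3,5,7,9,11; bond types follow the quoted W_P
BONDS = [(0, 1, "z"), (1, 2, "y"), (2, 3, "x"), (3, 4, "z"), (4, 5, "y"), (5, 0, "x")]
WP_LABEL = ["y", "x", "z", "y", "x", "z"]

def h_v_eff(K, J, heff):
    # Effective vertex Hamiltonian: Kitaev + Heisenberg + field along the two bond axes.
    H = np.zeros((64, 64), dtype=complex)
    for i, j, a in BONDS:
        H += K * op({i: a, j: a})
        for b in "xyz":
            H += J * op({i: b, j: b})
    for v in range(6):
        for a in [lab for m, n_, lab in BONDS if v in (m, n_)]:
            H += 0.5 * heff * op({v: a})
    return H

def ground_state_observables(K, J, heff):
    w, vecs = np.linalg.eigh(h_v_eff(K, J, heff))
    psi = vecs[:, 0]
    w_p = reduce(np.matmul, [op({v: WP_LABEL[v]}) for v in range(6)])
    sz_sz = 0.25 * op({0: "z", 1: "z"})          # S_1^z S_3^z on a z bond
    expect = lambda O: float(np.real(psi.conj() @ O @ psi))
    return w[0], expect(w_p), expect(sz_sz)

# near the Kitaev point, with a small field to lift degeneracies (illustrative values)
print(ground_state_observables(K=1.0, J=0.0, heff=0.05))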
Additionally, when |h_eff|/K 0, there are points where the derivatives of these observables are discontinuous, and, in the region J/K≪ 1, they take a value close to the Kitaev value. In a large system, these features are consistent with the magnetic field gapping out the itinerant Majoranas and providing a gap that J must overcome; in our system, we have verified that some of the degeneracy seen at the J/K, |h_eff|/K=0 point are lifted in the presence of a magnetic field implying that the same interpretation might hold. Taken together, the numerics demonstrate that, even though strictly speaking the Kitaev Hamiltonian only arises at a single point, it is possible to see evidence of the Kitaev state in a range of parameters meaning that the construction is less fine-tuned than anticipated, i.e., there is some robustness.§ CONCLUSION From the above results (and additional numerical results shown in <cit.>), we argue that if |J|/K ≲ 0.02 and |h_eff|/K ≲ 0.75, our twelve-site system should exhibit Kitaev-spin-liquid-like physics. The experimental setup therefore does not need to be perfectly fine-tuned to observe this physics: for U/|t_1|=33 and |t_1|/|h_B|=1, these values roughly correspond to a range of 0.0615 ≤ h_V≤ 0.0650 and 0.2565≤ t_2≤ 0.2620 with K≈ 0.00229. This range reveals that h_V and t_2 need only be accurate at the 5% and 2% level, respectively, which should be experimentally controllable in semiconductor quantum dot structures. It is also not necessary to prepare the ground-state of the system. As long as the energy is well-below |h_B|, the state will always have short-ranged spin-spin correlators as this property is true for every eigenstate in the Kitaev model <cit.>.Another advantage of our construction is that the field on the vertex sites does not need to be tuned individually. If the field on the bond sites decays at just the right rate, it can provide the necessary field as h_j∈ V points in the same direction as the sum of the neighboring h_j∈ B.The main perturbationwe are neglecting is hopping between the bond sites, t_3, but this term will only create corrections in Eq. (<ref>) at 𝒪(U^-3). If U is large enough or if the geometry can be chosen to make t_3 small, our results will still hold. Although our proposal should realize a Kitaev spin liquid plaquette in principle, there are many open questions.For example, what are the necessary conditions and system sizes to observe the non-Abelian anyon braiding signatures?What are the most suitable experimental signatures of these anyons?How to realize topological qubits usinganyon braiding in such Kitaev plaquettes?Our work should motivate both experimental work to realize our proposed Kitaev quantum dot plaquette and theoretical work to answer these questions.§ ACKNOWLEDGEMENTSTC is supported by a University of California Presidential Postdoctoral Fellowship and SDS is supported by the Laboratory for Physical Sciences. 
SDS thanks the Kavli Institute for Theoretical Physics at UCSB, which is funded by the National Science Foundation, for its hospitality through the program “Quantum Materials with and without Quasiparticles.” Use was made of the computational facilities administered by the Center for Scientific Computing at the CNSI and MRL (an NSF MRSEC; DMR-1720256) and purchased through NSF CNS-1725797.
[Savary and Balents (2016)] L. Savary and L. Balents, Quantum spin liquids: a review, Reports on Progress in Physics 80, 016502 (2016).
[Wen et al. (2019)] J. Wen, S.-L. Yu, S. Li, W. Yu, and J.-X. Li, Experimental identification of quantum spin liquids, npj Quantum Materials 4, 12 (2019).
[Broholm et al. (2020)] C. Broholm, R. Cava, S. Kivelson, D. Nocera, M. Norman, and T. Senthil, Quantum spin liquids, Science 367, eaay0668 (2020).
[Feng et al. (2017)] Z. Feng, Z. Li, X. Meng, W. Yi, Y. Wei, J. Zhang, Y.-C. Wang, W. Jiang, Z. Liu, S. Li, et al., Gapped spin-1/2 spinon excitations in a new kagome quantum spin liquid compound Cu3Zn(OH)6FBr, Chinese Physics Letters 34, 077502 (2017).
[Shores et al. (2005)] M. P. Shores, E. A. Nytko, B. M. Bartlett, and D. G. Nocera, A structurally perfect S=1/2 kagome antiferromagnet, Journal of the American Chemical Society 127, 13462 (2005).
[Shimizu et al. (2003)] Y. Shimizu, K. Miyagawa, K. Kanoda, M. Maesato, and G. Saito, Spin liquid state in an organic Mott insulator with a triangular lattice, Phys. Rev. Lett. 91, 107001 (2003).
[Itou et al. (2008)] T. Itou, A. Oyamada, S. Maegawa, M. Tamura, and R. Kato, Quantum spin liquid in the spin-1/2 triangular antiferromagnet EtMe3Sb[Pd(dmit)2]2, Phys. Rev. B 77, 104413 (2008).
[Law and Lee (2017)] K. T. Law and P. A. Lee, 1T-TaS2 as a quantum spin liquid, Proceedings of the National Academy of Sciences 114, 6996 (2017).
[He et al. (2018)] W.-Y. He, X. Y. Xu, G. Chen, K. T. Law, and P. A. Lee, Spinon Fermi surface in a cluster Mott insulator model on a triangular lattice and possible application to 1T-TaS2, Phys. Rev. Lett. 121, 046401 (2018).
[Xu et al. (2023a)] S. Xu, R. Bag, N. E. Sherman, L. Yadav, A. I. Kolesnikov, A. A. Podlesnyak, J. E. Moore, and S. Haravifard, Realization of U(1) Dirac quantum spin liquid in YbZn2GaO5, arXiv preprint arXiv:2305.20040 (2023).
[Kitaev (2006)] A. Kitaev, Anyons in an exactly solved model and beyond, Annals of Physics 321, 2 (2006).
[Nayak et al. (2008)] C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Non-Abelian anyons and topological quantum computation, Rev. Mod. Phys. 80, 1083 (2008).
[Jackeli and Khaliullin (2009)] G. Jackeli and G. Khaliullin, Mott insulators in the strong spin-orbit coupling limit: From Heisenberg to a quantum compass and Kitaev models, Phys. Rev. Lett. 102, 017205 (2009).
[Ye et al. (2012)] F. Ye, S. Chi, H. Cao, B. C. Chakoumakos, J. A. Fernandez-Baca, R. Custelcean, T. F. Qi, O. B. Korneta, and G. Cao, Direct evidence of a zigzag spin-chain structure in the honeycomb lattice: A neutron and x-ray diffraction investigation of single-crystal Na2IrO3, Phys. Rev. B 85, 180403(R) (2012).
[Comin et al. (2012)] R. Comin, G. Levy, B. Ludbrook, Z.-H. Zhu, C. N. Veenstra, J. A. Rosen, Y. Singh, P. Gegenwart, D. Stricker, J. N. Hancock, D. van der Marel, I. S. Elfimov, and A. Damascelli, Na2IrO3 as a novel relativistic Mott insulator with a 340-meV gap, Phys. Rev. Lett. 109, 266406 (2012).
[Hwan Chun et al. (2015)] S. Hwan Chun, J.-W. Kim, J. Kim, H. Zheng, C. C. Stoumpos, C. Malliakas, J. Mitchell, K. Mehlawat, Y. Singh, Y. Choi, et al., Direct evidence for dominant bond-directional interactions in a honeycomb lattice iridate Na2IrO3, Nature Physics 11, 462 (2015).
[Singh and Gegenwart (2010)] Y. Singh and P. Gegenwart, Antiferromagnetic Mott insulating state in single crystals of the honeycomb lattice material Na2IrO3, Phys. Rev. B 82, 064412 (2010).
[Singh et al. (2012)] Y. Singh, S. Manni, J. Reuther, T. Berlijn, R. Thomale, W. Ku, S. Trebst, and P. Gegenwart, Relevance of the Heisenberg-Kitaev model for the honeycomb lattice iridates A2IrO3, Phys. Rev. Lett. 108, 127203 (2012).
[Choi et al. (2012)] S. K. Choi, R. Coldea, A. N. Kolmogorov, T. Lancaster, I. I. Mazin, S. J. Blundell, P. G. Radaelli, Y. Singh, P. Gegenwart, K. R. Choi, S.-W. Cheong, P. J. Baker, C. Stock, and J. Taylor, Spin waves and revised crystal structure of honeycomb iridate Na2IrO3, Phys. Rev. Lett. 108, 127204 (2012).
[Liu et al. (2011)] X. Liu, T. Berlijn, W.-G. Yin, W. Ku, A. Tsvelik, Y.-J. Kim, H. Gretarsson, Y. Singh, P. Gegenwart, and J. P. Hill, Long-range magnetic ordering in Na2IrO3, Phys. Rev. B 83, 220403(R) (2011).
[Williams et al. (2016)] S. C. Williams, R. D. Johnson, F. Freund, S. Choi, A. Jesche, I. Kimchi, S. Manni, A. Bombardi, P. Manuel, P. Gegenwart, and R. Coldea, Incommensurate counterrotating magnetic order stabilized by Kitaev interactions in the layered honeycomb Li2IrO3, Phys. Rev. B 93, 195158 (2016).
[Biffin et al. (2014)] A. Biffin, R. D. Johnson, I. Kimchi, R. Morris, A. Bombardi, J. G. Analytis, A. Vishwanath, and R. Coldea, Noncoplanar and counterrotating incommensurate magnetic order stabilized by Kitaev interactions in Li2IrO3, Phys. Rev. Lett. 113, 197201 (2014).
[Winter et al. (2016)] S. M. Winter, Y. Li, H. O. Jeschke, and R. Valentí, Challenges in design of Kitaev materials: Magnetic interactions from competing energy scales, Phys. Rev. B 93, 214431 (2016).
[Kitagawa et al. (2018)] K. Kitagawa, T. Takayama, Y. Matsumoto, A. Kato, R. Takano, Y. Kishimoto, S. Bette, R. Dinnebier, G. Jackeli, and H. Takagi, A spin-orbital-entangled quantum liquid on a honeycomb lattice, Nature 554, 341 (2018).
[Takagi et al. (2019)] H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Concept and realization of Kitaev quantum spin liquids, Nature Reviews Physics 1, 264 (2019).
[Lin et al. (2021)] G. Lin, J. Jeong, C. Kim, Y. Wang, Q. Huang, T. Masuda, S. Asai, S. Itoh, G. Günther, M. Russina, et al., Field-induced quantum spin disordered state in spin-1/2 honeycomb magnet Na2Co2TeO6, Nature Communications 12, 1 (2021).
[Banerjee et al. (2018)] A. Banerjee, P. Lampen-Kelley, J. Knolle, C. Balz, A. A. Aczel, B. Winn, Y. Liu, D. Pajerowski, J. Yan, C. A. Bridges, et al., Excitations in the field-induced quantum spin liquid state of α-RuCl3, npj Quantum Materials 3, 1 (2018).
[Banerjee et al. (2017)] A. Banerjee, J. Yan, J. Knolle, C. A. Bridges, M. B. Stone, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, R. Moessner, and S. E. Nagler, Neutron scattering in the proximate quantum spin liquid α-RuCl3, Science 356, 1055 (2017).
[Banerjee et al. (2016)] A. Banerjee, C. Bridges, J.-Q. Yan, A. Aczel, L. Li, M. Stone, G. Granroth, M. Lumsden, Y. Yiu, J. Knolle, et al., Proximate Kitaev quantum spin liquid behaviour in a honeycomb magnet, Nature Materials 15, 733 (2016).
[Ran et al. (2017)] K. Ran, J. Wang, W. Wang, Z.-Y. Dong, X. Ren, S. Bao, S. Li, Z. Ma, Y. Gan, Y. Zhang, J. T. Park, G. Deng, S. Danilkin, S.-L. Yu, J.-X. Li, and J. Wen, Spin-wave excitations evidencing the Kitaev interaction in single crystalline α-RuCl3, Phys. Rev. Lett. 118, 107203 (2017).
[Nasu et al. (2016)] J. Nasu, J. Knolle, D. L. Kovrizhin, Y. Motome, and R. Moessner, Fermionic response from fractionalization in an insulating two-dimensional magnet, Nature Physics 12, 912 (2016).
[Kasahara et al. (2018)] Y. Kasahara, T. Ohnishi, Y. Mizukami, O. Tanaka, S. Ma, K. Sugii, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, et al., Majorana quantization and half-integer thermal quantum Hall effect in a Kitaev spin liquid, Nature 559, 227 (2018).
[Yokoi et al. (2021)] T. Yokoi, S. Ma, Y. Kasahara, S. Kasahara, T. Shibauchi, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, C. Hickey, et al., Half-integer quantized anomalous thermal Hall effect in the Kitaev material candidate α-RuCl3, Science 373, 568 (2021).
[Bruin et al. (2022)] J. Bruin, R. Claus, Y. Matsumoto, N. Kurita, H. Tanaka, and H. Takagi, Robustness of the thermal Hall effect close to half-quantization in α-RuCl3, Nature Physics, 1 (2022).
[Yamashita et al. (2020)] M. Yamashita, J. Gouchi, Y. Uwatoko, N. Kurita, and H. Tanaka, Sample dependence of half-integer quantized thermal Hall effect in the Kitaev spin-liquid candidate α-RuCl3, Physical Review B 102, 220404(R) (2020).
[Czajka et al. (2022)] P. Czajka, T. Gao, M. Hirschberger, P. Lampen-Kelley, A. Banerjee, N. Quirk, D. G. Mandrus, S. E. Nagler, and N. Ong, The planar thermal Hall conductivity in the Kitaev magnet α-RuCl3, arXiv preprint arXiv:2201.07873 (2022).
[Lefrançois et al. (2021)] É. Lefrançois, G. Grissonnanche, J. Baglo, P. Lampen-Kelley, J. Yan, C. Balz, D. Mandrus, S. Nagler, S. Kim, Y.-J. Kim, et al., Evidence of a phonon Hall effect in the Kitaev spin liquid candidate α-RuCl3, arXiv preprint arXiv:2111.05493 (2021).
[Samarakoon et al. (2022)] A. M. Samarakoon, P. Laurell, C. Balz, A. Banerjee, P. Lampen-Kelley, D. Mandrus, S. E. Nagler, S. Okamoto, and D. A. Tennant, Extraction of the interaction parameters for α-RuCl3 from neutron data using machine learning, arXiv preprint arXiv:2202.10715 (2022).
[Maksimov and Chernyshev (2020)] P. A. Maksimov and A. L. Chernyshev, Rethinking α-RuCl3, Phys. Rev. Research 2, 033011 (2020).
[Slagle et al. (2018)] K. Slagle, W. Choi, L. E. Chern, and Y. B. Kim,
Kim, title title Theory of a quantum spin liquid in the hydrogen-intercalated honeycomb iridate h_3liir_2o_6, https://doi.org/10.1103/PhysRevB.97.115159 journal journal Phys. Rev. B volume 97, pages 115159 (year 2018)NoStop [Verresen et al.(2021)Verresen, Lukin, and Vishwanath]Verresen2021 author author R. Verresen, author M. D. Lukin, and author A. Vishwanath, title title Prediction of toric code topological order from rydberg blockade, https://doi.org/10.1103/PhysRevX.11.031005 journal journal Phys. Rev. X volume 11, pages 031005 (year 2021)NoStop [Semeghini et al.(2021)Semeghini, Levine, Keesling, Ebadi, Wang, Bluvstein, Verresen, Pichler, Kalinowski, Samajdar et al.]semeghini2021probing author author G. Semeghini, author H. Levine, author A. Keesling, author S. Ebadi, author T. T. Wang, author D. Bluvstein, author R. Verresen, author H. Pichler, author M. Kalinowski, author R. Samajdar, et al., title title Probing topological spin liquids on a programmable quantum simulator, @noopjournal journal Science volume 374, pages 1242 (year 2021)NoStop [Giudici et al.(2022)Giudici, Lukin, and Pichler]giudici2022dynamical author author G. Giudici, author M. D. Lukin, and author H. Pichler, title title Dynamical preparation of quantum spin liquids in rydberg atom arrays, @noopjournal journal Physical Review Letters volume 129, pages 090401 (year 2022)NoStop [Sahay et al.(2022)Sahay, Vishwanath, and Verresen]sahay2022quantum author author R. Sahay, author A. Vishwanath, and author R. Verresen, title title Quantum spin puddles and lakes: Nisq-era spin liquids from non-equilibrium dynamics, @noopjournal journal arXiv preprint arXiv:2211.01381(year 2022)NoStop [Xiao et al.(2021)Xiao, Freericks, and Kemper]xiao2021determining author author X. Xiao, author J. K. Freericks, and author A. F. Kemper, title title Determining quantum phase diagrams of topological kitaev-inspired models on nisq quantum hardware, @noopjournal journal Quantum volume 5, pages 553 (year 2021)NoStop [Bespalova and Kyriienko(2021)]bespalova2021quantum author author T. A. Bespalova and author O. Kyriienko, title title Quantum simulation and ground state preparation for the honeycomb kitaev model, @noopjournal journal arXiv preprint arXiv:2109.13883(year 2021)NoStop [Lu et al.(2009)Lu, Gao, Gühne, Zhou, Chen, and Pan]Lu2009 author author C.-Y. Lu, author W.-B. Gao, author O. Gühne, author X.-Q. Zhou, author Z.-B. Chen, and author J.-W. Pan, title title Demonstrating anyonic fractional statistics with a six-qubit quantum simulator, https://doi.org/10.1103/PhysRevLett.102.030502 journal journal Phys. Rev. Lett. volume 102, pages 030502 (year 2009)NoStop [Han et al.(2007)Han, Raussendorf, and Duan]Han2007 author author Y.-J. Han, author R. Raussendorf, and author L.-M. Duan, title title Scheme for demonstration of fractional statistics of anyons in an exactly solvable model, https://doi.org/10.1103/PhysRevLett.98.150404 journal journal Phys. Rev. Lett. volume 98, pages 150404 (year 2007)NoStop [Xu et al.(2023b)Xu, Sun, Wang, Xiang, Bao, Zhu, Shen, Song, Zhang, Ren et al.]xu2023digital author author S. Xu, author Z.-Z. Sun, author K. Wang, author L. Xiang, author Z. Bao, author Z. Zhu, author F. Shen, author Z. Song, author P. Zhang, author W. Ren, et al., title title Digital simulation of projective non-abelian anyons with 68 superconducting qubits, @noopjournal journal Chinese Physics Letters(year 2023b)NoStop [Sun et al.(2023)Sun, Goldman, Aidelsburger, and Bukov]Sun2023Engineering author author B.-Y. Sun, author N. 
Goldman, author M. Aidelsburger, and author M. Bukov, title title Engineering and probing non-abelian chiral spin liquids using periodically driven ultracold atoms, https://doi.org/10.1103/PRXQuantum.4.020329 journal journal PRX Quantum volume 4, pages 020329 (year 2023)NoStop [Schmied et al.(2011)Schmied, Wesenberg, and Leibfried]schmied2011quantum author author R. Schmied, author J. H. Wesenberg, and author D. Leibfried, title title Quantum simulation of the hexagonal kitaev model with trapped ions, @noopjournal journal New Journal of Physics volume 13, pages 115011 (year 2011)NoStop [Philips et al.(2022)Philips, Madzik, Amitonov, de Snoo, Russ, Kalhor, Volk, Lawrie, Brousse, Tryputen et al.]philips2022universal author author S. G. Philips, author M. T. Madzik, author S. V. Amitonov, author S. L. de Snoo, author M. Russ, author N. Kalhor, author C. Volk, author W. I. Lawrie, author D. Brousse, author L. Tryputen, et al., title title Universal control of a six-qubit quantum processor in silicon, @noopjournal journal Nature volume 609, pages 919 (year 2022)NoStop [Borsoi et al.(2023)Borsoi, Hendrickx, John, Meyer, Motz, van Riggelen, Sammak, de Snoo, Scappucci, and Veldhorst]borsoi2023shared author author F. Borsoi, author N. W. Hendrickx, author V. John, author M. Meyer, author S. Motz, author F. van Riggelen, author A. Sammak, author S. L. de Snoo, author G. Scappucci, and author M. Veldhorst, title title Shared control of a 16 semiconductor quantum dot crossbar array, @noopjournal journal Nature Nanotechnology , pages 1 (year 2023)NoStop [Buterakos and Sarma(2023)]buterakos2023magnetic author author D. Buterakos and author S. D. Sarma, title title Magnetic phases of bilayer quantum-dot hubbard model plaquettes, @noopjournal journal arXiv preprint arXiv:2308.04504(year 2023)NoStop [Nagaoka(1966)]Nagaoka1966 author author Y. Nagaoka, title title Ferromagnetism in a narrow, almost half-filled s band, https://doi.org/10.1103/PhysRev.147.392 journal journal Phys. Rev. volume 147, pages 392 (year 1966)NoStop [Dehollain et al.(2020)Dehollain, Mukhopadhyay, Michal, Wang, Wunsch, Reichl, Wegscheider, Rudner, Demler, and Vandersypen]dehollain2020nagaoka author author J. P. Dehollain, author U. Mukhopadhyay, author V. P. Michal, author Y. Wang, author B. Wunsch, author C. Reichl, author W. Wegscheider, author M. S. Rudner, author E. Demler, and author L. M. Vandersypen, title title Nagaoka ferromagnetism observed in a quantum dot plaquette, @noopjournal journal Nature volume 579, pages 528 (year 2020)NoStop [Buterakos and Das Sarma(2019)]Buterakos author author D. Buterakos and author S. Das Sarma, title title Ferromagnetism in quantum dot plaquettes, https://doi.org/10.1103/PhysRevB.100.224421 journal journal Phys. Rev. B volume 100, pages 224421 (year 2019)NoStop [Hensgens et al.(2017)Hensgens, Fujita, Janssen, Li, Van Diepen, Reichl, Wegscheider, Das Sarma, and Vandersypen]hensgens2017quantum author author T. Hensgens, author T. Fujita, author L. Janssen, author X. Li, author C. Van Diepen, author C. Reichl, author W. Wegscheider, author S. Das Sarma, and author L. M. Vandersypen, title title Quantum simulation of a fermi–hubbard model using a semiconductor quantum dot array, @noopjournal journal Nature volume 548, pages 70 (year 2017)NoStop [Stafford and Das Sarma(1994)]Stafford1994 author author C. A. Stafford and author S. Das Sarma, title title Collective coulomb blockade in an array of quantum dots: A mott-hubbard approach, https://doi.org/10.1103/PhysRevLett.72.3590 journal journal Phys. 
Rev. Lett. volume 72, pages 3590 (year 1994)NoStop [Tasaki(1992)]Tasaki1992 author author H. Tasaki, title title Ferromagnetism in the hubbard models with degenerate single-electron ground states, https://doi.org/10.1103/PhysRevLett.69.1608 journal journal Phys. Rev. Lett. volume 69, pages 1608 (year 1992)NoStop [Buterakos and Das Sarma(2023)]buterakos2023certain author author D. Buterakos and author S. Das Sarma, title title Certain exact many-body results for hubbard model ground states testable in small quantum dot arrays, https://doi.org/10.1103/PhysRevB.107.014403 journal journal Phys. Rev. B volume 107, pages 014403 (year 2023)NoStop [Pioro-Ladriere et al.(2007)Pioro-Ladriere, Tokura, Obata, Kubo, and Tarucha]pioro2007micromagnets author author M. Pioro-Ladriere, author Y. Tokura, author T. Obata, author T. Kubo, and author S. Tarucha, title title Micromagnets for coherent control of spin-charge qubit in lateral quantum dots, @noopjournal journal Applied physics letters volume 90 (year 2007)NoStop [Neyens et al.(2023)Neyens, Zietz, Watson, Luthi, Nethwewala, George, Henry, Wagner, Islam, Pillarisetty et al.]neyens2023probing author author S. Neyens, author O. Zietz, author T. Watson, author F. Luthi, author A. Nethwewala, author H. George, author E. Henry, author A. Wagner, author M. Islam, author R. Pillarisetty, et al., title title Probing single electrons across 300 mm spin qubit wafers, @noopjournal journal arXiv preprint arXiv:2307.04812(year 2023)NoStop [Takahashi(1977)]takahashi1977half author author M. Takahashi, title title Half-filled hubbard model at low temperature, @noopjournal journal Journal of Physics C: Solid State Physics volume 10, pages 1289 (year 1977)NoStop [MacDonald et al.(1988)MacDonald, Girvin, and Yoshioka]macdonald1988t author author A. H. MacDonald, author S. M. Girvin, and author D. Yoshioka, title title t/U expansion for the hubbard model, https://doi.org/10.1103/PhysRevB.37.9753 journal journal Phys. Rev. B volume 37, pages 9753 (year 1988)NoStop [SM()]SM @noopnote See Supplemental Material at [URL will be inserted by publisher] for the perturbation theory details and additional numerical results.Stop [Baskaran et al.(2007)Baskaran, Mandal, and Shankar]Baskaran2007 author author G. Baskaran, author S. Mandal, and author R. Shankar, title title Exact results for spin dynamics and fractionalization in the kitaev model, https://doi.org/10.1103/PhysRevLett.98.247201 journal journal Phys. Rev. Lett. volume 98, pages 247201 (year 2007)NoStop [White(1992)]White1992 author author S. R. White, title title Density matrix formulation for quantum renormalization groups, https://doi.org/10.1103/PhysRevLett.69.2863 journal journal Phys. Rev. Lett. volume 69, pages 2863 (year 1992)NoStop [Hauschild and Pollmann(2018)]tenpy author author J. Hauschild and author F. Pollmann, title title Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy), https://doi.org/10.21468/SciPostPhysLectNotes.5 journal journal SciPost Phys. Lect. Notes , pages 5 (year 2018), note code available from <https://github.com/tenpy/tenpy>, https://arxiv.org/abs/1805.00055 arXiv:1805.00055 NoStop [pages=1]KitaevFromHubbard_SM.pdf [pages=2]KitaevFromHubbard_SM.pdf [pages=3]KitaevFromHubbard_SM.pdf [pages=4]KitaevFromHubbard_SM.pdf [pages=5]KitaevFromHubbard_SM.pdf [pages=6]KitaevFromHubbard_SM.pdf
http://arxiv.org/abs/2310.18393v2
{ "authors": [ "Tessa Cookmeyer", "Sankar Das Sarma" ], "categories": [ "cond-mat.mes-hall", "cond-mat.str-el" ], "primary_category": "cond-mat.mes-hall", "published": "20231027180000", "title": "Engineering the Kitaev spin liquid in a quantum dot system" }
Practical application of quantum neural network to materials informatics: prediction of the melting points of metal oxides
Hirotoshi Hirai, e-mail: [email protected], Central R&D Labs., Inc., 41-1, Yokomichi, Nagakute, Aichi 480-1192, Japan

An important prerequisite for autonomous robots is their ability to reliably grasp a wide variety of objects. Most state-of-the-art systems employ specialized or simple end-effectors, such as two-jaw grippers, which severely limit the range of objects to manipulate. Additionally, they conventionally require a structured and fully predictable environment, while the vast majority of our world is complex, unstructured, and dynamic. This paper presents an implementation to overcome both issues. Firstly, the integration of a five-finger hand enhances the variety of possible grasps and manipulable objects. This kinematically complex end-effector is controlled by a deep learning based generative grasping network. The required virtual model of the unknown target object is iteratively completed by processing visual sensor data. Secondly, this visual feedback is employed to realize closed-loop servo control which compensates for external disturbances. Our experiments on real hardware confirm the system's capability to reliably grasp unknown dynamic target objects without a priori knowledge of their trajectories. To the best of our knowledge, this is the first method to achieve dynamic multi-fingered grasping for unknown objects. A video of the experiments is available at https://youtu.be/Ut28yM1gnvI.

§ INTRODUCTION

Grasping unknown, moving objects presents numerous challenges: the computationally intensive methods used to deduce grasping strategies solely from sensor data of the environment must be executed repetitively in real-time to allow continuous adaptation to the changing object pose. Additionally, there are many sources of error resulting in grasp failure, including tracking loss, inaccurate target segmentation, as well as collisions between the robot and the dynamic environment. For these reasons, most previous works simplify this complex problem with prior assumptions regarding the target, its trajectory, or the grasp. All existing approaches to dynamic grasping that we found use two- or three-jaw grippers which are controlled by a single degree of freedom (DoF) to open or close. The system presented in this paper employs a multi-fingered hand: the 15 DoF anthropomorphic DLR-HIT Hand II <cit.>. While this high-dimensional end-effector is more complex to control, it allows manipulation of an increased variety of objects as demonstrated in various works <cit.>. For this hand, the grasp sampling and evaluation framework Five-finger Hand Net (FFHNet) <cit.> was developed. Since it is the first real-time capable deep learning based system for multi-fingered grasping, we demonstrate the potential of employing it in a closed control loop for grasping unknown objects. Figure <ref> shows successful grasps from our experiments. This novel approach fulfills the following five properties to achieve robust dynamic grasp execution: (1) By processing camera feedback, the closed-loop servo controller increases the robustness regarding sensor noise, model error, and physical disturbances. This enables the system to track and grasp moving objects.
(2) Every part of the system is real-time capable (≥ 30 Hz). (3) During the grasp approach phase, sensor observations are combined to improve the virtual target model required for grasp generation. This provides the basis to fully exploit the capabilities of multi-fingered grasping since it facilitates the inference of reliable grasps compared to an evaluation of the most recent observation only. Additionally, since our system relies on a single-camera view, this allows an estimation of a suitable grasp pose in the absence of useful sensor feedback, e.g., due to occlusions. (4) To achieve high flexibility, the system is designed such that it requires little prior knowledge and is not coupled to predefined classes of a data set: both tracking and grasp generation perform well on unseen objects. (5) Lastly, employing a multi-fingered hand enables better manipulability and higher flexibility compared to standard two-jaw grippers.

This work's contributions can be summarized as follows:
* Our novel method is, to the best of our knowledge, the first to enable multi-fingered grasping of dynamic targets. It jointly integrates state-of-the-art image processing, grasp planning, and robot control methods.
* We present an approach to filter and combine the visual tracking results to create a virtual target model online without prior knowledge of the target's movement.
* Our novel metric for dynamic grasping updates the target grasp from a set that is continuously re-generated in real-time. It facilitates the simultaneous evaluation of multiple factors influencing grasp success and offers an interface for potential future extensions.
* We demonstrate our system's capabilities to grasp various moving objects in different real-world experiments.

§ RELATED WORK

In contrast to the system presented in this paper, all publications in this section use grippers which are controlled by a single DoF. This also applies to the works employing three-jaw grippers since they couple all fingers to open or close <cit.>. Additionally, they impose restrictions on the target object to be graspable with this finger configuration. Instead, we employ a 15-DoF multi-fingered hand with superior manipulation capabilities and enable grasping of truly unknown objects. Early works are limited to the execution of pre-programmed grasps for a single known object. While initially the target's trajectory must be known a priori <cit.>, this simplification is later omitted <cit.>. Also, more recent approaches constrain the target and its pose to ensure the stability of a pre-programmed grasp <cit.>. Some additionally require the target to be known or to fulfill specific geometric properties <cit.>. Another approach greatly simplifies visual tracking and grasp planning by marking the grasp pose with an Apriltag on the target <cit.>. Modern methods often use data-driven models to generate grasps. Some systems rely only on the most recent observation of a depth camera and discard all structural information from previous time steps <cit.>. Without a virtual model of the unknown target, they depend on always reliable camera data. This cannot be guaranteed in case of occlusions, unclear perspectives of the target, or inaccurate sensor feedback when the camera's minimum depth range is undershot.
While these implementations can handle changes in the target pose as it is approached, they require a static target directly before and during the execution of the grasp. In another approach, the object's model is not created online but requires a preceding time-inefficient scanning phase <cit.>. To avoid the necessity of real-time capable grasp generation, some systems use pre-generated sets of grasps and continuously re-rank these grasps during object tracking based on the target object's movement <cit.>. However, this implies a priori knowledge of the target's geometrical properties, which does not apply to the approach presented in this paper. Also, the pre-processing step to generate grasps significantly increases process times. A recent work to enable dynamic grasping by predicting the target's movement generalizes neither to unknown objects nor to arbitrary trajectories <cit.>. The system AnyGrasp cannot adjust the grasp online based on visual feedback, and its speed of 7 Hz violates the real-time constraint, restricting the target's movements to be slow and predictable <cit.>. Some works address dynamic grasping in the context of human-to-robot handovers using visual feedback <cit.>. Since these implementations assume the human's cooperation during the handover, they can at most handle minor target movements because they either do not update the grasp at all or only at a slow rate of 5 Hz. Other approaches apply Deep Reinforcement Learning to the dynamic grasping problem <cit.>. However, all of them greatly reduce the grasp generation complexity by only considering spherical or cubic objects. Additionally, they further simplify the problem by either employing Apriltags for object detection <cit.>, reporting a significant gap in the grasp success rate between simulated and real experiments <cit.>, or exclusively evaluating their model in simulation <cit.>. Our model overcomes the mentioned limitations of the existing approaches by employing closed-loop servo control and target model generation in real-time to achieve five-fingered grasping of unknown dynamic targets.

§ METHOD

The control system is divided into the two processes Target Model Generation and Grasp Control as indicated in Figure <ref>. They run asynchronously to avoid blocking and to enable efficient recovery after errors such as tracking loss.

§.§ Target Model Generation

To create a point cloud model representing the target object, the camera feedback is processed. The depth camera attached to the robot's end-effector (eye-in-hand configuration) provides a color and depth data stream. After initialization with a bounding box, the target object is segmented from subsequent color images by a visual object tracker as long as it stays in the camera's observable space. In this work, the transformer-based model TransT_M <cit.> is chosen. As the winner of the VOT-RT2021 <cit.> challenge, it offers state-of-the-art robustness and accuracy for a huge diversity of objects while obeying real-time constraints. Upon alignment of the camera's color and depth data, the segmentation mask provided by the tracker is applied to the depth image, resulting in the segmentation of the target object's surface structure. This depiction is converted to a point cloud. To avoid confusion, this point cloud directly generated from the camera data is called observation point cloud 𝐏_t at time step t. The point cloud created by post-processing and merging the observation point clouds is called model point cloud 𝐐_t.
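To make the segmentation-to-point-cloud step concrete, the following sketch back-projects the masked depth pixels into an observation point cloud. It is an illustrative reconstruction only, not the authors' implementation: the pinhole camera model, the NumPy representation, and all function and parameter names (observation_point_cloud, depth_scale, etc.) are assumptions of this sketch.

```python
import numpy as np

def observation_point_cloud(depth, mask, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project the pixels selected by a segmentation mask into a 3-D
    observation point cloud P_t, assuming a pinhole camera model.

    depth : (H, W) array of raw depth values (e.g. millimetres)
    mask  : (H, W) boolean array produced by the visual tracker
    fx, fy, cx, cy : intrinsics of the depth image aligned to the color image
    """
    v, u = np.nonzero(mask & (depth > 0))      # pixel rows/cols inside the mask
    z = depth[v, u] * depth_scale              # metric depth of each masked pixel
    x = (u - cx) * z / fx                      # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))          # (N, 3) points in the camera frame
```

In an eye-in-hand setup such as the one described here, these points are expressed in the moving camera frame, which is why the alignment of observations from different time steps described next is required.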
The model point cloud represents the object from all observed perspectives. Due to sensor noise and imperfections in the segmentation mask, a small number of undesired points might also be included in the observation point cloud. To minimize the amount of these faulty data in the model point cloud, the observation point cloud is post-processed before being merged with the model point cloud. Outliers belonging to the background are filtered out by removing all points whose z-coordinates p_t,i,z differ from the median depth med(𝐏_t)_z of the observation point cloud 𝐏_t by more than a threshold c_z:

𝐏'_t = ⋃_p_t,i ∈ 𝐏_t p_t,i [ |p_t,i,z - med(𝐏_t)_z| ≤ c_z ].

c_z determines the maximum length of the target's virtual representation in the camera's z-axis. The transformation to align the observation point clouds from different time steps without a priori knowledge of the target movement can only be inferred by employing their structural information. This is achieved by registration with the Iterative Closest Point (ICP) algorithm <cit.>, which allows fast and precise approximation of the transformation between two point clouds by iteratively maximizing their overlap. However, applying a local optimization algorithm such as ICP to noisy input data is error-prone. Since the resulting transformation is required to transform the model point cloud into the current camera frame, a false estimation can lead to imprecise grasps. To distinguish successful from unsuccessful ICP alignment, two criteria must be fulfilled: (1) The alignment's fitness score is sufficiently high. (2) The norms of the resulting translation and rotation between the two centered point clouds remain smaller than an upper bound. This implies a maximal relative velocity between the robot and the target. If any of these conditions is violated, the recent observation is discarded. Otherwise, the model point cloud, as well as the previous n_s observation point clouds 𝐏_t-τ, are transformed into the current camera frame to match the pose of the recent observation point cloud 𝐏_t. As an additional post-processing step, the result is smoothed by selecting only points with correspondences in the previous n_s observations. When discretizing the continuous surface of the target object with a point cloud, the locations of single points in consecutive time steps differ due to the discretization grid depending on the camera and object pose. Therefore, a corresponding point is defined as one that lies in an ϵ-ball around the source point. Only if at least one point p'_t-τ,j can be found in each of the previous n_s observation point clouds 𝐏'_t-τ with a Euclidean distance to the source point lower than a small positive constant ϵ, the source point p'_t,i is considered for further processing:

𝐏_t” = ⋃_p'_t,i ∈ 𝐏'_t p'_t,i [ ∀τ: ∃ p'_t-τ,j ∈ 𝐏'_t-τ: ||p'_t,i - p'_t-τ,j|| < ϵ ].

Finally, the post-processed observation point cloud 𝐏_t” is merged with the model point cloud 𝐐_t-1 to obtain an improved model point cloud 𝐐_t:

𝐐_t = 𝐐_t-1 ∪ 𝐏_t”.

Grasps are generated based on this composed point cloud model. To allow efficient processing of the model point cloud for machine learning algorithms, it is encoded with the Basis Point Set (BPS) <cit.>. The points are represented by their 1D distance to each fixed basis point instead of their 3D coordinates.
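The target-model pipeline above can be summarized in a few lines: the median-depth background filter (𝐏'_t), the temporal correspondence smoothing (𝐏_t”), the merge into 𝐐_t, and the BPS encoding. The sketch below is a hypothetical, brute-force illustration; the function names, the NumPy data layout, and the nearest-neighbour search are assumptions of this sketch rather than the authors' code.

```python
import numpy as np

def filter_background(P_t, c_z):
    """Keep points whose depth stays within c_z of the median depth (P'_t)."""
    keep = np.abs(P_t[:, 2] - np.median(P_t[:, 2])) <= c_z
    return P_t[keep]

def temporal_smoothing(P_t, previous, eps):
    """Keep a point only if every one of the previous n_s observation clouds
    contains a point within an eps-ball of it (P''_t)."""
    keep = np.ones(len(P_t), dtype=bool)
    for P_prev in previous:                        # one aligned cloud per earlier step
        d = np.linalg.norm(P_t[:, None, :] - P_prev[None, :, :], axis=2)
        keep &= d.min(axis=1) < eps
    return P_t[keep]

def merge_model(Q_prev, P_smoothed):
    """Q_t = Q_{t-1} ∪ P''_t, after both were aligned to the current camera frame."""
    return np.vstack((Q_prev, P_smoothed))

def bps_encode(Q_t, basis_points):
    """Basis Point Set encoding: distance from each fixed basis point to its
    nearest model point, yielding a fixed-length descriptor for the network."""
    d = np.linalg.norm(basis_points[:, None, :] - Q_t[None, :, :], axis=2)
    return d.min(axis=1)
```

A real-time implementation would replace the quadratic distance computations with a spatial index such as a k-d tree; the quadratic form is kept here only for readability.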
§.§ Grasp Control

The BPS-encoded point cloud constitutes the input for the generative grasping framework FFHNet <cit.>. It samples a set of grasps and assigns success predictions s_G,i. Every grasp is represented by its palm translation 𝐭_G,i, its palm rotation matrix 𝐑_G,i and its finger configuration θ_G,i. The following sections describe the post-processing steps for selecting a suitable grasp and controlling the robot's movement to reach a pose where grasp execution is successful. Further steps are the estimation of the target's velocity and the estimation of the grasp pose in case of missing control feedback due to tracking or ICP failure.

§.§.§ Grasp Metric

From the generated grasps described by the set 𝔾_i = (𝐭_G,i, 𝐑_G,i, θ_G,i, s_G,i), one grasp is selected maximizing the metric

𝔾^* = argmax_i ∑_j m_j(𝔾_i).

This metric combines the following quantities:

Predicted grasp success: To select a stable grasp, a high success prediction of FFHNet is required. This part is weighted with the constant c_0:

m_0(𝔾_i) = c_0 s_G,i.

Pose difference: This part of the metric attempts to minimize the Euclidean distance between the robot's pose (𝐭_R, 𝐫_R) and the grasp pose (𝐭_G,i, 𝐫_G,i) in axis-angle representation. The linear and angular offsets are weighted with the constants c_1,l and c_1,r:

m_1(𝔾_i) = - c_1,l || 𝐭_G,i - 𝐭_R || - c_1,r || 𝐫_G,i - 𝐫_R ||.

Shorter robot movements reduce the grasp execution time. This not only increases efficiency but also mitigates the failure risk caused by a target movement out of the robot's reachable space.

Kinematic feasibility: The robot must be able to kinematically reach the chosen grasp pose without any collisions. Grasps not fulfilling this property are neglected. Since checking the robot's inverse kinematics and collisions for all grasps at every time step is computationally inefficient, this constraint is enforced after evaluation of the grasp metric. If the chosen grasp is not reachable, the next best grasp is chosen and checked, until a valid grasp is found.

§.§.§ Robot Control

After evaluation of the grasp metric, the robot is moved towards the pose (𝐭_G^*, 𝐑_G^*) of the selected grasp 𝔾^*. While minimizing the offset between the robot's current end-effector pose and the target end-effector pose, the object must remain in the camera's field of view to provide visual feedback for the control loop. Therefore, the control law for the end-effector's orientation is adjusted. As long as the norm of the translational error of the end-effector position ||𝐭_G^*|| is larger than a threshold c_O, the camera is aligned towards the target's center. Once this threshold is undershot, the desired orientation is linearly interpolated between the grasp orientation error 𝐫_G^* and the alignment error 𝐫_O between the camera center and the object center. When ||𝐭_G^*|| falls below another threshold c_G, exclusively the target grasp's orientation 𝐫_G^* is approached:

𝐫̃_G^* = 𝐫_O if ||𝐭_G^*|| ≥ c_O;  𝐫̃_G^* = Δg · 𝐫_O + (1 - Δg) · 𝐫_G^* if ||𝐭_G^*|| ∈ (c_G, c_O);  𝐫̃_G^* = 𝐫_G^* if ||𝐭_G^*|| ≤ c_G,

with the interpolation factor

Δg = ( ||𝐭_G^*|| - c_G ) / ( c_O - c_G ),  for c_O > c_G.

All rotations are represented as axis-angle vectors. With the resulting rotation 𝐫̃_G^*, the desired Cartesian end-effector velocity (𝐯_d, ω_d)^⊤ is calculated. The PD controller

𝐯_d = c_p,v 𝐭_G^* + c_d,v 𝐭̇_G^* + 𝐯_G,
ω_d = c_p,ω 𝐫̃_G^* + c_d,ω d/dt 𝐫̃_G^*,

with the empirically tuned control constants c_p,v, c_d,v, c_p,ω, c_d,ω ensures fast grasp pose alignment. 𝐯_G is the estimated velocity of the target.
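The grasp selection and the servo law can be summarized as follows. The sketch below only mirrors the equations for m_0, m_1, Δg and the PD controller; all names are placeholders, the rotational derivative term is omitted, and the axis-angle errors are assumed to be precomputed, so this is not the authors' controller.

```python
import numpy as np

def select_grasp(grasps, t_R, r_R, c0, c1_l, c1_r, reachable):
    """Pick the grasp maximizing m_0 + m_1; kinematic feasibility is checked
    afterwards from best to worst, so IK is not run on every candidate."""
    scores = []
    for g in grasps:                               # g: dict with keys t, r, theta, s
        m0 = c0 * g["s"]                           # weighted FFHNet success prediction
        m1 = -c1_l * np.linalg.norm(g["t"] - t_R) - c1_r * np.linalg.norm(g["r"] - r_R)
        scores.append(m0 + m1)
    for i in np.argsort(scores)[::-1]:             # best metric value first
        if reachable(grasps[i]):                   # collision-free IK solution exists?
            return grasps[i]
    return None

def servo_command(t_err, t_err_dot, r_grasp, r_obj, v_G, gains, c_G, c_O):
    """PD velocity command using the interpolated orientation target r~*_G."""
    c_pv, c_dv, c_pw, _ = gains
    d = np.linalg.norm(t_err)
    if d >= c_O:                                   # far away: keep the target centred
        r_tilde = r_obj
    elif d <= c_G:                                 # close: align with the grasp orientation
        r_tilde = r_grasp
    else:                                          # blend linearly in between
        dg = (d - c_G) / (c_O - c_G)
        r_tilde = dg * r_obj + (1.0 - dg) * r_grasp
    v_d = c_pv * t_err + c_dv * t_err_dot + v_G    # feed-forward of the target velocity
    w_d = c_pw * r_tilde                           # derivative term on r~ omitted here
    return v_d, w_d
```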
§.§.§ Grasp Execution

The evaluator part of FFHNet models the relation between point cloud, grasp (pose and finger configuration), and success probability. Instead of a generated grasp, the current end-effector pose combined with the finger configuration of the recently selected grasp is fed to the neural network to predict the success probability of the current state. If an execution threshold is exceeded, the hand is closed to the predicted finger positions.

§.§.§ Velocity Estimation

The evolution of the target's position is tracked based on a fixed feature point on its surface. The object's velocity 𝐯_G is estimated by processing the position of this feature point over time with a Kalman filter <cit.>. The estimated velocity is included in the control law in Equation <ref>. This allows the controller to behave similarly to a static grasping setup, assuming an accurate velocity estimate 𝐯_G. Additionally, this velocity is required to estimate a suitable grasp pose in the absence of useful control feedback.

§.§.§ Grasp Pose Estimation

The camera's field of view and depth range are limited, and the ICP algorithm can provide incorrect transformations. Especially shortly before grasp execution, when the object is close to the camera, the risk of point cloud registration failure is increased. The camera's minimum depth range can lead to incorrect depth values, and its limited field of view can result in incorrect ICP registration because the object is only partially visible. E.g., a corner of a partially observed box can be aligned with any of the box corners since their structure is similar. When these wrong transformations are filtered out, the control feedback is missing. Then, the target grasp is updated by moving it according to the estimated object velocity. A successful grasp in the absence of control feedback over multiple consecutive time steps requires a highly accurate velocity estimation.
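The velocity estimation and the fallback grasp pose update can be illustrated with a small constant-velocity Kalman filter over the tracked feature point. This is a generic textbook filter written for this sketch (state layout, noise parameters, and names are assumptions), not the filter tuned in the paper; it only shows how a missing measurement still yields a velocity estimate 𝐯_G that can shift the target grasp.

```python
import numpy as np

class FeaturePointKF:
    """Constant-velocity Kalman filter for one tracked feature point on the target.
    State x = (p, v) in R^6; only the position p is measured."""
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(6)                       # position and velocity estimate
        self.P = np.eye(6)                         # state covariance
        self.q, self.r = q, r                      # process / measurement noise levels

    def step(self, z, dt):
        F = np.eye(6); F[:3, 3:] = dt * np.eye(3)  # p <- p + v*dt, v <- v
        H = np.hstack((np.eye(3), np.zeros((3, 3))))
        self.x = F @ self.x                        # prediction
        self.P = F @ self.P @ F.T + self.q * np.eye(6)
        if z is not None:                          # update only when tracking succeeded
            S = H @ self.P @ H.T + self.r * np.eye(3)
            K = self.P @ H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - H @ self.x)
            self.P = (np.eye(6) - K @ H) @ self.P
        return self.x[:3], self.x[3:]              # estimated position and velocity v_G

def propagate_grasp(t_grasp, v_G, dt):
    """Without usable visual feedback, shift the target grasp by the estimated motion."""
    return t_grasp + v_G * dt
```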
§ EXPERIMENTS

To demonstrate the method's capabilities, two experiments are carried out. First, the target objects are grasped on a conveyor belt which moves at different linear velocities between 0 and 220 mm/s. The velocity of the conveyor belt as well as the direction of movement are unknown to the system and must be estimated online. The second experiment is a human-to-robot handover. This setup is challenging due to the nonlinear motions that humans perform. Figure <ref> and the accompanying video (https://youtu.be/Ut28yM1gnvI) show grasps from the experiments. The target items are displayed in Figure <ref>. They are a subset of the Yale-CMU-Berkeley (YCB) object set <cit.> which was omitted during the training of FFHNet. Therefore, all of the target objects are unknown to the system. The hardware used for the experiments is the 7 DoF robot manipulator Agile Robots Diana 7, the DLR-HIT Hand II <cit.>, and the depth camera Intel RealSense D435. These components are controlled by a computer equipped with an NVIDIA GeForce RTX 3070 Ti graphics card.

§.§ Grasping Objects on a Conveyor Belt

In this experiment, the ten target objects are grasped while moving linearly on a conveyor belt. Every object is grasped once for every speed increment of 20 mm/s between 0 and 220 mm/s. Figure <ref> shows the grasping success rates for the respective conveyor belt velocities. For 120 grasp attempts, an average success rate of 71.7% is achieved. The diagram indicates that the system works reliably, with success rates of 80–90%, for target velocities slower than 20 mm/s and between 140 and 200 mm/s. Impressively, there is no performance degradation at these high speeds compared to static grasping, which shows the power of our implementation. However, the success rates drop to 50–60% for target speeds between 60 and 100 mm/s. We confirmed that tracking, target model generation, robot control, and velocity estimation work reliably for all target velocities. Therefore, we suspect that the performance drop is caused by imprecise grasp predictions of FFHNet for point clouds resulting from slow target velocities. For a conveyor belt speed of 220 mm/s, the limits of the system's capabilities are reached. Table <ref> lists the success rates for each target object. High performance is achieved for many objects with varying shapes and sizes. Large cuboid objects (sugar box), sphere-like objects (apple) as well as smaller cuboid (foam brick) and cylindrical objects (cup) are grasped successfully in more than 80% of the attempts. The objects with the worst performance are the mustard bottle, the mug, and the baseball. The shape of the mustard bottle causes FFHNet to often generate grasps for its small lid, requiring extremely high precision. Due to the mug's great diameter, any deviation from the grasp pose can lead to collisions. Finally, the spherical shape of the baseball does not provide any structural features to align different perspectives. That is why the resulting model of the object represents only a fraction of the real surface, which can result in insufficient grasp predictions of FFHNet. In Table <ref>, the grasp failures are divided into the three cases: imprecise grasp pose, hand-target collision, and bad grasp timing. An imprecise grasp pose resulting in an unstable grasp or miss of the target accounts for almost every second grasp failure. This case occurred most often for the mustard bottle and the baseball because FFHNet generates error-prone or inaccurate grasps for these object shapes. The reason for roughly a third of the failures is a collision of the hand with the target, most commonly observed for the mug. Its large diameter requires highly precise velocity estimation, grasp prediction, and robot control. The remaining failures are caused because a suitable hand pose was not recognized by FFHNet and the grasp was not executed at the right time. The reason can be a false prediction by FFHNet as well as inaccuracies in the construction of the virtual target model. No object is particularly affected by this case. The plots in Figure <ref> show some of the system's quantities for a grasp of the sugar box on the conveyor belt. The linear (lin.) and rotational (rot.) errors are smoothed for better visibility and strive towards zero. The estimated (est.) velocity requires some time to converge to the ground truth value of 200 mm/s since the conveyor belt is started after the system. After 2.4 s, the visual feedback is missing and the system relies on its estimation. Tracking loss commonly occurs for large objects like the sugar box in the final approach phase because the ICP algorithm fails to reliably align the partial camera observations with the constructed virtual model. As visible in the lower plot, the success prediction (pred.) for the currently estimated state converges to the success prediction of the chosen grasp until the execution threshold is crossed. Without visual feedback, the system's grasp pose estimation still allows a successful execution of the grasp.

§.§ Human-to-Robot Handover

In this experiment, ten humans are asked to hand the target objects to the robot. They are instructed to hold the object on their open palm to avoid collisions between their hand and the robot's hand.
FFHNet is biased to predict top grasps because in its training process, grasps colliding with a virtual bottom plane were classified as unsuccessful. The target's velocity is not estimated in this experiment assuming a constant target pose in case of tracking loss.As listed in Table <ref>, the overall success rate of 77% for the 100 grasp attempts is higher than the success rate of the conveyor belt experiment. This can be explained by the human habit of presenting a handover object in a way that facilitates grasping for the receiver. Consequently, many objects are grasped at success rates of 90% or higher: the pudding box, the gelatin box, and the Rubik's cube as small and midsize cuboid objects, but also the large cylindrical mug. Interestingly, the latter is the object with the second-worst result in the conveyor belt experiment. In contrast to the conveyor belt setting, the humans' cooperative behavior supports a precise enclosure of the object between the robotic fingers. Conversely, for the apple object and the foam brick, only 60% of the handovers are successful with them being the objects with the highest success rates in the conveyor belt experiment. The executed grasps for these items seem to be confusing for the human participants. In both experiments, the system performs worst for the mustard bottle.As indicated by Table <ref>, the main failure reason is an imprecise grasp pose. As for the conveyor belt experiment, this occurs most commonly for the mustard bottle because FFHNet generates error-prone grasps for the unconventional shape. The small number of hand-target collisions can be explained as humans try to avoid them during a handover. Most of these failures occurred when handing over the apple, as some of the system's grasping attempts may have been misleading to the human participants. A delayed execution never led to grasp failure in this experiment.Figure <ref> shows the minimization of the linear and rotational errors and the increase of the success prediction for a successful handover of the Rubik's cube without significant tracking loss. § CONCLUSION & FUTURE WORKWe presented a system to grasp unknown dynamic objects with a multi-fingered hand which we believe is an unprecedented achievement. It constructs a virtual target model by processing camera data from a single view with a visual object tracking model, ICP, and further filtering. This model constitutes the input to the generative grasping model FFHNet, providing a distribution of possible grasps. Considering their related success predictions as well as other environmental quantities, the most suitable grasp is selected and continuously updated in real-time. The virtual model and velocity estimation enable the system to compensate for missing visual feedback in case of tracking loss or alignment errors of the observed camera data. In our conducted experiments with real hardware, the system grasped various dynamic objects in different challenging scenarios at a remarkable overall success rate of 74.1%.The reasons for the most common failure case, a grasp execution with an imprecise hand pose, are insufficient target model creation and inaccurate predictions of FFHNet. To further improve the system's precision, a more robust approach to construct the target model than visual object tracking + ICP as well as a more reliable generative grasping model than FFHNet is required. 
This will also reduce the number of failures due to bad grasp timing, as the underlying reasons are similar. Additionally, with an approach capable of generating a diverse distribution of suitable grasps, the grasp metric can be extended to also evaluate the direction of the target movement. Approaching the target from the direction in which it is moving facilitates dynamic grasping <cit.>. With FFHNet, this is not possible due to its top-grasp bias. The remaining failure cases due to hand-target collisions can be addressed by implementing real-time capable trajectory planning and collision checking. However, this requires a reliable creation of the virtual target model.

§ ACKNOWLEDGMENT

This research has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 778602 ULTRACEPT.
http://arxiv.org/abs/2310.17923v1
{ "authors": [ "Yannick Burkhardt", "Qian Feng", "Karan Sharma", "Zhaopeng Chen", "Alois Knoll" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20231027063733", "title": "Dynamic Grasping of Unknown Objects with a Multi-Fingered Hand" }
Text2Bundle: Towards Personalized Query-based Bundle Generation
Zhihua Wei
January 14, 2024

The Erdős-Moser theorem (EM) says that every infinite tournament admits an infinite transitive subtournament. We study the computational behavior of the Erdős-Moser theorem with respect to the arithmetic hierarchy, and prove that Δ^0_n instances of EM admit low_n+1 solutions for every n ≥ 1, and that if a set B is not arithmetical, then every instance of EM admits a solution relative to which B is still not arithmetical. We also provide a level-wise refinement of this theorem. These results are part of a larger program of computational study of combinatorial theorems in Reverse Mathematics.

§ INTRODUCTION

We conduct a computational study of the Erdős-Moser theorem, an infinitary statement from graph theory. A tournament on a domain D ⊆ ℕ is an irreflexive binary relation R ⊆ D^2 such that for every a, b ∈ D with a ≠ b, exactly one of R(a, b) and R(b, a) holds. A tournament R is transitive if for every a, b, c ∈ D, if R(a, b) and R(b, c) then R(a, c). A subtournament of R is the restriction of R to a subdomain H ⊆ D. We identify subtournaments with their domains. The following statement is known as the Erdős-Moser theorem, and is an infinitary version of a theorem by Erdős and Moser <cit.>.

[Erdős-Moser theorem] EM is the statement: Every infinite tournament admits an infinite transitive subtournament.

The Erdős-Moser theorem easily follows from the celebrated Ramsey theorem for pairs. Given a set X ⊆ ℕ and some integer n ∈ ℕ, we let [X]^n denote the set of all unordered n-tuples over X. Given a coloring f : [ℕ]^n → k, a set H ⊆ ℕ is f-homogeneous (for color i < k) if f(σ) = i for every σ ∈ [H]^n.

[Ramsey's theorem] Given n, k ∈ ℕ, RT^n_k is the statement: Every coloring f : [ℕ]^n → k admits an infinite f-homogeneous set.

There exists a one-to-one correspondence between a tournament R and a coloring f : [ℕ]^2 → 2, by letting f({x, y}) = 1 if (R(x, y) ↔ x <_ℕ y). EM can be restated as: for every coloring f : [ℕ]^2 → 2, there exists an infinite f-transitive subset H, that is, for every x, y, z ∈ H such that x < y < z and every i < 2, if f({x, y}) = f({y, z}) = i, then f({x, z}) = i. Since any f-homogeneous set is f-transitive, the Erdős-Moser theorem can be considered as a particular case of Ramsey's theorem for pairs.
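To make the correspondence between tournaments and colorings of pairs concrete, the following small sketch translates a finite tournament into the associated 2-coloring and checks f-transitivity of a candidate set. It is a toy illustration on finite domains written for this overview (all names are ours); the theorem itself, of course, concerns infinite tournaments.

```python
from itertools import combinations

def coloring_from_tournament(R, n):
    """Translate a tournament R (a set of ordered pairs on {0,...,n-1}) into the
    2-coloring f of pairs: f({x, y}) = 1 iff the direction of R agrees with the
    natural order of the integers."""
    return {(x, y): 1 if (x, y) in R else 0 for x, y in combinations(range(n), 2)}

def is_f_transitive(f, H):
    """Check f-transitivity of a finite set H: equal colors on {x,y} and {y,z}
    force the same color on {x,z}."""
    H = sorted(H)
    return all(f[(x, z)] == f[(x, y)]
               for x, y, z in combinations(H, 3)
               if f[(x, y)] == f[(y, z)])
```

On such finite examples one sees directly that every f-homogeneous set passes the f-transitivity check, while the converse fails, which is the sense in which EM asks for less than Ramsey's theorem for pairs.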
§.§ EM and RT^2_2 in Reverse Mathematics

Both the Erdős-Moser theorem and Ramsey's theorem for pairs have been extensively studied in Reverse Mathematics, both from a computational and a proof-theoretic viewpoint. See Hirschfeldt <cit.> for an introduction to the reverse mathematics of combinatorial principles. From many perspectives, EM is very close to RT^2_2. The combinatorics are very similar, and the Erdős-Moser theorem can be considered as a disjunction-free version of Ramsey's theorem for pairs. These similarities in combinatorics have many consequences in Reverse Mathematics. Jockusch <cit.> proved that every computable instance of RT^2_2 admits a Π^0_2 solution, while there exists a computable instance of RT^2_2 with no Σ^0_2 solution. These bounds are the same for the Erdős-Moser theorem. On the proof-theoretic side, the first-order parts of Ramsey's theorem for pairs and the Erdős-Moser theorem are known to coincide <cit.>. More generally, most of the known statements implied by RT^2_2 are already known to follow from EM over RCA_0, the base theory of Reverse Mathematics. Whether EM implies RT^2_2 was open for a long time, before Lerman, Solomon and Towsner <cit.> answered it negatively. When considering non-computable instances, the behaviors of RT^2_2 and EM turn out to be dramatically different. For every function g : ℕ → ℕ, there exists a coloring f : [ℕ]^2 → 2 such that every infinite f-homogeneous set computes a function h dominating g, that is, ∀ x(h(x) ≥ g(x)). Indeed, simply take f(x, y) = 1 iff g(x) < y. Thus, by a theorem of Slaman and Groszek <cit.>, there exists a (non-computable) instance of RT^2_2 such that every solution computes every hyperarithmetic (or equivalently Δ^1_1) set. On the other hand, Patey and Wang (both unpublished) independently proved that for every non-computable set B and every instance of EM, there exists a solution which does not compute B. This property of EM is shared with the infinite pigeonhole principle (RT^1_2). Indeed, Dzhafarov and Jockusch <cit.> proved that for every non-computable set B and every set A, there is an infinite subset H of A or of its complement which does not compute B.

§.§ EM and RT^1_2 under stronger reductions

As mentioned above, when considering non-computable instances, the Erdős-Moser theorem seems to have closer behavior to the pigeonhole principle than to Ramsey's theorem for pairs. In a series of papers, Monin and Patey <cit.> developed a framework to control iterated jumps of solutions to the pigeonhole principle. They proved in particular the following three facts (see <cit.>):
* If B is not arithmetic (resp. hyperarithmetic), then for every set A, there is an infinite subset H of A or of its complement such that B is not H-arithmetic (resp. H-hyperarithmetic).
* If B is not Σ^0_n (resp. Δ^0_n), then for every set A, there is an infinite subset H of A or of its complement such that B is not Σ^0_n(H) (resp. Δ^0_n(H)).
* For every Δ^0_n set A, there is an infinite subset H of A or of its complement of low_n degree.

In this article, we prove the following three theorems:

theorem]thm:arithmetic-main If B is not arithmetic, then for every tournament T, there is an infinite transitive subtournament H such that B is not H-arithmetic.

The generalization to the hyperarithmetic hierarchy is not proven, but the authors believe that it holds with the same proof mutatis mutandis. Like for the pigeonhole principle, a layerwise version of the previous theorem holds:

theorem]thm:layerwise-main Fix n ≥ 1. If B is not Σ^0_n, then for every tournament T, there is an infinite transitive subtournament H such that B is not Σ^0_n(H).

The case n = 1 was already proven independently by the first author and Wang (unpublished). The statement where Σ^0_n is replaced by Δ^0_n directly follows from <Ref> and Post's theorem. Indeed, if B is not Δ^0_n, then by Post's theorem, either it or its complement is not Σ^0_n. Then apply <Ref> to conclude.

theorem]thm:effective-main Fix n ≥ 1. Every Δ^0_n tournament T has an infinite transitive subtournament of low_n+1 degree.

The case n = 1 follows from the same statement for Ramsey's theorem for pairs, proven by Cholak, Jockusch and Slaman <cit.>. On the other hand, as explained, the statement for Ramsey's theorem for pairs fails for n > 1, since there exists a Δ^0_2 instance of RT^2_2 such that every solution computes ∅'.
The Erdős-Moser is the first statement about colorings of pairs which is known to admit a good iterated jump control. §.§ Definition and notation A binary string is an ordered tuple of bits a_0, …, a_n-1∈{0, 1}. The empty string is written ϵ. A binary sequence (or a real) is an infinite listing of bits a_0, a_1, …. Given s ∈ω, 2^s is the set of binary strings of length s and 2^<s is the set of binary strings of length <s. Accordingly, 2^<ω is the set of binary strings and 2^ω is the set of binary sequences. Given a string σ∈ 2^<ω, we use |σ| to denote its length. Given two strings σ, τ∈ 2^<ω, σ is a prefix of τ (written σ≼τ) if there exists a string ρ∈ 2^<ω such that σ^⌢ρ = τ. Given a sequence X, we write σ≺ X if σ = Xn for some n ∈ω. A binary string σ can be interpreted as a finite set F_σ = { x < |σ| : σ(x) = 1 }. We write σ⊆τ for F_σ⊆ F_τ. We write #σ for the size of F_σ. Given two strings σ and τ, we let σ∪τ be the unique string ρ of length max(|σ|, |τ|) such that F_ρ = F_σ∪ F_τ. A binary tree is a set of binary strings T ⊆ 2^<ω which is closed downward under the prefix relation. A path through T is a binary sequence P ∈ 2^ω such that every initial segment belongs to T.A Turing idealis a collection of sets which is closed downward under the Turing reduction and closed under the effective join, that is, (∀ X ∈)(∀ Y ≤_T X) Y ∈ and (∀ X, Y ∈) X ⊕ Y ∈, where X ⊕ Y = { 2n : n ∈ X }∪{ 2n+1 : n ∈ Y }. A Scott set is a Turing idealsuch that every infinite binary tree T ∈ has a path in . In other words, a Scott set is the second-order part of an ω-model of _0 +. A Turing idealis countable coded by a set X if = { X_n : n ∈ω} with X = ⊕_n ∈ω X_n. Given n ≥ 1, a formula is Σ^0_n() (resp. Π^0_n()) if it is Σ^0_n(X) (resp. Π^0_n(X)) for some X ∈.Given two sets A and B, we denote by A < B the formula (∀ x ∈ A)(∀ y ∈ B)[x < y]. We write A ⊆^* B to mean that A - B is finite, that is, (∃ n)(∀ a ∈ A)(a ∉B → a < n). A k-cover of a set X is a sequence of sets Y_0, …, Y_k-1 such that X ⊆ Y_0 ∪…∪ Y_k-1. §.§ Organization of this paper In <Ref>, we try to give an overview of the forcing construction, by explaining in <Ref> the importance of the so-called forcing question, thendiving in <Ref> into the combinatorics of , especially explaining the role of the infinite pigeonhole principle as a warrant of extendibility for the Erdős-Moser theorem, and then explaining in <Ref> the issues raised when trying to control iterated jumps of solutions with variants of Mathias forcing.In <Ref>, we restate the main properties of partition regular and large classes, studied in Monin and Patey <cit.>. In particular, we define the notions of cohesive and minimal classes in <Ref>, which play an essential role to maintain the compatibility of large classes between different levels of the iterated jump control. Last, we restate in <Ref> the existence of a hierarchy of Scott sets and of cohesive classes, which play the role of a spine for the main notion of forcing.<Ref> is dedicated to the development of the main forcing framework, by defining its conditions, the forcing relation, and a forcing question. 
This framework is applied in various contexts: to prove strong cone avoidance of EM for arithmetic reductions in <Ref>, a layerwise version of this strong cone avoidance for Σ^0_n operators in <Ref>, and the existence of low_n solutions through an effectivization of the construction in <Ref>.

§ THE BIG PICTURE section]sect:big-picture

The techniques used in this article are rather sophisticated, with many technical subtleties, and it may be quite hard to get the big picture. In this section, we describe the general forcing argument used to prove our main theorems, and highlight a few technical difficulties justifying the design choices of our notion of forcing.

§.§ Forcing question section]sect:picture-forcing-question

The three main theorems are related, in that they involve very similar techniques of iterated jump control. Indeed, each case consists of constructing a solution whose Σ^0_n properties resemble the ones of the ground model. For this, one tries to translate Σ^0_n(G) formulas relative to the constructed generic object G into absolute Σ^0_n formulas. In set-theoretic forcing, this is achieved through the forcing relation, whose definition must be sufficiently simple (in terms of definitional complexity) to make the new model inherit properties of the ground model. In computability theory, the situation is slightly different, and the simplicity of the forcing relation is less important than that of the so-called forcing question. In what follows, a notion of forcing is a partial order (ℙ, ≤) such that every sufficiently generic filter ℱ ⊆ ℙ induces a set G_ℱ ⊆ ω. Every notion of forcing is equipped with a forcing relation, written ⊩, between the set of conditions ℙ and the set of arithmetic formulas with a set parameter G denoting the generic object constructed. Fix a notion of forcing (ℙ, ≤). A forcing question is a relation ?⊢ between conditions and arithmetic formulas such that, for every c ∈ ℙ and every arithmetic formula φ(G),
(1) If c ?⊢ φ(G), then there is an extension d ≤ c such that d ⊩ φ(G);
(2) If c ?⊬ φ(G), then there is an extension d ≤ c such that d ⊩ ¬φ(G).
The notion of forcing question is not canonical, and a single notion of forcing might have many candidate forcing questions. On the other hand, many computational properties of the generic object G might be directly derived from the existence of a forcing question with sufficiently nice definitional properties. Consider for example the following property:

definition]def-uniformly-preserving Fix a notion of forcing (ℙ, ≤) and some n ∈ ω. A forcing question ?⊢ is uniformly Σ^0_n-preserving if for every c ∈ ℙ and every uniform sequence of Σ^0_n formulas φ_0(G), φ_1(G), …, the sequence of statements c ?⊢ φ_0(G), c ?⊢ φ_1(G), … is uniformly Σ^0_n.

The following proposition is at the heart of our forcing construction. It was used by Wang <cit.>, where the author showed, for each notion of forcing, the existence of a uniformly Σ^0_n-preserving forcing question, without explicitly naming this concept.

proposition]prop-uniform-preservation-sigman Let (ℙ, ≤) be a notion of forcing with a uniformly Σ^0_n-preserving forcing question. Then for every non-Σ^0_n set C and every sufficiently generic set G for this notion of forcing, C is not Σ^0_n(G).

Fix a Σ^0_n formula φ(G, x) with one free first-order variable x. Let D_φ ⊆ ℙ be the set of conditions c such that either c ⊩ φ(G, a) for some a ∉ C, or c ⊩ ¬φ(G, a) for some a ∈ C. Let us show that D_φ is dense. Given a condition c ∈ ℙ, let W = { a ∈ ω : c ?⊢ φ(G, a) }. Since the forcing question is uniformly Σ^0_n-preserving, the set W is Σ^0_n, hence C ≠ W.
Let a ∈ C Δ W, the symmetric difference of C and W. * If a ∈ W ∖ C, then by definition, c φ(G, a), so by property (1) of the forcing question, there is an extension d ≤ c such that d ⊩φ(G, a). * If a ∈ C ∖ W, then by definition, c φ(G, a). By property (2) of the forcing question, c φ(G, a), and by property (1), there is an extension d ≤ c such that d ⊩φ(G, a).In both cases, d ∈ D_φ, so D_φ is dense. Ifis a sufficiently generic filter, it will intersect D_φ for every Σ^0_n formula φ(G, x), hence, letting G be the set induced by , C will not be Σ^0_n(G). The construction of low_n solutions are often effectivizations of the forcing argument, either by constructing a ∅^(n)-computable filter sufficiently generic for deciding Σ^0_n(G) formulas, or by constructing, with any PA degree P over ∅^(n-1), a P-computable filter G_ sufficiently generic for deciding Σ^0_n-1(G) formulas. In the latter case, using the low basis theorem relativized to ∅^(n-1), there exists such a PA degree P over ∅^(n-1) such that P' ≤_T ∅^(n), thus Σ^0_n properties of G_ can be decided thanks to ∅^(n). §.§ Combinatorics of  section]sect:combinatorics-emIn order to understand the design of the notion of forcing for this article, it is important to get familiar with the combinatorics of the Erdős-Moser theorem. Lerman, Solomon and Towsner <cit.> analyzed the basic combinatorial ideas essential to the computable study of the theorem.A transitive tournament T over a domain A can be seen as a linear order (A, ≤_T)defined by a ≤_T b iff a = b or T(a, b). This interpretation should be kept in mind throughout the article. For convenience, we shall always consider that the tournament contains two end-points -∞ and +∞, that is, such that T(-∞, x) and T(x, +∞) holds for every x. Fix a tournament T over a domain A. (1) The interval (a, b) between a, b ∈ A ∪{-∞, +∞} is the set of points x ∈ A such that T(a, x) and T(x, b) holds. (2) Given a finite T-transitive subset F ⊆ A and a, b ∈ F ∪{-∞, +∞}, the interval (a, b) is minimal in F if (a, b) ∩ F = ∅.Fix a tournament T over ω. Any finite T-transitive set F is not necessarily extendible into an infinite solution. Indeed, maybe there exist some a, b ∈ F such that T(a, b) holds, but T(b, x) and T(x, a) both hold for cofinitely many x. We shall therefore work with Mathias conditions (σ, X) where σ is a T-transitive finite set, with some extra structure which will guarantee that σ is extendible into an infinite solution. This yields the following definition (due to Patey <cit.>). An -condition for T is a Mathias condition (σ, X) such that * for all y ∈ X, σ∪{ y } is T-transitive* X is included in a minimal T-interval of σActually, the second property can be obtained from the first one by a simple application of the infinite pigeonhole principle. Indeed, there are only finitely many minimal T-intervals in σ, and each element of X belongs to exactly one of them. The notion of condition extension is the usual Mathias extension.To simplify notation, given two disjoint sets F and E, we write F →_T E if for every a ∈ F and b ∈ E, T(a, b) holds. One essential feature in the understanding of the computational content of a theorem is to understand the combinatorics necessary to extend a partial solution with an arbitrarily large number of elements in one block. 
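To fix intuition on the previous definitions, here is a toy example (for illustration only; it plays no role in what follows). Suppose T(3, 7) holds and let σ = {3, 7}, a T-transitive set, corresponding to the linear order 3 <_T 7. With the end-points added, σ has three minimal T-intervals, namely (-∞, 3), (3, 7) and (7, +∞). An element y with T(3, y) and T(y, 7) lies in the minimal interval (3, 7), and σ∪{y} is again T-transitive, corresponding to the linear order 3 <_T y <_T 7, whereas an element y with T(y, 3) and T(7, y) forms a 3-cycle with σ and is excluded by the first clause of the definition of an EM-condition. Thus, for any infinite set X included in the interval (3, 7), the pair (σ, X) is an EM-condition for T. Note that the elements of X are pairwise unconstrained by this definition, so adding several elements of X to σ at once requires additional care: the added finite set must itself be T-transitive, and it must be polarized with respect to the part of the reservoir that is kept.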
In the case of the Erdős-Moser theorem, the following lemma contains its core combinatorics.lemma]lem:em-condition-combi Fix an -condition c = (σ, X) for a tournament T, an infinite subset Y ⊆ X and a finite T-transitive set ρ⊆ X such that maxρ < min Y and [ρ→_T Y ∨ Y →_T ρ]. Then (σ∪ρ, Y) is a valid extension of c. Suppose one wants to design a good forcing question for deciding Σ^0_1 formulas with this notion of forcing. To simplify the situation, assume first that the tournament T is stable, that if, for every a, either (∀^∞ b) T(a, b) holds, or (∀^∞ b)T(b, a) holds. In other words, each element admits a limit behavior with respect to T. Let f : ω→ 2 be the limit behavior of T, that is, f(a) = 0 iff ∀^∞ b T(a, b) and f(a) = 1 otherwise. The following naive definition does not satisfy the desired definitional properties: Let c = (σ,X) be an EM-condition, n be an integer, and e be a Turing index. Let c Φ_e^G(n)↓ hold if there exists a finite f-homogeneous T-transitive set τ⊆ X such that Φ_e^σ∪τ(n) ↓. By <Ref>, this is a valid forcing condition, in that if it holds, then there exists an extension forcing Φ_e^G(n)↓, and otherwise, there exists an extension forcing Φ_e^G(n)↑. From a definitional viewpoint, the previous relation is Σ^0_1(X ⊕ T ⊕ f). However, the tournament T and its limit behavior f are strongly non-computable, and may even compute the set B that we want to avoid. The solution to get rid of these parameters is to make an over-approximation:definition]def:forcing-question-em-level0 Let c = (σ,X) be an EM-condition, n be an integer, and e be a Turing index. Letc Φ_e^G(n)↓ hold if for every tournament R and every function g : → 2, there is a finite g-homogeneous R-transitive set τ⊆ X such that Φ_e^σ∪τ(n) ↓. At first sight, an overapproximation yields a forcing question with even worse definitional properties since it contains a universal second-order quantification. However, thanks to compactness, the forcing question is actually Σ^0_1(X), as it is equivalent to the following definition: c Φ_e^G(n)↓ if there exists some threshold t such that for every tournament R over {0, …, t} and every function g : {0, …, t}→ 2, there is a finite g-homogeneous R-transitive set τ⊆ X such that Φ_e^σ∪τ(n) ↓. If the forcing question holds, then by letting R = T and g = f, it is clear that there exists an extension forcing Φ_e^G(n)↓. On the other hand, if the forcing question does not hold, then the witness of failure might be some tournaments R and some colorings f which are unrelated to T and g. This is where the combinatorics of Ramsey theory comes into play.lemma]lem:questionf Let c = (σ,X) be an -condition, n be an integer and e be a Turing index.(1) If c Φ_e^G(n)↓, then there exists d ≤ c such that d ⊩Φ_e^G(n) ↓.(2) Else, if c Φ_e^G(n) ↓, then there exists d ≤ c such that d ⊩Φ_e^G(n) ↑. We prove each point : (1) If c Φ_e^G(n)↓, letting R = T and g = f, there exists a finite f-homogeneous T-transitive set τ⊆ X such that Φ_e^σ∪τ(n) ↓. By choice of f, there exists some t ∈ω such that τ→_T X ∖{0, …, t} or X ∖{0, …, t}→_T τ. Thus, by<Ref>, the pair d := (σ∪τ, X ∖{0, …, t}) is an EM-condition. Note that d ≤ c and that d ⊩Φ_e^G(n)↓. (2) If c Φ_e^G(n) ↓, then there exists a coloring h : → 2 and a tournament R such that for all finite h-homogeneous and R-transitive set τ⊆ X, Φ_e^σ∪τ(n) ↑. By the pigeonhole principle and the Erdős-Moser theorem restricted to X, there exists an infinite subset Y ⊆ X which is both h-homogeneous and R-transitive. 
The pair d := (σ,Y) is a valid EM-condition such that d ⊩Φ_e^G(n) ↑.Whenever the tournament is not stable, the situation seems more complicated as there is no clear choice of f. Surprisingly, the previous forcing question still works, but with a more subtle proof in the first case. The idea is the following: in the first case, by compactness, the finite extension candidate is bounded by a threshold. One can then restrict the reservoir so that every element below the threshold has a limit behavior with respect to the new reservoir, and then act as in the stable case. We only prove the first case, as the second case does not involve stability of the tournament. (1) If c Φ_e^G(n)↓, by compactness, there exists some threshold t such that for every tournament R over {0, …, t} and every function g : {0, …, t}→ 2, there is a finite g-homogeneous R-transitive set τ⊆ X such that Φ_e^σ∪τ(n) ↓. To every element y ∈ X ∖{0, …, t}, one can associate a function g_y : {0, …, t}→ 2 defined by g_y(x) = 1 iff T(x, y) = 1. Since there are 2^t many such functions, the function y ↦ g_y is a finite coloring of the reservoir, so by the infinite pigeonhole principle, there exists an infinite subset Y ⊆ X ∖{0, …, t} which is homogeneous for some color g : {0, …, t}→ 2. In other words, for every g-homogeneous set τ⊆{0, …, t}, either τ→_T Y, or Y →_T τ. Letting R = T, there is a finite g-homogeneous T-transitive set τ⊆ X ∩{0, …, t} such that Φ_e^σ∪τ(n) ↓. Thus, by <Ref>, the pair d := (σ∪τ, Y) is an EM-condition. Note that d ≤ c and that d ⊩Φ_e^G(n)↓.Together with the general discussion of <Ref> about forcing questions, this section constitutes a proof that the Erdős-Moser theorem admits strong cone avoidance. The bottom line of this section is the following: the combinatorics of the Erdős-Moser theorem involve the pigeonhole principle, in that in order to ensure the extendibility of a finite T-transitive set, one needs to ensure that it is homogeneous for the appropriate instance of ^1_2. This 2-coloring represents the limit behavior of the tournament. Whenever the tournament is not stable, the choice of the 2-coloring is not clear ahead of time, and the colorings must be universally quantified.Last, note that the use of ^1_2 in the proof of the Erdős-Moser theorem is not overkill, in that given a 2-coloring f : ω→ 2, one can define a tournament T by T(x, y) iff [x < y ↔ f(x) = f(y)]. Then any infinite transitive subtournament is, up to finite changes, f-homogeneous.§.§ Iterated jump control of EM forcing section]sect:iterated-jump-picture In computability-theoretic forcing, one usually forces a Σ^0_1/Π^0_1 property in a strong sense: if c ⊩φ(G) for φ∈{Σ^0_1, Π^0_1}, then φ(G_) actually holds for every filter  containing c. The situation becomes significantly more complicated when considering Σ^0_2/Π^0_2 formulas. A Π^0_2 formula (∀ x)( ∃ y) φ(G, x, y) can be considered as a countable collection of Σ^0_1 formulas { (∃ y) φ(G, n, y) : n ∈ω}. Such a formula cannot usually be forced in a strong sense. The relation c ⊩ (∀ x )(∃ y) φ(G, x, y) holds iff for every x ∈ω, the set of conditions forcing (∃ y) φ(G, x, y) is dense below c. This way, every sufficiently generic filter containing c will also contain a condition forcing (∃ y )φ(G, x, y) for each x ∈ω, thus the property (∀ x )(∃ y) φ(G_, x, y) holds for every sufficiently generic filter containing c.Stating the density of a collection of conditions can be a definitionally complex statement, depending on the complexity of the notion of forcing.
In some simple cases, such as Cohen forcing, the forcing relation for Π^0_2(G) formulas is Π^0_2, yielding a good forcing question.Variants of Mathias forcing do not behave that well. Indeed, the statement of density requires universal and existential quantification on the conditions, hence on the reservoirs, which are second-order objects. Actually, the approach of Mathias forcing provably fails:The set ∅” is Π^0_2(G_) for every sufficiently generic filter  for Mathias forcing with computable reservoirs. By Martin's domination theorem, a set is of high degree iff it computes a function eventually dominating every total computable function. Given a computable function f and a computable Mathias condition (σ, X), there exists a computable Mathias extension (σ, Y) such that the principal function of Y (the function which to n associates the nth element of Y) dominates f. Thus, for every sufficiently generic filter , the principal function of G_ will eventually dominate every total computable function, hence be of high degree. The same argument holds for computable EM forcing. Intuitively, the reason of failure of Mathias forcing and its variants, is because of the sparsity of its reservoirs. One way to circumvent the problem is to restrict the class possible reservoirs with a third-order object, which will play the role of a reservoir of reservoirs: this meta-reservoir is a class of infinite sets Ł which cannot contain arbitrarily sparse objects. Our goal is to work with EM conditions (σ, X) such that X ∈Ł. One however requires the class Ł to be closed under certain operations which are needed for using the combinatorics of <Ref>. Analyzing the operations made over the reservoirs, they are of three kinds: * truncation of a reservoir from a finite number of elements (case 1 of the forcing question)* splitting of a reservoir based on an instance of ^1_2 (case 2 of the forcing question)* choice of an R-transitive subtournament for an instance R of  (case 2 of the forcing question)Assuming Ł contains only infinite sets, the truncation operation is a consequence of finitely many applications of ^1_2. Unfortunately, contrary to the pigeonhole principle, the classes which are closed under applications of  do not have nice combinatorial properties. Therefore, in Case 2 of <Ref>, instead of applying the Erdős-Moser theorem restricted to X to obtain an infinite R-transitive subtournament Y ⊆ X, we shall simply add R to the list of tournaments we commit to be transitive for. The benefit of it is that the only remaining operation done on our reservoirs is the application of ^1_2. The counterpart of postponing our application of  is that our forcing conditions will now be made of triples (R⃗, σ, X), where R⃗ is a finite sequence of tournaments, and such that (σ, X) is an EM condition for every R ∈R⃗. This list R⃗ can grow with condition extension. The reservoir X will therefore belong to a class Ł which is closed under applications of ^1_2. This notion of closure is called partition regularity. We shall introduce this concept and its main properties in <Ref>. The restriction of the reservoirs to those which belong to a partition regular class dramatically decreases the definitional complexity of the forcing question, as instead of asking whether for every infinite set Y ⊆ X, there is an infinite set Z ⊆ Y satisfying some property, one can ask whether the collection of all Z satisfying the property is partition regular. 
Based on the complexity of the property, the question will not be definitionally too complex.§ PARTITION REGULARITY AND LARGENESS section]sect:largeness-prThe notion of partition regularity comes from combinatorics and is widely used in Ramsey theory. It therefore naturally occurred in the computability-theoretic analysis of combinatorial theorems.A class Ł⊆ 2^ω is partition regular if : * Ł is non-empty,* for all X ∈Ł, if X ⊆ Y, then Y ∈Ł,* for every integer k, for every X ∈Ł, for every k-cover Y_1, Y_2, … Y_k of X, there exists i ≤ k such that Y_i ∈Ł.Dorais <cit.> was the first to use variants of Mathias forcing in which the reservoirs belong to partition regular classes, to produce generic sets of non-high degree. Since then, Monin and Patey <cit.> successfully used this variant to control iterated jump of solutions to the infinite pigeonhole principle. Monin and Patey <cit.> contains all the computability-theoretic analysis of partition regularity used in this article. In this section, we therefore simply state the relevant definitions and theorems for the sake of completeness.Partition regularity enjoys nice closure properties, but is not a notion of largeness per se, in that a superclass of a partition regular class is not necessarily partition regular itself. Throughout the article, given a property φ(X), we will ask whether the class { X : φ(X) } is large, in the sense that it contains a partition regular subclass. This yields the following definition, which is often more convenient to manipulate than partition regularity. A class Ł⊆ 2^ω is large if : * for all X ∈Ł, if X ⊆ Y, then Y ∈Ł,* for every integer k, for every k-cover Y_1, Y_2, … Y_k of ω, there exists i ≤ k such that Y∈Ł.The large classes are exactly those which contain a partition regular subclass. To avoid degenerate behaviors, in this article, we shall restrict ourselves to large classes which contain only infinite sets.A large class is non-trivial if it contains only infinite sets. In particular, if a partition regular class  is non-trivial, then it is closed under finite changes. Indeed, if X ∈ and Y =^* X as witnessed by a finite set F, then Y, F form a 2-cover of X, so either Y or F must belong to , and by non-triviality, F ∉.One of the core properties of large classes is the following lemma, which plays an essential role in the computability-theoretic analysis of large classes. For example, by contraposition, if an F_σ class is not large, then it is included into a non-large open class.lemma]lem:intersection-still-large Let (_n)_n ∈ω be a decreasing sequence of large classes. Their intersection ⋂_n ∈ω_n is again large. The above lemma also holds if one replaces largeness by partition regularity. Moreover, a union of partition regular classes is still partition regular. Therefore, every large class contains a largest (for inclusion) partition regular subclass, which justifies the following definition. For every large class , let Ł() denote the largest partition regular subclass of . By the infinite pigeonhole principle, the class of all infinite sets is partition regular. This naturally generalizes to a whole family of partition regular classes: For every set X ⊆ω, let Ł_X := { E ⊆ω : |E ∩ X|=∞}. For every infinite set X, the class Ł_X is partition regular. This class plays an essential role in ensuring that a set belongs to all partition regular subclasses. Indeed, ifis a partition regular subclass of Ł_X, then X ∈, as otherwise, since X, X forms a 2-cover of ω, we would have X∈⊆Ł_X, contradiction. 
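As a quick sanity check, the partition regularity of Ł_X claimed above is nothing but the infinite pigeonhole principle in disguise: the class is non-empty since ω∈Ł_X, it is clearly closed under supersets, and if E ∈Ł_X and Y_0, …, Y_k-1 is a k-cover of E, then the infinite set E ∩ X is covered by the finitely many sets Y_0 ∩ X, …, Y_k-1∩ X, so at least one Y_i ∩ X is infinite, that is, Y_i ∈Ł_X.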
§.§ Π^0_2 large classes By Monin and Patey <cit.>, there are no non-trivial Σ^0_2 large classes. The first interesting example of such classes are Π^0_2. Along this article, we will only be interested in F_σ classes, and more precisely intersections of Σ^0_1() classes, for some Scott set  (recall that a formula is Σ^0_1() if it is Σ^0_1(P) for some P ∈). Fix a Scott setencoded by a set M ⊆ω, i.e., M = ⊕_n ∈ω X_n and = { X_n : n ∈ω}. One can code such classes by sets C ⊆ω^2 as follows: Fix a set P. For every e ∈ω, let ^P_e = { Z ∈ 2^ω : ∃σ∈ W^P_e : σ⊆ Z }. For every C ⊆ω^2, let ^_C = ⋂_(e,i) ∈ C^X_i_e. The following lemma is a core lemma in the analysis of the definitional complexity of the statement ^_C is large, thanks to <Ref>.lemma]largenesssentencecomp Let 𝒜 be a Σ_1^0 class. The sentence 𝒜 is large is Π_2^0. By <Ref>, a class ^_C is large iff ^_F is large for every finite set F ⊆ C. The class ^_F is Σ^0_1() uniformly in F, hence by a relativization of <Ref>, the statement ^_F is large is Π^0_2() uniformly in F. The overall statement ^_C is large is therefore Π^0_1(C ⊕ M'), where M is the set coding .The following lemma shows that instead of working with large classes of the form ^_C, one can work with partition regular classes without extra definitional complexity.lemma]lem:compute-luc For every set C ⊆ω^2, there exists D ≤_T C such that ^_D = Ł(^_C).§.§ -minimal and -cohesive classes section]sect:minimal-cohesive-classesWhen two classesandare large, their intersection ∩ is not necessarily large. For example, letting X be a bi-infinite set, both the classes Ł_X and Ł_X are large, but their intersection is not, as witnessed by the 2-cover X, X of ω. During the forcing construction, one will consider multiple properties to be forced, and therefore will need to ensure that not only the corresponding classes are large, but so are their intersection. One natural approach consists in creating a large class which will be minimal for inclusion, in the following sense: A classis -minimal if for every X ∈ and e ∈ω, either ⊆_e^X or ∩_e^X is not large. Then, in order to decide whether two Σ^0_1() propertiesandare large, one can ask independently whether ∩ and ∩ are large. If both are, then by -minimality of , ⊆ and ⊆, hence ∩ is large as well. Eventhough the notion of minimality was defined for largeness, partition regularity comes for free for an -minimal large class.lemma]lem:minimal-partition-regular Every -minimal large class _C^ is partition regular. There exists -minimal large classes of the form ^_C. However, the index set C is computationally too complex, as it is only M”-computable. Indeed, in order to create the set C by finite approximations C_0 ⊆ C_1 ⊆…, one needs to successively ask whether ^_C_s∩_e^X is large, which is a Π^0_2() question. Thankfully, one can consider a weaker notion with better computational properties, which still satisfies the compatibility requirements. A classis -cohesive if for every X ∈, either ⊆Ł_X or ⊆Ł_X. Every -minimal class is -cohesive. Moreover, one can compute the index set C of an -cohesive class ^_C in any PA degree over M'. Indeed, instead of deciding whether ^_C_s∩Ł_X is large or not, one needs to pick a true statement among ^_C_s∩Ł_X is large and ^_C_s∩Ł_X is large. Choosing a true Π^0_2(M) sentence among two such sentences, known that one of them is true, can be computed by any PA degree over M' (see Cholak, Jockusch and Slaman <cit.> for a proof). lemma]lem:cohesive-compatibility Let _C^ be an -cohesive class. 
Let _D^ and_E^ be such that _C^∩_D^ and_C^∩_E^ are both large. Then so is _C^∩_D^∩_E^. In general, given a large class ^_C, the index set C can be completed into an index set D ⊇ C in multiple ways to form an -minimal large subclass ^_D, depending on the order in which the questions are asked. However, whenever _C^ is -cohesive, then by <Ref>, the order of the questions does not matter, thus it contains a unique -minimal subclass, which can be explicitly defined as follows:Given an large class , the collection of sets ⟨⟩ := ⋂_e ∈ω, X ∈{_e^X : ∩_e^Xis a large }is an -minimal large class contained in .§.§ The (_C_n^_n)_n ∈ω sequence section]sect:uc-sequenceMonin and Patey <cit.> defined an infinite hierarchy of Scott sets together with a decreasing sequence of minimal classes for these Scott sets, playing a central role in the definition of the notion of forcing. The nth level of this hierarchy is responsible for having a good (n+1)st jump control.The following first proposition is an easy consequence of the uniform low basis theorem for Π^0_1 classes, due to Lawton (see Hirschfeldt and al. <cit.>):proposition]mn-seq There exists a sequence of sets M such that : * M_n codes for a countable Scott set _n,* ∅^(n) is uniformly coded by an element of _n,* Each M_n' is uniformly computable in ∅^(n+1).Then, the next proposition follows from our remark on the complexity of the constructionof an -cohesive large class. Indeed, since _n+1 contains M'_n and _n+1 is a Scott set, then it contains a PA degre over M'_n. Moreover, this construction is uniform.proposition]cn-seq There exists a sequence of sets C such that : *is an _n-cohesive large class,* _C_n+1^_n+1⊆⟨⟩,* Each C_n is coded by an element of _n+1 uniformly in n and M_n+1. § FORCING FRAMEWORK section]sect:forcing-frameworkWe now develop the general framework for iterated jump control of solutions to the Erdős-Moser theorem through forcing. This framework will be applied in sections <ref>, <ref> and <ref> to prove our three main theorems. §.§ Forcing conditions In order to obtain a layerwise version of strong cone avoidance of  for Σ^0_n operators, our notion of forcing will be parameterized by a partition regular class .Assuming ⊆⟨⟩, this notion of forcing will have a good (n+1)st jump control. All along <Ref>, one should think of  as the partition regular class ⋂_n ∈ω⟨⟩.definition]def:condition Given a partition regular class ⊆ 2^ω, let _ denote the set of all 3-tuples(R⃗,σ,X) such that *  is a finite sequence of tournaments,* X ∩{0, …, |σ| } = ∅, * X ∈.* for all y ∈ X, σ∪{y} is -transitive.* X is included in a minimal R⃗-interval of σIn other words, _ is the set of all 3-tuples (R⃗, σ, X) such that (σ, X) forms an EM condition for each tournament R ∈R⃗, and such that X ∈. Note that no effectiveness restriction is given on the reservoir X. Given a tournament T, in order to produce an infinite T-transitive subset, one will work with sufficiently generic filters containing the condition (T, ∅, ω). From now on, fix a partition regular class . We define the partial order over _ as following : we say that (τ,Y, S⃗) ≤ (σ,X, R⃗) if σ≼τ, Y ⊆ X, τ∖σ⊆ X, and R⃗⊆S⃗. Here again, the extension relation (τ,Y, S⃗) ≤ (σ,X, R⃗) is the usualMathias extension (τ, Y) ≤ (σ, X), but in addition, one commits to be transitive for more and more tournaments simultaneously. Given a collection ⊆_, we let G_ := ⋃_(R⃗,σ,X) ∈σ.lemma]lem:gf-transitive Letbe a _-filter. For all c := ( R⃗,σ,X), the set G_ is R⃗-transitive.Suppose otherwise. 
Then, there exists x < y < z ∈ G_ a R⃗-cycle. By definition of G_, there exists d :=( R⃗S⃗, τ, Y) ∈, such that d ≤ c and { x,y,z }⊆τ. Since τ is R⃗S⃗-transitive, and thus R⃗-transitive, (x,y,z) cannot be a R⃗-cycle.§.§ Forcing question In this section, we design a forcing question as explained in <Ref>. The general idea is the following: given a Σ^0_1 formula (∃ x )φ(G, x) and a condition (R⃗, σ, X), one would like to ask whether there exists some x ∈ω and a finite set τ⊆ X satisfying some good combinatorial properties, such that φ(σ∪τ, x) holds. This naive question is Σ^0_1(R⃗⊕ X). As explained in <Ref>, one can get rid of the parameter R⃗ by universally quantifying over all m-tuples of tournaments, where m = |R⃗|. By compactness, the question becomes Σ^0_1(X), which is not enough, since X can be computationally very complex. The same overapproximation trick cannot be applied for X, since the class of all infinite sets is closed, but not compact. One must therefore use a second trick: consider the class Ł of all reservoirs Y such that this property holds, that is, of all Y such that there exists some x ∈ω and a finite set τ⊆ X satisfying some good combinatorial properties, such that φ(σ∪τ, x) holds. Then, ask whether Ł∩^_0_C_0 is large. If it does, then by _0-minimality of , ⊆Ł, and since X ∈, the property holds for X in particular.Before giving the actual definition of the forcing question, let us introduce a very convenient piece of notation. As explained, given a condition (R⃗, σ, X), since no effectiveness constraint is imposed on R⃗, one will often resort to an over-approximation of R⃗. This over-approximation  has two essential properties: (1) it must contain R⃗, and (2) it must be X-effectively compact. One can exploit the Π^0_1 constraints in the definition of a condition to obtain a finer over-approximation: X is included in a minimal interval of σ and σ∪{y} is R⃗-transitive for every y ∈ X. This yields the following definition: Let (σ, X) be a Mathias condition. For every m ∈ω, _m(σ, X) is the class of all m-tuples R⃗ of tournaments such that for all y ∈ X, σ∪{y} is -transitive, and such that X is included in a minimal R⃗-interval of σ.In particular, for every condition (R⃗, σ, X), letting m = |R⃗|, R⃗∈_m(σ, X). Thus, the class _m(σ, X) is an over-approximation of R⃗. On the other direction, if (σ, X) is a Mathias condition such that X ∈, then for all m ∈ω and for all R⃗∈_m(σ,X), the 3-tuple (R⃗, σ, X) is a condition in _. The following lemma shows that the over-approximation _m(σ, X) is X-effectively compact, as desired.lemma]lem:approx-pi01 For every Mathias condition (σ, X) and every m ∈ω, _m(σ, X) is Π^0_1(X). Immediate. Both properties are Π^0_1 formulas in X. The following combinatorial lemma is essentially a reformulation of <Ref>. If one furthermore assumes that Y ∈, then (R⃗, σ∪τ, Y) is a valid _-condition.lemma]lem:extension Let (R⃗,σ,X) be a condition, τ⊆ X be a finite R⃗-transitive set and Y ⊆ X be an infinite set such that for every R ∈R⃗, τ→_R Y or Y →_R τ. Then, R⃗∈_|R⃗|(σ∪τ, Y). Since (R⃗,σ, X) is a condition, X is included in a minimal interval of σ for all R ∈R⃗. Furthermore, since τ∪ Y ⊆ X, and τ→_R Y or Y →_R τ, then Y is included in a minimal interval of σ∪τ for all R ∈R⃗.Let y ∈ Y. 
Suppose for the contradiction that there exists a 3-cycle x < y < z in σ∪τ∪{y}.* If x, y ∈σ then since (R⃗, σ, X) is a condition, σ∪{z} is R-transitive, so {x,y,z} is not a 3-cycle.* If x ∈σ and y, z ∈τ∪{a}, then since X is included in a minimal interval of σ, x →_R {y, z} or {y, z}→_R x, hence {x,y,z} is not a 3-cycle.* If x, y, z ∈τ, then {x,y,z} is not a 3-cycle by R-transitivity of τ* If x, y ∈τ and z = a, then since τ→_R Y or Y →_R τ, then {x,y,z} is not a 3-cycle.We are now ready to define the forcing question. Since R⃗ is only accessed through an over-approximation, and X through largeness, the forcing question is parameterized only by the initial segment σ and the number m of tournaments, rather than by the condition (R⃗, σ, X). Letc := ( R⃗, σ,X) ∈_ be a condition and m ≥ |R⃗|. Consider (∃ x) ψ_e(G,x) a Σ_1^0 formula.We define therelation as follows: σ_m (∃ x) ψ_e(G,x)holds if the class of all Y ∈_C_0^_0 such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y and every S⃗∈_m(σ,Y), there is a finite τ⊆ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and some x ∈ω such that ψ_e(σ∪τ,x) is large.Inductively, for n ≥ 1, consider (∃ x) ψ_e(G,x) a Σ_n+1^0 formula. We define therelation as follows:σ_m (∃ x) ψ_e(G,x) holds if the class of all Y ∈_C_n^_n such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y and every S⃗∈_m(σ,Y), there is a finite τ⊆ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, some x ∈ω and ℓ≥ m such that σ∪τ_ℓψ_e(G, x) is large. The over-approximation of R⃗ comes at no extra cost from a definitional complexity viewpoint. On the other hand, over-approximating the reservoir X by a large class yields a Π_1^0(_n+1) forcing question for Σ^0_n+1 formulas, which is sufficient for arithmetic reductions, but not a layerwise cone avoidance.lemma]lem:question-below-complexity Let n ∈ω and c := ( R⃗, σ,X) ∈_. Consider (∃ x) ψ_e(G,x) a Σ_n+1^0 formula. The formula (σ_m (∃ x) ψ_e(G,x)) is Π_1^0(_n+1), for all m ∈ω. We prove this result inductively over n. The sentence is of the form Ł∩_C_n^_n is large, where Ł is the class of all Y such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y and every S⃗∈_m(σ,Y), there is a finite τ⊆ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and some x ∈ω such that* for n = 0, ψ_e(σ∪τ,x).* for n > 0, there is some ℓ≥ m such that σ∪τ_ℓψ_e(G, x). By a compactness argument, Ł is also the class of all Y such that there exists t ∈ω such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^t and every m-tupleS⃗ of tournaments over {0, … t} such that for all y ∈{0,…,t }∩ Y, σ∪{y} is S⃗-transitive, there is a finite τ⊆{0, … ,t }∩ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and some x ∈ω such that * for n = 0, ψ_e(σ∪τ,x).* for n > 0, there is some ℓ≥ m such that σ∪τ_ℓψ_e(G, x).The first item is Σ^0_1, and by induction hypothesis, the second item σ∪τψ_e(G,x) is Σ_1^0(_n), hence Ł is aΣ^0_1(_n) class.By <Ref>, the class Ł∩_C_n^_n is large if and only if for all finite set F ⊆ C_n, Ł∩_F^_n is also large. Since Ł∩_F^_n is Σ^0_1(M_n) uniformly in F, then by a relativized <Ref>, the sentence Ł∩_F^_n is large is Π_2^0(M_n) uniformly in F, hence a Π_1^0(M_n') sentence uniformly in F, and thus, by <Ref>, a Π_1^0(∅^(n+1)) sentence. This makes Ł∩_C_n^_n largeness a Π_1^0(C_n ⊕∅^(n+1)) sentence, Moreover, by <Ref> and <Ref>,(C_n ⊕∅^(n+1)) ∈_n+1, hence, the sentence Ł∩_C_n^_n is large is a Π_1^0(_n+1) sentence.We now define the forcing relation for arithmetic formulas. 
The base cases for Σ^0_1 and Π^0_1 formulas, as well as the Σ^0_n+1 case, are quite straightforward. The interesting case is for Π^0_n+1 formulas (∀ x)ψ_e(G, x): it asserts that for every x ∈ω and extension d = (R⃗S⃗, σ∪τ, Y) of c, the forcing question σ∪τ_|R⃗S⃗|ψ_e(G, x) will hold. Assuming that the forcing question meets its specifications, that is, if the forcing question holds for a formula, then there exists an extension forcing this formula, then the forcing relation for Π^0_n+1 formulas is a density statement: it asserts that for every x ∈ω, the set of conditions forcing ψ_e(G, x) is dense below c. Thus, for every sufficiently generic filter  containing c and every x ∈ω, there will be a condition d_x ∈ forcing ψ_e(G, x). Let c := (R⃗,σ,X) ∈_. Consider (∃ x) ψ_e(G,x) a Σ_1^0 formula. We define the ⊩ relation as follows: * c ⊩ (∃ x) ψ_e(G,x) if (∃ x) ψ_e(σ,x). * c ⊩ (∀ x) ψ_e(G,x) if (∀τ⊆ X)(∀ x)( τ is R⃗-transitive ψ_e(σ∪τ,x)). Then, inductively, for n ≥ 1, let (∃ x) ψ_e(G,x) be a Σ_n+1^0 formula. Then, * c ⊩ (∃ x) ψ_e(G,x) if (∃ x)(c ⊩ψ_e(G,x)).* c ⊩ (∀ x) ψ_e(G,x) if (∀τ⊆ X)(∀ x)(∀ℓ≥ |R⃗| )(τ is R⃗-transitiveσ∪τ_ℓψ_e(G,x)). remark]rem:monotonicity-sigma01 Every Σ^0_1(G) formula ψ(G) can be expressed without loss of generality of the form Φ^G_e(0)↓. By the use property, the notion of Turing functional can be extended to finite length oracles, which induces an extension of the formula ψ(G) to finite strings, such that ψ(G) holds iff ψ(G _k) holds for some k ∈ω. Moreover, the formula ψ(σ) can be chosen so that if ψ(σ) holds, then so does ψ(τ) for every τ≽σ. Throughout this article, we will always assume that Σ^0_1 formulas are in this normal form. The following lemma shows that the forcing relation is stable under condition extension.lemma]lem:forcing-relation-closure Fix n ≥ 0. Let d, c ∈_ be such that d ≤ c, and let (∃ x) ψ_e(G,x) be a Σ_n+1^0 formula.(1) If c ⊩ (∃ x) ψ_e(G,x) then d ⊩ (∃ x) ψ_e(G,x).(2) If c ⊩ (∀ x) ψ_e(G,x) then d ⊩ (∀ x) ψ_e(G,x). Let c := (R⃗,σ, X) and d := (R⃗S⃗, σ∪τ, Y).Suppose n = 0. * Ifc ⊩ (∃ x) ψ_e(G,x), then (∃ x) ψ_e(σ,x). Moreover, σ∪τ≽σ, so by monotonicity of ψ_e (see <Ref>), (∃ x) ψ_e(σ∪τ,x), hence d ⊩ (∃ x) ψ_e(G,x). * If c ⊩ (∀ x) ψ_e(G,x), then (∀ρ⊆X)(∀ x)( ρ is R⃗-transitive ψ_e(σ∪ρ,x)). Then, pick x ∈ω and ρ⊆ Y. Then, τ∪ρ⊆τ∪ Y ⊆ X. Moreover, since d is a condition, if ρ is R⃗S⃗-transitive, σ∪τ∪ρ is R⃗S⃗-transitive, and in particular, τ∪ρ is R⃗-transitive, hence ψ_e(σ∪τ∪ρ, x). This yields that (∀ρ⊆Y)(∀ x)( ρ is R⃗S⃗-transitiveψ_e(σ∪τ∪ρ,x)), i.e.,d ⊩ (∀ x) ψ_e(G,x). Inductively, suppose n > 0. * Ifc ⊩ (∃ x) ψ_e(G,x), then (∃ x) c⊩ψ_e(σ,x), hence, by induction hypothesis, (∃ x) d⊩ψ_e(σ,x), i.e., d ⊩ (∃ x) ψ_e(G,x). * If c ⊩ (∀ x) ψ_e(G,x), then (∀ρ⊆ X)(∀ x)(∀ℓ≥ |R⃗| )(ρ is R⃗-transitiveσ∪ρ_ℓψ_e(G,x)). Then, pick x ∈ω, ρ⊆ Y and ℓ≥ |R⃗S⃗|. Then, τ∪ρ⊆τ∪ Y ⊆ X. Moreover, since d is a condition, if ρ is R⃗S⃗-transitive, σ∪τ∪ρ is R⃗S⃗-transitive, and in particular, τ∪ρ is R⃗-transitive, thus, since ℓ≥ |R⃗|, σ∪ρ_ℓψ_e(G, x) holds. This yields that (∀ρ⊆Y)(∀ x)(∀ℓ≥R⃗S⃗)( ρ is R⃗S⃗-transitive σ∪τ∪ρ_ℓψ_e(G,x)), i.e.,d ⊩ (∀ x) ψ_e(G,x).We now prove the core lemma for this notion of forcing: the forcing question meets its specifications. It implies in particular the density of the set of conditions forcing a property or its complement. Until now, the only hypothesis on the class  was its partition regularity. 
Here, since we over-approximate the reservoir X by a class Ł such that Ł∩ is large, one needs to assert some compatibility between  and Ł to deduce that X ∈Ł. Since Ł will be an intersection of Σ^0_1(_n) classes, assuming _n-minimality of , that is, ⊆⟨⟩, we will have X ∈⊆Ł.lemma]lem:question-below-validityLet n ∈ω and c := (R⃗,σ,X) ∈_ such that ⊆⟨⟩. Consider (∃ x) ψ_e(G,x) a Σ_n+1^0 formula. Let m ≥ |R⃗|. * If σ_m (∃ x) ψ_e(G,x) then ∃ d ≤ c such that d ⊩ (∃ x) ψ_e(G,x). * If σ_m (∃ x) ψ_e(G,x) then ∃ d ≤ c such that d ⊩ (∀ x) ψ_e(G,x). First, suppose n =0. * Suppose σ_m (∃ x) ψ_e(G,x). Let Ł denote the class of all Y such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y and every S⃗∈_m(σ,Y), there is a finite τ⊆ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and some x ∈ω such that ψ_e(σ∪τ,x). Then, Ł∩_C_0^_0 is large. Since _C_0^_0 is _0-cohesive, ⟨_C_0^_0⟩ is _0-minimal. This yields that ⟨_C_0^_0⟩⊆Ł. Moreover, X ∈⊆⟨_C_0^_0⟩⊆Ł. By a compactness argument, this yields that there exists t ∈ω such that for every 2-colorings h⃗ = h_0, …, h_m-1∈ 2^t, there is a finite τ⊆ X ∩{0, …, t } which is R⃗-transitive and h⃗-homogeneous and some x ∈ω such thatψ_e(σ∪τ,x). Let us build a specific h⃗ such that the τ we get gives us the extension of c we look for. For every i < m, a ≤ t, and y ∈ X, y > t, let g_i,a(y) := 1 if aR_iy, and 0 otherwise (if m > i ≥ |R⃗|, then let R_i be a fixed dummy tournament). Since  is partition regular, then there is some g⃗-homogeneous set H ⊆ X in . For every i < m and a ≤ t, let h_i(a) = 1 if {a}→_R_i H, and 0 otherwise. Since X ∈Ł, there exists a finite τ⊆ X ∩{0,…,t} which is R⃗-transitive and h⃗-homogeneous and some x ∈ω such that ψ_e(σ∪τ,x) holds. Moreover, by <Ref>, R⃗∈_|R⃗|(σ∪τ, H ). This makes d := (R⃗,σ∪τ, H) a valid condition such that d ⊩ (∃ x) ψ_e(G,x).* Suppose σ_m (∃ x) ψ_e(G,x). Then, there exists s ∈ω and Y_0, … Y_s a partition of ω such that for all i ≤ s, (†) either Y_i ∉_C_0^_0 or there exists an m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y_i and an S⃗∈_m(σ,Y_i), such that for all τ⊆ Y_i ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and for all x ∈ω,ψ_e(σ∪τ,x). Since  is partition regular, then there is some i ≤ s such that X ∩ Y_i ∈. In particular, X ∩ Y_i ∈_C_0^_0, so by upward-closure of partition regularity, Y_i ∈_C_0^_0. Let h⃗ and S⃗∈_m(σ, Y) be witnesses of (†). By partition regularity of , there is a h⃗-homogeneous subset H ⊆ X ∩ Y_i in . The condition d := (R⃗S⃗, σ, H) is a valid extension of c such that d ⊩ (∀ x) ψ_e(G,x). Now, inductively, suppose n > 0.* Suppose σ_m (∃ x) ψ_e(G,x). Let Ł denote the class of all Y such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y and every S⃗∈_m(σ,Y), there is a finite τ⊆ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and some x, ℓ∈ω such that σ∪τ_ℓψ_e(G, x). Then, Ł∩_C_n^_n is large. Since _C_n^_n is _n-cohesive, ⟨_C_n^_n⟩ is _n-minimal. This yields that ⟨_C_n^_n⟩⊆Ł. Moreover, X ∈⊆⟨⟩⊆Ł. By a compactness argument, this yields that there exists t ∈ω such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^t,there is a finite τ⊆ X ∩{0, …, t } which is R⃗-transitive and h⃗-homogeneous and some x, ℓ∈ω such that σ∪τ_ℓψ_e(G, x). Let us build a specific h⃗ such that the τ we get gives us the extension of c we look for. For every i < m, a ≤ t, and y ∈ X, y > t, let g_i,a(y) := 1 if aR_iy, and 0 otherwise. By partition regularity of , there is some g⃗-homogeneous set H ⊆ X in .For every i < m and a ≤ t, let h_i(a) = 1 if {a}→_R_i H, and 0 otherwise. 
Now, there exists a finite τ⊆ X ∩{0,…,t} which is R⃗-transitive and h⃗-homogeneous and some x, ℓ∈ω such that σ∪τ_ℓψ_e(G, x). Moreover, by <Ref>, R⃗∈_m(σ∪τ, H). This makes d := (R⃗,σ∪τ, H) a valid condition such that for some ℓ≥ m, σ∪τ_ℓψ_e(G, x), hence, by induction hypothesis, there exists p ≤ d ≤ c such that p ⊩ψ_e(G, x), hence, p ⊩ (∃ x) ψ_e(G,x).* Suppose σ_m (∃ x) ψ_e(G,x). Then, there exists s ∈ω and Y_0, … Y_s a partition of ω such that for all i ≤ s, (†) either Y_i ∉_C_n^_n or there exists an m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y_i and an S⃗∈_m(σ,Y_i), such that for all τ⊆ Y_i ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and for all x ∈ω and ℓ≥ m, σ∪τ_ℓψ_e(G, x). By partition regularity of , there is some i ≤ s such that X ∩ Y_i ∈. In particular, X ∩ Y_i ∈, so by upward-closure of partition regularity, Y_i ∈. Let h⃗ and S⃗∈_m(σ, Y) be witnesses of (†). By partition regularity of , there is a h⃗-homogeneous subset H ⊆ X ∩ Y_i in .The condition d := (R⃗S⃗, σ , H) is a valid extension of c such that d ⊩ (∀ x) ψ_e(G,x).Let n ∈ω, and ⊆_ be a filter. The setis said n-generic if for all k < n, and every Σ_k+1^0 formula (∃ x)ψ_e(G,x), there exists a condition c ∈ such that c ⊩ (∃ x)ψ_e(G,x) or c ⊩ (∀ x)ψ_e(G,x) lemma]lem:n-genericity Fix n ∈ω, and letbe a sufficiently generic _-filter, where ⊆⟨^_n__n⟩. Thenis n-generic.Let k<n, and let (∃ x)ψ_e(G,x) be a Σ_k+1 formula. Let  be the collection of all conditions deciding (∃ x)ψ_e(G,x). We claim that  is dense. Let c := ( R⃗,σ,X) ∈, and m := |R⃗|. Suppose σ_m (∃ x)(ψ_e(G,x). Then, by <Ref> there exists d ≤ c such that d ⊩ (∃ x)(ψ_e(G,x). Otherwise, σ_m (∃ x)(ψ_e(G,x). Then, by <Ref> there exists d ≤ c such that d ⊩ (∀ x)(ψ_e(G,x). Either way, there is some d ≤ c in , sois dense. Sinceis sufficiently generic, ∩≠∅. lemma]lem:non-contradiction Let n ∈ω, and let c := (R⃗,σ,X) ∈_ such that if n > 0 then ⊆⟨_C_n-1^_n-1⟩. Consider (∃ x) ψ_e(G,x) a Σ_n+1^0 formula. Then, (c ⊩ (∃ x) ψ_e(G,x))(c ⊩ (∀ x) ψ_e(G,x)) never holds.We prove this inductively. Suppose otherwise. * First, suppose n=0. Then, (∃ x)ψ_e(σ,x) holds, and (∀τ⊆ X)(∀ y)( τ is R⃗-transitive ψ_e(σ∪τ,y)). In particular, for τ = ∅, τ is R⃗-transitive, hence, ψ_e(σ,x) holds, yielding a contradiction.* Now, suppose n>0. Then, (∃ x)(c ⊩ψ_e(G,x)), and (∀τ⊆ X)(∀ y)(∀ℓ≥ |R⃗| )(τ is R⃗-transitiveσ∪τ_ℓψ_e(G,y)). In particular, for τ = ∅, and ℓ = R⃗, τ is R⃗-transitive, hence, σ_ℓψ_e(G,x)). Since ⊆⟨_C_n-1^_n-1⟩, this yields by <Ref> that there exists d ≤ c such that d ⊩ψ_e(G,x). However, by <Ref>, d ⊩ψ_e(G,x). This contradicts induction hypothesis.The following lemma is known as the forcing implies truth lemma: if a condition forces a formula, then for every sufficiently generic filter containing this condition, the formula will hold.lemma]lem:forcing-implies-truth Let n ∈ω, and ⊆_ be a filter such that if n > 0 thenis (n-1)-generic and ⊆⟨_C_n-1^_n-1⟩. Let (∃ x)ψ_e(G,x) be a Σ_n+1^0 formula. Let c ∈.* If c ⊩ (∃ x)ψ_e(G,x), then (∃ x)ψ_e(G_,x) holds.* If c ⊩ (∀ x) ψ_e(G,x), then (∀ x) ψ_e(G_,x) holds.Suppose n=0. * If c ⊩ (∃ x)ψ_e(G,x), then (∃ x)ψ_e(σ,X), and since G_≽σ, (∃ x)ψ_e(G_,X).* If c ⊩ (∀ x) ψ_e(G,x), then (∀τ⊆ X)(∀ x)( τ is R⃗-transitive ψ_e(σ∪τ,x)). Suppose for the contradiction that (∃ x) ψ_e(G_,x). Then, by<Ref>, there exists τ⊆ G_⊆ X such that ψ_e(σ∪τ,x). However, by <Ref>, G_ is R⃗-transitive, hence τ is R⃗-transitive, contradicting hypothesis. Inductively, suppose n > 0. * If c ⊩ (∃ x)ψ_e(G,x), then c ⊩ψ_e(σ,x) for some x ∈ω. By induction hypothesis, (∃ x) ψ_e(G_,x). 
* If c ⊩ (∀ x) ψ_e(G,x), then (∀τ⊆ X)(∀ x)(∀ℓ≥ |R⃗| )(τ is R⃗-transitiveσ∪τ_ℓψ_e(G,x)). Fix some x ∈ω. By (n-1)-genericity of , there exists some d ∈ such that d ⊩ψ_e(G,x) or d ⊩ψ_e(G,x).By compatibility of the conditions in a filter, and by <Ref>, we can suppose without loss of generality that d ≤ c. In particular, d := (R⃗S⃗, σ∪τ, Y). Since τ⊆ X and is R⃗-transitive and since |R⃗S⃗| ≥ |R⃗|, σ∪τ_|R⃗S⃗|ψ_e(G,x)).By <Ref>, there is some p ≤ d such that p ⊩ψ_e(G,x)). Since p ≤ d and ⊆⟨_C_n-1^_n-1⟩, then by <Ref>, d ⊩ψ_e(G,x). By induction hypothesis, ψ_e(G_, x) holds, and this, for every x, so (∀ x) ψ_e(G_, x).lemma]lem:1-generic-infinite Let ⊆_ be a 1-generic filter. Then G_ is infinite. By 1-genericity of , for all x, there exists c := (R⃗, σ, X) ∈ such that c ⊩ (∃ y > x)( y ∈ G) or c ⊩ (∀ y > x) (y ∉G). Suppose the latter holds for some x.Unfolding the definition of the forcing relation for Π^0_1 formulas, (∀τ⊆ X)(∀ y)(τR⃗ y ≤ x ∨ y ∉σ∪τ). Since X is infinite, there is some y ∈ X such that y > x. Letting τ = {y}, τ is R⃗-transitive, y > x and y ∈σ∪{y}. Contradiction. § STRONG CONE AVOIDANCE FOR ARITHMETIC REDUCTIONS section]sect:strong-avoidance-arithWe now use the framework developed in <Ref> to prove thatadmits strong cone avoidance for arithmetic reductions. In other words, the goal of this section is to prove our first main theorem:thm:arithmetic-main If B is not arithmetic, then for every tournament T, there is an infinite transitive subtournament H such that B is not H-arithmetic. Recall that in <Ref>, we stated the existence of an infinite sequence of Scott sets _0 ⊆_1 ⊆… respectively coded by some sets M_0, M_1, …, together with a sequence of sets C_0, C_1, … such that ∅^(n+1)⊕ C_n ∈_n+1, M'_n ≤_T ∅^(n+1) and ⟨_C_0^_0⟩⊇⟨_C_1^_1⟩⊇…are partition regular classes. By <Ref>, the class _ω = ⋂_n ⟨_C_n^_n⟩ is again partition regular.definition]def:omega-condition Let _ω = __ω. Since _ω = ⋂_n ⟨_C_n^_n⟩, then all the hypothesis of the form ⊆⟨_C_n^_n⟩ hold in <Ref>. By <Ref>, the forcing question to decide Σ^0_n(G) formulas is Π^0_1(_n), hence the definitional complexity of the forcing question is not at the same level in the hierarchy as the formula we force. Thankfully, in the case of arithmetic reductions, this difference is not relevant. Indeed, all the sets in _n are arithmetic, so if a set B is not arithmetic, it is in particular not Π^0_2(_n) for any n.lemma]lem:arith-diag Suppose B is not arithmetic.Letbe a sufficiently generic _ω-filter. Then for every n ∈ω and every Σ^0_n formula φ(G, x), there exists d ∈ such that (∃ x ∉ B)(d ⊩φ(G, x)) (∃ x ∈ B) (d ⊩φ(G, x)). Fix some c = (R⃗, σ, X) ∈, and let φ(G, x) be a Σ^0_n formula for some n > 0. Say m = |R⃗|. Let W = { x : σ_m φ(G, x) }. By <ref>, the set W is Π^0_n(_n). Since B is not arithmetic, W ≠ B. Let x ∈ W Δ B = (W ∖ B) ∪ ( B ∖ W). One of the two cases holds: * x ∈ W ∖ B, then, by <ref>, there exists a condition d ≤ c such that d ⊩φ(G, x).* x ∈ B ∖ W, then, by <ref>, there exists a condition d ≤ c such that d ⊩φ(G, x).In both cases, by genericity of , there is such a d in . We are now ready to prove our first main theorem. Fix a non-arithmetic set B, a tournament T, and let  be a sufficiently generic _ω-filter containing the condition (T, ∅, ω). By <Ref>,is n-generic for every n ∈ω. By <Ref>, G_ is T-transitive, and by <Ref>, G_ is infinite.We claim that B is not G_-arithmetic: Fix a Σ^0_n formula φ(G, x). 
By <Ref>, there exists some c ∈ such that (∃ x ∉ B)(c ⊩φ(G, x)) (∃ x ∈ B) (c ⊩φ(G, x)).By <Ref>, (∃ x ∉ B)φ(G_, x) (∃ x ∈ B) φ(G_, x), hence B is not G_-arithmetic. § LAYERWISE STRONG CONE AVOIDANCE section]sect:layerwise-avoidanceIn this section, we are going to twist the previous notion of forcing to obtain a layerwise version of <Ref>. More precisely, the goal of this section is to prove the following theorem:thm:layerwise-main Fix n ≥ 1. If B is not Σ^0_n, then for every tournament T, there is an infinite transitive subtournament H such that B is not Σ^0_n(H). As explained in <Ref>, the proof of such theorems is closely related to the existence of a uniformly Σ^0_n-preserving forcing question, that is, a forcing question for Σ^0_n(G) formulas which is Σ^0_n uniformly in its parameters.Unfortunately, by <Ref>, forcing a Σ^0_n(G) formula is Π^0_1(_n), which is not the desired definitional complexity. We are going to use the same trick as Monin and Patey <cit.> and define a twisted notion of forcing on the top, that is, leaving the lower levels unchanged, we are going to replace the Scott set _n by another Scott set _n with more suited properties, and define a different forcing question on the top level.Intuitively, the bad complexity of the forcing question comes from the fact that, since there is no effectiveness restriction on the reservoir X, the only way to decide properties is to check whether the class of sets satisfying this property is large. Largeness of a Σ^0_n property is Π^0_n+1. Therefore, at the top level, the forcing question will have to directly involve the reservoir. The counterpart is that the forcing conditions will need to impose effectiveness restrictions on the reservoirs, which will raise a few technical difficulties. §.§ Top Scott set Suppose B is a non-Σ^0_n+1 set. The forcing question on the top for Σ^0_n+1 formulas will be Σ^0_1(_n), so for <Ref> about diagonalization to work, one needs B not to be Σ^0_1(_n). We will therefore replace the Scott set _n with another Scott set _n coded by a set N_n with the following two properties: * ∅^(n) is coded by an element of _n ;* B is not Σ^0_1(_n) The second fact replaces the previous assumption that N_n' is computable in ∅^(n+1). The existence of such a Scott set follows from the following proposition by Wang <cit.>:proposition]prop:wang Let Z, B such that B is not Σ^0_1(Z). For every Z-computable tree T ⊆ 2^ <, there exists an infinite path P ∈ [T] such that B is not Σ^0_1(Z ⊕ P). However, the two properties above are not sufficient to define _n. Indeed, there are still no effectiveness restrictions on the initial tournament T, and in particular, T cannot be assumed to be in _n, but in the proof of <Ref>, one needs to split the reservoir X based on a Z ⊕ T-computable finite partition, for some Z ∈_n. The resulting reservoir must still belong to _n and to the partition regular class . The Scott set _n must therefore enjoy the following property: * For every X ∈_n ∩, every Z ∈_n and every T ⊕ Z-computable set A, there exists an infinite set Y ⊆ X ∩ A or Y ⊆ X ∩A such that Y ∈_n ∩. For this, we will prove the following proposition, which is an adaptation of an alternative proof by Hirschfeldt <cit.> of a theorem by Dzhafarov and Jockusch <cit.>. The statement of the proposition can be found in Monin and Patey <cit.> and <cit.>, but without the assumption that H ∈^Z_C. 
We therefore give a direct proof of it for the sake of simplicity.proposition]prop:dzhafarov-jockusch Let Z, B such that B is not Σ^0_1(Z).Let ^Z_C be a partition regular class. For every set A, there exists an infinite subset H ⊆ A or H ⊆A such that B is not Σ^0_1(Z ⊕ H) and H ∈^Z_C. Fix Z, B, ^Z_C and A. Say A_0 = A and A_1 = A. We are going to build two sets G_0, G_1 by a variant of Mathias forcing whose conditions are tuples of the form (σ_0, σ_1, X), where * (σ_i, X) is a Mathias condition with σ_i ⊆ A_i* X ∈^Z_C and B is not Σ^0_1(X ⊕ Z)Consider the following two kind of requirements, for every e ∈ω and k ∈ C:^G_eW_e^G ⊕ Z≠ B ^G_kG ∈^Z_kWe are going to construct G_0, G_1 such that they satisfy the following requirements for every e_0, e_1 ∈ω and k_0, k_1 ∈ C: ^G_0_e_0∨^G_1_e_1^G_0_k_0∨^G_1_k_1^G_0_e_0∨^G_1_k_1^G_0_k_0∨^G_1_e_1 By a pairing argument, then there is some i < 2 such that G_i ∈^Z_C and B is not Σ^0_1(G_i ⊕ Z). We will need the following three lemmas. The fourth case follows by symmetry.lemma]lem:prop-dj-rr Let c be a condition and e_0, e_1 ∈ω. There is an extension forcing ^G_0_e_0∨^G_1_e_1. Say c = (σ_0, σ_1, X). Let W be the Σ^0_1(X ⊕ Z) set of all x ∈ω such that for every 2-partition R_0 ⊔ R_1 = X, there is some i < 2 and some ρ⊆ R_i such that x ∈ W_e_i^(σ_i ∪ρ) ⊕ Z. Since W is Σ^0_1(X ⊕ Z) while B is not, there is some x ∈ W Δ B = (W ∖ B) ∪ (B ∖ W).* If x ∈ W ∖ B, then, letting R_i = X ∩ A_i, there is some i < 2 and some ρ⊆ X ∩ A_i such that x ∈ W_e_i^(σ_i ∪ρ) ⊕ Z. The condition (σ_i ∪ρ, σ_1-i, X ∖{0, …, maxρ}) is an extension of c forcing ^G_i_e_i.* If x ∈ B ∖ W, then, let  be the Π^0_1(X ⊕ Z) class of all R_0 ⊕ R_1 such that R_0 ⊔ R_1 = X, for every i < 2 and every ρ⊆ R_i, x ∉W_e_i^(σ_i ∪ρ) ⊕ Z. The class  is non-empty, so by <Ref>, there is some R_0 ⊕ R_1 ∈ such that B is not Σ^0_1(R_0 ⊕ R_1 ⊕ X ⊕ Z). By partition regularity of ^Z_C, there is some i < 2 such that R_i ∈^Z_C. The condition (σ_0, σ_1, R_i) is an extension of c forcing ^G_i_e_i.lemma]lem:prop-dj-ss Let c be a condition and k_0, k_1 ∈ C. There is an extension forcing ^G_0_k_0∨^G_1_k_1. Say c = (σ_0, σ_1, X). Since ^Z_C is partition regular and X ∈^Z_C, there is some i < 2 such that X ∩ A_i ∈^Z_C. In particular, X ∩ A_i ∈^Z_k_i. Let ρ⊆ X ∩ A_i be such that ρ∈^Z_k_i. Then the condition (σ_i ∪ρ, σ_1-i, X ∖{0, …, maxρ}) is an extension of c forcing ^G_i_k_i. lemma]lem:prop-dj-rs Let c be a condition, e_0 ∈ω and k_1 ∈ C. There is an extension forcing ^G_0_e_0∨^G_1_k_1. Say c = (σ_0, σ_1, X). Let W be the Σ^0_1(X ⊕ Z) set of all x ∈ω such that for every 2-partition R_0 ⊔ R_1 = X, either there is some ρ⊆ R_0 such that x ∈ W_e_0^(σ_0 ∪ρ) ⊕ Z, or there is some ρ⊆ R_1 such that ρ∈^Z_k_1. Since W is Σ^0_1(X ⊕ Z) while B is not, there is some x ∈ W Δ B = (W ∖ B) ∪ (B ∖ W).* If x ∈ W ∖ B, then, letting R_i = X ∩ A_i, either there is some ρ⊆ X ∩ A_0 such that x ∈ W_e_0^(σ_0 ∪ρ) ⊕ Z, or some ρ⊆ X ∩ A_1 such that ρ∈^Z_k_1. The condition (σ_i ∪ρ, σ_1-i, X ∖{0, …, maxρ}) is an extension of c forcing ^G_0_e_0 in the first case, and ^G_1_k_1 in the second case.* If x ∈ B ∖ W, then, let  be the Π^0_1(X ⊕ Z) class of all R_0 ⊕ R_1 such that for every ρ⊆ R_0, x ∉W_e_0^(σ_i ∪ρ) ⊕ Z, and R_1 ∉^Z_k_1. The class  is non-empty, so by <Ref>, there is some R_0 ⊕ R_1 ∈ such that B is not Σ^0_1(R_0 ⊕ R_1 ⊕ X ⊕ Z). By partition regularity of ^Z_C, there is some i < 2 such that R_i ∈^Z_C, and by choice of , i = 0. The condition (σ_0, σ_1, R_0) is an extension of c forcing ^G_0_e_0.We are now ready to prove <Ref>. Fix Z, B, ^Z_C and A. 
Say A_0 = A and A_1 = A. Let  be a sufficiently generic filter for this notion of forcing. For every i < 2, let G_i = ⋃_(σ_0, σ_1, X) ∈σ_i. By definition of a condition, G_0 ⊆ A and G_1 ⊆A. By <Ref>, <Ref>, <Ref> and its symmetric version, there is some i < 2 such that G_i ∈^Z_C and B is not Σ^0_1(G_i ⊕ Z). In particular, assuming ^Z_C contains only infinite sets, G_i is infinite. This completes the proof of <Ref>. Thanks to <Ref> and <Ref>, one can prove the existence of a Scott set _n satisfying the properties mentioned above:proposition]prop:top-extension Fix n > 0. Let B be a non-Σ^0_n+1 set and T be a tournament.There exists a Scott set _n such that* ∅^(n)∈_n ; B is not Σ^0_1(_n) ;* for every X ∈_n ∩⟨_C_n-1^_n-1⟩, every Z ∈_n and every T ⊕ Z-computable set A, there exists an infinite set Y ⊆ X ∩ A or Y ⊆ X ∩A such that Y ∈_n ∩⟨_C_n-1^_n-1⟩.By <Ref> and <Ref>, there exists an infinite sequence Z_0, Z_1, … such that Z_0 = ∅^(n), and for every s ∈ω: (1) For every Z_0 ⊕…⊕ Z_s-computable infinite binary tree T, there is some t such that Z_t ∈ [T] ;(2) For every Z_0 ⊕…⊕ Z_s-computable infinite set X ∈⟨_C_n-1^_n-1⟩ and every T ⊕ Z_0 ⊕…⊕ Z_s-computable set A, there exists some t such that Z_t ⊆ X ∩ A or Z_t ⊆ X ∩A and Z_t ∈_n ∩⟨_C_n-1^_n-1⟩.(3) B is not Σ^0_1(Z_0 ⊕…⊕ Z_s)Let _n = { X : (∃ s) X ≤_T Z_0 ⊕…⊕ Z_s }. By construction, _n is a Turing ideal containing ∅^(n). Moreover, by (1), _n, by (2), _n satisfies the second item of the lemma, and by (3), B is not Σ^0_1(_n). As explained in <Ref>, given a Scott set  coded by a set M, one can compute the index set C of an -cohesive class ^_C in any PA degree over M' (see Monin and Patey <cit.> for a full proof).Since M_n-1' ≤_T ∅^(n) and ∅^(n)∈_n which is a Scott set, then one can find the index set C_n-1 of an _n-1-cohesive large class ^_n-1_C_n-1 in _n. §.§ Top forcing conditions The notion of forcing for layerwise cone avoidance resembles the previous notion of forcing, with a few distinctive features.* First, since one needs to control only Σ^0_n+1(G) properties, the partition generic class  will only need to be included in a minimal class of a finite level of the hierarchy of Scott sets. We will actually choose = ⟨_C_n-1^_n-1⟩.* Second, since the forcing question on the top will depend on the reservoir X, one must require that X ∈_n, in order to obtain a Σ^0_1(_n) forcing question. Since  B is not Σ^0_1(_n), the diagonalization lemma will hold.* Last, as explained above, since the proof of <Ref> splits the reservoir based on 2-partitions computable in the tournaments, one must require that _n is closed under this operation. By construction of _n, the closure is ensured for the original tournament T. However, in the proof of <Ref>, new tournaments will be added to the condition. Thankfully, all the new tournaments can be chosen as members of Π^0_1(_n) classes, and since _n is a Scott set, one can require that R⃗∈_n. The new notion of condition will therefore distinguish between the original tournament T which can be of arbitrary strength, and the new tournaments R⃗ added along the construction of a generic filter, and which will belong to _n.definition]def:top-condition Fix a tournament T. Let n > 0. Let _n denote the set of all 3-tuples (R⃗,σ,X) such that *  is a finite sequence of tournaments,* X ∩{0, …, |σ| } = ∅, * X ∈⟨_C_n-1^_n-1⟩,* R⃗, X ∈_n,* for all y ∈ X, σ∪{y} is -transitive and T-transitive,* X is included in a minimal R⃗-interval and T-interval of σ. 
In other words, _n is the set of all _-conditions ((T,R⃗), σ, X) for = ⟨_C_n-1^_n-1⟩ such that R⃗, X ∈_n.As an element of _, a _n condition inherits the definitions of the forcing relation and the forcing question. The Scott set _n has been designed so that the proof of <Ref> still holds while ensuring that R⃗ and X belong to _n. §.§ Top forcing question We now define a forcing question which is very similar to <Ref>. Fix n > 0. Let c = (R⃗, σ, X) ∈_n and let (∃ x)ψ_e(G, x) be a Σ^0_n+1 formula. Say m := |R⃗|+1. Define the relation c(∃ x)ψ_e(G, x) to hold if for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^X and every S⃗∈_m(σ,X), there is an h⃗-homogeneous and S⃗-transitive τ⊆ X and some x ∈ω such that σ∪τ_m ψ_e(G,x). lemma]lem:question-complexityFix n > 0. Let c = (R⃗, σ, X) ∈_n and let (∃ x)ψ_e(G, x) be a Σ^0_n+1 formula. The sentence (c(∃ x)ψ_e(G, x)) is Σ^0_1(_n).Let m := |R⃗|+1. By a compactness argument, (c(∃ x)ψ_e(G, x)) holds if there exists t ∈ω such that for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^t and every every m-tuple of tournaments S⃗ over {0, …, t} such that S⃗∈_m(σ,X ∩{0, …, t}), there is an h⃗-homogeneous andS⃗-transitive τ⊆ X ∩{0, … ,t } and some x ∈ω such that σ∪τ_m ψ_e(G,x). The formula ψ_e(G,x) is Σ^0_n, so by <ref>, the formula(σ∪τ_m ψ_e(G,x)) is Π^0_1(C_n-1⊕∅^(n)) uniformly in its parameters, hence (σ∪τ_m ψ_e(G,x)) is Σ^0_1(_n). This yields the expected result since _n ⊆_n.The following lemma states that the forcing question on the top meets its specifications, that is, based on its answer, there is an extension forcing the property or its complement. lemma]lem:question-validityLet n ∈ω and c := (R⃗,σ,X) ∈_n. Consider (∃ x) ψ_e(G,x) a Σ_n+1^0 formula. * If c(∃ x) ψ_e(G,x) then ∃ d ≤ c such that d ⊩ (∃ x) ψ_e(G,x). * If c(∃ x) ψ_e(G,x) then ∃ d ≤ c such that d ⊩ (∀ x) ψ_e(G,x). Let m = |R⃗|+1. For simplicity of notation, let R_m-1 = T. First, suppose c(∃ x) ψ_e(G,x). Then, in particular, for every m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^X, there exists an h⃗-homogeneous, R⃗-transitive and T-transitive τ⊆ X and some x ∈ω such that σ∪τψ_e(G,x). By a compactness argument, there exists t ∈ω such that we can restrict the considered set of m-tuples of 2-colorings of X to 2-colorings of {0,…, t}. We again build the same m-tuple of 2-colorings as follows: for every i < m, a ≤ t, and y ∈ X, y > t, let g_i,a(y) := 1 if R_i(a,y) holds, and 0 otherwise. Note that g⃗ is T ⊕R⃗⊕ X-computable, hence T ⊕ Z-computable for some Z ∈_n (since R_m-1 = T). By choice of _n, with = ⟨_C_n-1^_n-1⟩, by <Ref>, there exists H ⊆ X a g⃗-homogeneous set in _n ∩⟨_C_n-1^_n-1⟩. For every i < m and a ≤ t, let h_i(a) = 1 if {a}→_R_i H, and 0 otherwise. Now, there exists a finite τ⊆ X ∩{0,…,t} which is R⃗-transitive, T-transitive and h⃗-homogeneous and some x ∈ω such that σ∪τ_m ψ_e(G, x).Moreover, by <Ref>, (R⃗,T) ∈_m(σ∪τ, H). This makes d := (R⃗,σ∪τ, H) a valid _n condition under c such that d _m ψ_e(G, x), hence, by <Ref>, there exists p ≤ d ≤ c such that p ⊩ψ_e(G,x). Now, suppose c(∃ x) ψ_e(G,x). Then, there exists an m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^X and an m-tuple of tournaments S⃗∈_m(σ,X) such that for every h⃗-homogeneous and S⃗-transitive finite chain σ⊆ X and every x ∈ω, σ∪τ_m ψ_e(G,x). Let Ł be the class of such m-tuples of 2 colorings h⃗ and m-tuples of S⃗∈_m(σ,X). By <Ref>, _m(σ, X) is Π^0_1(X). Since X ∈_n, the class Ł is Π_1^0(_n), hence, since _n, there exists (h⃗, S⃗) ∈_n ∩Ł. 
By partition regularity of = ⟨_C_n-1^_n-1⟩, there is a h⃗⊕ X-computable h⃗-homogeneous set Y ⊆ X in . The 3-tuple d := (R⃗S⃗, σ, Y) is a valid -condition such that d ⊩ (∀ x) ψ_e(G,x). The following diagonalization lemma is a specialization of <Ref> to this notion of forcing.lemma]lem:layerwise-diag Fix n > 0. Letbe a sufficiently generic _n-filter. Then, for every Σ_n+1^0 formula φ(G,x), there exists d ∈ such that(∃ x ∉ B)(d ⊩φ(G, x)) (∃ x ∈ B) (d ⊩φ(G, x)). Fix some c = (R⃗, σ, X) ∈, and let φ(G, x) be a Σ^0_n+1 formula. Let W = { x : c(φ (G, x) }. By <ref>, the set W is Σ^0_1(_n). By construction of _n in <Ref>, B is not Σ_1^0(_n), hence, W ≠ B. Let x ∈ W Δ B = (W ∖ B) ∪ ( B ∖ W). One of the two cases holds: * x ∈ W ∖ B, then, by <ref>, there exists a condition d ≤ c such that d ⊩φ(G, x).* x ∈ B ∖ W, then, by <ref>, there exists a condition d ≤ c such that d ⊩φ(G, x).In both cases, by genericity of , there is such a d in . We are now ready to prove <Ref>. The case n = 0 is proven independently by the first author and Wang (unpublished) and is a consequence of <Ref>. Fix n>0, and fix a non-Σ_n+1^0 set B, a tournament T, and let  be a sufficiently generic _n-filter containing the condition (∅, ∅, ω) (recall that T is a parameter of the notion of forcing). By <Ref>,is (n-1)-generic. By <Ref>, G_ is T-transitive, and by <Ref>, G_ is infinite.We claim that B is not Σ^0_n+1(G_): fix a Σ^0_n+1 formula φ(G, x). By <Ref>, there exists some c ∈ such that (∃ x ∉ B)(c ⊩φ(G, x)) (∃ x ∈ B) (c ⊩φ(G, x)).By <Ref>, sinceis (n-1)-generic, (∃ x ∉ B)φ(G_, x) (∃ x ∈ B) φ(G_, x), hence B is notΣ^0_n(G_). § EFFECTIVE CONSTRUCTIONS AND LOWNESS section]sect:effective-constructionsThis last section of our article is devoted to the proof of the third main theorem:thm:effective-main Fix n ≥ 1. Every Δ^0_n tournament T has an infinite transitive subtournament of low_n+1 degree. First of all, notice that this bound is tight, in that there exists a computable tournament with no infinite Σ^0_2 transitive subtournament (seePatey <cit.>). By relativizing the argument, for every n ≥ 1, there is a Δ^0_n tournament with no infinite Σ^0_n+1 transitive subtournament, hence no infinite low_n transitive subtournament. We will actually prove the following stronger theorem:theorem]thm:effective-strong-main Fix n ≥ 1. For every set P of PA degree over ∅^(n), every Δ^0_n tournament T has an infinite transitive subtournament H such that H^(n)≤_T P. <Ref> follows from <Ref> using the low basis theorem: Fix n ≥ 1 and a Δ^0_n tournament T. By the low basis theorem relative to ∅^(n) (see Jockusch and Soare <cit.>), there is a set P of PA degree over ∅^(n) such that P' ≤_T ∅^(n+1). By <Ref>, there is an infinite T-transitive subtournament H such that H^(n)≤_T P. In particular, H^(n+1)≤_T P' ≤_T ∅^(n+1), hence H is of low_n+1 degree. The rest of this section is therefore devoted to the proof of <Ref>. The goal will be to construct, given a Δ^0_n tournament T and a set P of PA degree over ∅^(n), an infinite decreasing sequence of conditions c_0 = (T, ∅, ω) ≥ c_1 ≥… such that the induced filter = { d : (∃ n) c_n ≤ d } is n-generic. Then, the set G_ be will be an infinite T-transitive subtournament such that G_^(n)≤_T P.We will work with a notion of forcing _n which is very similar to _n, with the same distinction between the forcing question of the top and the ones at the lower levels. The main difference between _n and _n comes from two facts: * The tournament T is Δ^0_n, hence belongs to _n. 
There is therefore no need to distinguish the original tournament T from the sequence of tournaments R⃗ obtained with the question of forcing.* The resulting filter will be n-generic, but there will be no diagonalization against a fixed non-Σ^0_n+1 set B. We therefore does not require that a set B is not Σ^0_1(_n).Because of these differences, there is no need to use a different Scott set at the level n. We will therefore keep _n instead of replacing it with _n. Note that any PA degree over ∅^(n) can compute a sequence of sets M_0, M_1, …, M_n satisfying the properties of <Ref>, except that the last set M_n is not required to satisfy the last item, but simply to be P-computable.definition]def:top-condition Let n > 0. Let _n denote the set of all _-conditions (R⃗, σ, X) for = ⟨_C_n-1^_n-1⟩ such that R⃗, X ∈_n. In order to analyze the computational power needed to construct an n-generic decreasing sequence of conditions, one must fix a coding of the conditions into finite objects. Since _n = { Z_e : e ∈} is countably coded by the set M_n = ⊕_e ∈ω Z_e, every element X ∈_n can be represented by an integer e such that X = Z_e. We call such an e an M_n-index of X. Note that any set X ∈_n can be represented by infinitely many M_n-indices. Let c := (R, σ,X). A _n-index of c is a 3-tuple ⟨ e_R,σ,e_X ⟩ such that Z_e_X = X, i.e. e_X is an M_n-index of X, and such that Φ_e_R^∅^(n)(i,a,b) = R_i(a,b). In what follows, fix some n ≥ 1 and a set P of PA degree over ∅^(n) computing M_n.lemma]lem:effective-ownership-pr The statement X ∈^_n-1_C_n-1 is Π^0_1(C_n-1⊕ (X ⊕ M_n-1)') uniformly in X. In particular, if X ∈_n-1, then it is Π^0_1(C_n-1⊕ M_n-1') uniformly in an M_n-index of X.The sentence X ∈_C_n-1^_n-1 is equivalent to the formula (∀ (e,i) ∈ C_n-1)( X ∈_e^Z_i), where M_n-1 = ⊕_i ∈ω Z_i. In other words, the sentenceX ∈_C_n-1^_n-1 is equivalent to ∀ e ∀ i, (e, i) ∉C_n-1∨ (∃ρ⊆ X) ρ∈ W^Z_i_e The left-hand side of the disjunction is Δ^0_0(C_n-1), and the right-hand side is Σ^0_1(X ⊕ M_n-1), hence Δ^0_0((X ⊕ M_n-1)').The whole sentence is therefore Π_1^0(C_n-1⊕ (X ⊕ M_n-1)') uniformly in X. Fix a set Z and a sequence of pairs of Π^0_1(Z) formulas (φ_s, ψ_s)_s ∈ such that for every s, at least one of φ_s and ψ_s is true. It is well-known that any set P of PA degree relative to Z computes a set H such that for every s, if s ∈ H then φ_s is true, and if s ∉H then ψ_s is true. Combining this fact with <Ref>, we obtain the following lemma:lemma]lem:effective-pigeonhole Let X ∈_n ∩^_n-1_C_n-1 and f : X → 2 be a 2-coloring in _n. Then there is some f-homogeneous set Y ⊆ X such that Y ∈_n ∩^_n-1_C_n-1. Moreover, an M_n-index of Y can be P-uniformly from M_n-indices of X and and f. Let _X be the class of all H such that for every i, if i ∈ H then X ∩ Z_i ∈^_n-1_C_n-1, and if i ∉H, then X ∖ Z_i ∈^_n-1_C_n-1. By partition regularity of ^_n-1_C_n-1 and since X ∈^_n-1_C_n-1, then _X is non-empty. Moreover, by <Ref>, the class _X is Π^0_1(C_n-1⊕ M_n-1') uniformly in an M_n-index of X, hence is Π^0_1(_n). Thus, given an M_n-index of X, one can find an M_n-index of a tree T_X such that [T_X] = _X, and of a member H ∈ [T_X], and given an M_n-index i_f of f, one can H-decide whether X ∩ f^-1(0) or X ∩ f^-1(1) belongs to ^_n-1_C_n-1 and thus compute an M_n-index of the corresponding set.Note that almost all the operations are manipulations of codes, hence do not use the oracle P. The only place it is used is when deciding whether X ∩ f^-1(0) or X ∩ f^-1(1) belongs to ^_n-1_C_n-1. 
Indeed, it requires to evaluate the M_n-index of H into its actual set, thanks to the oracle M_n which is P-computable. The following two lemmas analyze the uniformity of the forcing questions at the lower levels and at the top level.lemma]lem:effective-question-below-validityIn <Ref>, a _n-index of an extension d can be found P-uniformly from a _n-index of c and the Σ^0_k+1 formula (∃ x)ψ_e(G, x) for k < n. Let c := (R⃗,σ,X) with _n-index ⟨ e_R, σ, e_X ⟩.* Suppose σ_m (∃ x) ψ_e(G,x). As in the proof of <Ref>, by a compactness argument, there exists t ∈ω such that for every 2-colorings h⃗ = h_0, …, h_m-1∈ 2^t, there is a finite τ⊆ X ∩{0, …, t } which is R⃗-transitive and h⃗-homogeneous and some x ∈ω such that * if k = 0, ψ_e(σ∪τ,x),* if k > 0, σ∪τ_ℓψ_e(G, x) for some ℓ≥ m. M_n-indices of the colorings g⃗ defined in <Ref> are uniformly P-computable in ⟨ e_R, σ, e_X ⟩. By <Ref>, one can P-compute uniformly in M_n-indices of g⃗ and e_X an M_n-index of a g⃗-homogeneous set H ⊆ X in _n ∩^_n-1_C_n-1.The colorings h⃗ are defined uniformly from the colors of g⃗-homogeneity of H. Thus, the finite R⃗-transitive and h⃗-homogeneous set τ⊆ X ∩{0,…,t} is found P-uniformly in ⟨ e_R, σ, e_X ⟩. One can therefore P-compute a _n-index of d := (R⃗,σ∪τ, H) uniformly from a _n-index of c.If k = 0, d is the desired extension. If k > 0, since σ∪τ_ℓψ_e(G, x), then by induction hypothesis, one can P-computably find a _n-index of an extension p ≤ d such that p ⊩ψ_e(G, x), uniformly in a _n-index of d, hence in a _n-index of c.* Suppose σ_m (∃ x) ψ_e(G,x).For every set Y, let _Y be the class of all m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^Y and S⃗∈_m(σ,Y), such that for all τ⊆ Y ∖{0, …, |σ|} which is S⃗-transitive and h⃗-homogeneous, and for all x ∈ω,* if k = 0, ψ_e(σ∪τ,x),* if k > 0, σ∪τ_ℓψ_e(G, x) for all ℓ≥ m.Note that the class _Y is Π^0_1(M_k ⊕ Y) uniformly in Y.Since σ_m (∃ x) ψ_e(G,x), there exists s ∈ω and Y_0, … Y_s a partition of ω such that for all i ≤ s, (†) either Y_i ∉_C_k^_k or _Y_i≠∅. Let Ł be the class of all such s-partitions of ω. Since the statement Y_i ∉_C_k^_k is Π^0_1(C_k ⊕ M_k') and the statement _Y_i≠∅ is Π^0_1(M_k ⊕ Y_i) uniformly in Y_i, the class Ł is Π^0_1(C_k ⊕ M_k') and in particular is Π^0_1(_n) since k < n. One can P-compute uniformly in an index of Ł an M_n-index of some (Y_0, …, Y_s) ∈Ł. By <Ref>, one can P-compute uniformly in an M_n-index of (Y_0, …, Y_s) and an M_n-index of X some i ≤ s and an M_n-index of some set H_0 ⊆ X ∩ Y_i in _n ∩^_n-1_C_n-1. In particular, Y_i ∈_C_k^_k, thus _X_i≠∅ and one can P-computably find an M_n-index of an m-tuple h⃗ and S⃗∈_m(σ, Y) in _X_i.By <Ref>, one can P-compute the M_n-index of a h⃗-homogeneous subset H ⊆ H_0 in _n ∩^_n-1_C_n-1 uniformly in ⟨ e_R, σ, e_X ⟩. Thus, a _n-index of the condition d := (R⃗S⃗, σ, H) can be P-computed uniformly in a _n-index of c and the formula (∃ x)ψ_e(G, x).lemma]lem:effective-question-validity In <Ref>, a _n-index of an extension d can be found P-uniformly from a _n-index of c and the Σ^0_n+1 formula (∃ x)ψ_e(G, x). Let m = |R⃗|+1.* First, suppose c(∃ x) ψ_e(G,x). The proof isessentially the same as the first case of <Ref>: One define colorings g⃗ and h⃗ similarly, and refine the reservoir into a g⃗-homogeneous subset thanks to <Ref>. One therefore obtains a _n-index of d := (R⃗,σ∪τ, H) uniformly from a _n-index of c, where σ∪τ_ℓψ_e(G, x).Then, by <Ref>, one can P-computably find a _n-index of an extension p ≤ d such that p ⊩ψ_e(G, x), uniformly in a _n-index of d, hence in a _n-index of c. 
* Now, suppose c(∃ x) ψ_e(G,x). Then, there exists an m-tuple of 2-colorings h⃗ = h_0, …, h_m-1∈ 2^X and an m-tuple of tournaments S⃗∈_m(σ,X) such that for every h⃗-homogeneous and S⃗-transitive finite chain σ⊆ X and every x ∈ω, σ∪τ_m ψ_e(G,x). Let Ł be the class of such m-tuples of 2 colorings h⃗ and m-tuples of S⃗∈_m(σ,X). By <Ref>, _m(σ, X) is Π^0_1(X). Since X ∈_n, the class Ł is Π_1^0(_n), hence, since P ≥_T M_n, one can P-compute M_n-indices of a pair (h⃗, S⃗) ∈_n ∩Ł. By <Ref>, one can P-compute a _n-index of a h⃗-homogeneous set Y ⊆ X in  uniformly in _n-indices of X and h⃗. The 3-tuple d := (R⃗S⃗, σ, Y) is a valid -condition such that d ⊩ (∀ x) ψ_e(G,x). We are now ready to prove <Ref>.Let n ≥ 1. Let P be a set of PA degree over ∅^(n). Let (ψ_s(G))_s > 0 be an enumeration of all Σ_k+1^0 formulas for all k ≤ n. Let us begin with a condition c_0 := (T, ∅, ω). By induction, suppose c_s-1 built, with i_s-1 a _n-index of c_s-1, Consider the Σ^0_k+1 formula ψ_s(G) , for k ≤ n. By <Ref> if k < n and by<Ref> if k = n, there exists an extension d ≤ c_s-1 such that d ⊩ψ_s(G) ord ⊩ψ_s(G). Moreover, a _n-index i_d of d is P-computable uniformly from i_s-1 and s. Let c_s := d, and i_s := i_d. The (c_s)_s ∈ω sequence is a decreasing sequence of conditions such that := { d ∈_n : (∃ s)c_s ≤ d } is n-generic. By <Ref> and <Ref>, G_ is infinite and T-transitive, and by <Ref>, G_^(n)≤_T P. plain
http://arxiv.org/abs/2310.17968v1
{ "authors": [ "Ludovic Levy Patey", "Ahmed Mimouni" ], "categories": [ "math.LO", "03B30" ], "primary_category": "math.LO", "published": "20231027083131", "title": "The weakness of the Erdős-Moser theorem under arithmetic reductions" }
Studies of the dynamics of globular clusters assume different values of bar parameters (mass, velocity, size) and analyse the results of orbit classifications over the range of the chosen values. It is also common practice to convert a spherical bulge component into a bar in order to obtain a non-axisymmetric potential from an axisymmetric one. The choice of bar parameters and the way the bar is converted from the bulge introduce systematics into the orbit classifications that we explore in the present study. We integrate orbits of 30 bulge globular clusters residing in the inner area of the Galaxy (R ≲ 5 kpc) backwards in time for three different potentials, two of which are obtained by fitting the rotation curve, and one is taken from the surrogate N-body model representing our Galaxy. We analyse each orbit in terms of dominant frequencies obtained from its coordinate spectra. We find that the bar pattern speed is a key factor in orbital classification. As it increases, the frequencies deviate more and more from the “bar” frequency ratio 2:1. The bar-to-bulge mass ratio (assuming the total mass of the bar plus the bulge is fixed) and the size of the bar play a smaller role. We also find that, in the N-body potential, the fraction of orbits that follow the bar is higher than in the potentials obtained from fitting the rotation curve. (Galaxy:) globular clusters: general – Galaxy: kinematics and dynamics – Galaxy: bulge § INTRODUCTION Several physical components co-exist within the area of about 5 kiloparsecs from the centre of our Galaxy. These components are a bar, its vertically thick part, which is usually referred to as the boxy/peanut-shaped (B/P) bulge <cit.>, and possibly another bulge, commonly referred to as the classical one. The existence of the latter has come into question in the past few years due to various indicators pointing out that bulge stars exhibit cylindrical rotation <cit.>, i.e. support the B/P bulge rather than the classical one, although there are some exceptions (, also see the review by ). We do not discuss here the innermost subsystems, such as the nuclear disc and the nuclear star cluster <cit.>, since they are not relevant to the present work and are important on much smaller spatial scales than those considered here. Globular clusters (GCs) are tracers of the secular evolution of the bar and bulge components, since GCs include a large bulk of stars whose metallicity and stellar populations reflect how these components form and evolve. However, the question of whether a particular GC belongs to a certain component (e.g. a bulge, a bar, a disc, or a halo) is not easy to answer. On the contrary, determining the origin of a globular cluster is a rather difficult task, which requires reliable knowledge of the clusters' proper motions, their radial velocities, positions, and metallicity <cit.>. For example, <cit.> recently found that the GCs Terzan 10 and Djorgovski 1 have typical halo orbits, even though their orbits are contained within the bulge volume. Another illustrative example is that <cit.> and <cit.> showed that several GCs, while belonging neither to the disc nor to the halo and appearing to belong to the bulge, nevertheless do not follow the bar.
This means that these GCs move either faster or slower than the bar, but not synchronously with it. The ambiguity in the classification of GCs stems from the fact that several physical components of the Galaxy mentioned above overlap in physical space and, at the same time, the observations of the inner part of the Galaxy are affected by heavy extinction and crowding <cit.>. An additional problem, which especially concerns the dynamics of the GCs of the inner Galaxy and the classifications based on it, is that the parameters of the bar itself are also not set in stone. Bar pattern speed estimates range from about 30 km/s/kpc to 40 km/s/kpc <cit.>, while some authors provide an even higher value of about 50 km/s/kpc <cit.>. Naturally, the centrifugal force that influences the motion of GCs depends on the bar pattern speed. It is also important that changes in the pattern speed force the resonances to move, and, thus, orbits will differ depending on how close the GC is to a particular resonance. Therefore, the classifications of the orbits should differ depending on the bar pattern speed, and one should consider a set of bar pattern speed values, as was done, for example, in <cit.> and <cit.>. In <cit.>, the authors calculated the probability that an orbit belongs to one or another component separately for each of the pattern speeds considered there. The uncertainty in the existence of the classical bulge mentioned above can also implicitly affect the results of GCs' classification. One of the approaches to modelling GCs' orbits is to transform the spherical central component into a bar. This means that the central spherical bulge in the originally axisymmetric model of the Milky Way is replaced by an elongated bar with exactly the same mass as the bulge. This approach has been used in recent studies by <cit.> and many previous ones. At the same time, various N-body studies showed that the inclusion of even a small classical bulge component can drastically change the overall evolution of the model, leading to the formation of the so-called barlenses <cit.> or preventing the bar buckling <cit.> altogether. In the present work, we want to address the mentioned issues in the context of the capture of GCs by the bar. We want to explore how the choice of the bar parameters (pattern speed, mass, size) affects the state of the GCs relative to the bar, i.e. whether there is any systematics in the frequency ratios f_R/f_x of GC orbits (see the definition in Section <ref>) depending on the bar parameters. To this aim, we study the motion of GCs in three different instances of the Milky Way potential. Two of them are based on observational data from <cit.> and <cit.> and one is based on the N-body model from <cit.>, which was specifically prepared to represent the mass distribution of the Milky Way and has a spatial resolution of about 30 pc. This N-body model also contains a classical bulge and a naturally formed bar, thus providing an opportunity to study the GC kinematics in the case of a self-consistent model, obtained without transforming one component into another. The article is structured as follows. In Section <ref>, we describe our sample of GCs. In Section <ref>, we provide details on the potentials considered in the present work and how the classification and integration of the orbits backwards in time were carried out. In Section <ref>, we analyse the systematics in the classification of orbits introduced by changing bar parameters using one GC, NGC 6266, as an example.
Section <ref> presents the results of the classification for all GCs in the sample. We compare our results with those of previous works in Section <ref>. In Section <ref>, we give our conclusions.§ DATA To study the kinematics of GCs in different barred potentials, we first selected 30 CGs, which were previously identified in <cit.> as those that belong to the bar/bulge. These GCs were selected from a catalogue of 152 GCs from <cit.> based on the following criteria. First, a geometric criterion was applied to retain only those GCs whose apocentric distance r_apo is less than 3.5 kpc <cit.>. This reduces the sample to 39 members. Then, nine GCs were found to belong to the disc based on the angular momentum and eccentricity of the corresponding orbits (see details in ) and, thus, were removed from the sample. Table <ref> and Table <ref> list the chosen GCs, as well as their observational parameters and Cartesian coordinates and velocities, used below to integrate orbits backwards in time for 5 Gyr.Coordinates and velocities are obtained from equatorial coordinates (α_J2000,δ_J2000), line-of-sight velocities from the catalogue of <cit.>, distances from <cit.>, and proper motions from <cit.>. The catalogue of <cit.> is compiled based on the Gaia DR2 data, while the catalogues of <cit.> contain new proper motions and refined distances based on Gaia EDR3 data, Hubble Space Telescope (HST) data, and some literature estimates. The transformation from angular coordinates and velocities is performed using the values obtained by <cit.> from rotation curve fitting, i.e. under the assumption that the distance from the Galaxy centre to the Sun R_=8.3 kpc, the height of the Sun above the disc plane Z_=17 pc <cit.>, the velocity of local standard of rest (LSR) V_=244 km/s. The peculiar velocityof the Sun relative to LSR (u_,v_,w_)=(-11.1,12.2,7.3) km/s is taken from <cit.>. For the bar viewing angle, the value 23 deg was taken from <cit.>, where it was estimated from fitting the boxy/peanut bulge intensity profile for different viewing angles.§ SIMULATIONS §.§ Mass modelsIn the present work, we consider several types of mass models of the Milky Way.The first one was obtained by <cit.> (hereinafter, BB2016) via fitting the rotation curve to the kinematic data of a set of different objects with distances up 200 taken from <cit.>. The mass model consists of three distinct components, namely the bulge <cit.>, the disc <cit.>, and the halo <cit.>:Φ_bulge(r)= - M_b/(r^2 + b_b^2)^1/2, Φ_disc (R, Z)= - M_d/[R^2 + (a_d + √(Z^2 + b_d^2))^2]^1/2, Φ_halo(r) = - M_h/rln(1 + r/a_h).Description of the parameters and their respective values are given in Table <ref>. The second model is taken from <cit.> (hereinafter, MC2017) and consists of six different components, namely thin and thick stellar discs, dark matter halo, and H I and molecular discs. In this model, the dark halo is also described by a Navarro-Frank-White profile, given in eq. (<ref>). The stellar discs are exponential both in the plane and the vertical direction:ρ_(R, z) =Σ_0/2 z_exp(-|z|/z_ - R/R_),while gaseous discs are exponential in the plane and isothermal in the vertical direction and have a hole in the centre with the scale of R_m:ρ_(R, z) = Σ_0/4z_exp(-R_m/R - R/R_)sech^2[z/(2z_)]The central component (bulge) is implemented via the following parametric model:ρ_b = ρ_0,b/(1+r'/r_0)^αexp[-(r/r_cut)^2],where r'=√(R^2 + (z/q_bulge)^2). 
To avoid repeatance, we refer the reader to <cit.> for a description of the parameters and their values.In both models, we introduce a bar component by decreasing the mass of the central component (bulge) by a certain value and then assigning the bar mass to this value. Essentially, this means that, for all models considered below (except the N-body one), the total mass of a spherical bulge and the bar is fixed:M_bar + M_b = M_b,0,where M_b,0 is the initial bulge mass of the axisymmetric model and M_b is the residue mass of the bulge. Hereinafter, we refer toM_b,0 simply as M_b, since we do not consider the residue bulge mass as an independent parameter at any part of this work. Below, we consider a set of bar mass values, or, more precisely, a number of bar-to-bulge mass ratios M_bar/M_b (see Table <ref>). <cit.> assigned all the bulge mass to the bar component in their models. Here, we introduce the ratio of the bulge and bar masses as a free parameter to investigate how uncertainty in the classical bulge parameters possibly existing in our Galaxy can affect the results of orbital classification. For the bar density profile, we take a Ferrers profile:ρ = 105M_bar/32π p q a^3[ 1-(r̃/a_)^2]^2,where M_ is the bar mass, a_ is the bar major axis, r̃=√(x^2 + (y/p)^2 + (z/q)^2) is the elliptical radius and p and q characterise the flattening of the bar in disc plane and along the vertical axis, respectively. The bar parameters and their description are given in Table <ref>. The third type of potential is taken from a recent work by <cit.> (hereinafter, TG2021), where a surrogate Milky Way N-body model was presented. Time snapshots of the model were made publicly available by the authors. At start of the simulations, the model consisted of two spherical components, NWF-like halo <cit.> and a stellar bulge <cit.>, and an exponential disc isothermal in the vertical direction (similar to eq. (<ref>), but without the hole). The evolution of the model was followed up to about 4.3 Gyr. There is no need to insert the bar component separately or transform the bulge as the bar in this model is formed naturally (see Fig. <ref>). For simplicity, we consider here only the last snapshot of  <cit.>'s simulations, neglecting the time evolution of the bar properties. We leave this for future studies.For the selected time moment, the N-body bar has the size a_bar about 4.5 kpc and the pattern speed Ω_p≈39 km/s/kpc. The mass of the bar was not estimated directly in <cit.>, but the authors provided an overall estimate M_bar+M_disc+M_bulge=3.5×10^10 M_ of stellar mass inside the area of R<5 kpc (bar region), where M_disc is the mass associated with the inner area of the disc and M_bulge is the mass of the classic bulge originally included in the model. The number of particles in the N-body model is about 7· 10^7. To avoid very time-consuming calculations of gravitational force at each time-step when integrating the orbits, we prepared a multipole expansion of the potential using the convenientsubroutine fromsoftware package <cit.>:Φ(r, θ, φ)=Σ_l,mΦ_l,m(r)Y_l^m(θ, φ),where Y_l^m are spherical functions of degree l and order m. We truncate the series at l_max=6 and m_max=6 and impose a triaxial type of symmetry (only even harmonics are calculated). Isolines of the potential approximations are shown in the right panel of Fig. <ref>. Note that the potential isolines are rounder than the density isolines, as they should be (, Chapter 2), but still showing the flattening in the bar area. 
In the very central part, the classic bulge overweighs other components and, thus, the isolines are circular here.For potentials of BB2016 and MC2017, we consider a range ofbar pattern speeds (Ω_p) and sizes (a_bar, the half length of the bar major axis), from 30 km/s/kpc to 60 km/s/kpc and from 5.0 kpc to 2.5 kpc, respectively. Fig. <ref> shows how the mentioned limits correspond to the main resonances in the potentials of BB2016 and MC2017. The dynamics of the bars is usually characterised by the rotation rate parameter ℛ=R_CR/a_bar <cit.>. As can be seen from the figure, we consider both slow (ℛ≫ 1) and fast bars (ℛ≲ 1) here. For other galaxies, ℛ spans the range from almost 0 to about 4 <cit.>. For our bars, ℛ is from about 0.5 for Ω_p=60 km/s/kpc and a_bar=5 kpc to about 10 for Ω_p=10 km/s/kpc and a_bar=2.5 kpc. For generality, we consider bars with major axes and pattern speeds having the values from the suggested ranges indisciminately, although longer bars tend to have lower pattern speeds (see figure 15 in ). §.§ Orbit integration and classificationFor each of the described potentials we add the rotation in accordance with the chosen value of the pattern speed Ω_p and integrate orbits of GCs backwards in time for a time period of 5 Gyr. In the present work, we are interested in the orbital families which an orbit can belong to potentially. This “property” should not depend on the type of the integration (forward or backaward) for regular orbits[For chaotic orbits, forward and backward integration can produce different results in terms of frequencies, but we are mostly concerned with the regular orbits captured or oscillating around the bar in the present work.], since orbital frequencies are “integral” properties of the orbit. Here, we consider backward integration, because it is in line with our previous studies, e. g. <cit.>. Integration is carried out using the  software package.performs integration via 8th order Runge–Kutta scheme with an adaptive time-step. We choose an output time step Δ t=1 Myr. The latter is unrelated to the actual time-step of integration, which is determined internally in ODE solver based on the imposed value of the relative accuracy, 10^-8 in our case. For each orbit, we traced the evolution of the Jacobi energy as an indicator of the accuracy of our calculations. A typical example is shown in Fig. <ref>. In short, the energy is well conserved during integration (up to six decimal places). To classify orbits, we apply themethods of spectral dynamics pioneered by <cit.>. In this approach, one calculates the coordinate spectra of the orbit, i.e. Fourier transforms of the time series x, y, z, and R taken in a bar rotating frame, and then finds dominant frequencies f_x, f_y, f_z, and f_R corresponding to the highest spectral lines.The spectra are calculated as followsP_j =1/N_t|∑_k=0^N_t-1 x_k exp (-2π i f_j t_k) |,wheref_j = j/Δ T, Δ T = 5 Gyr, t_k = k Δ t, Δ t =1 Myr, 0≤ j ≤ (N_t-1)/2, and N_t is the length of the time series. To improve the resolution of the peaks, we use a subroutine similar to zero-padding (see details in , where a similar analysis was applied to the study the orbital families of B/PS bulges). For regular orbits, the spectra consist of discrete lines, these lines can be distinguished, and the corresponding frequencies can be studied to understand which orbital group or family the orbit belongs to <cit.>. 
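To make the classification step above concrete, the following minimal sketch (not the actual pipeline used in this work) recovers the dominant frequencies and the ratio f_R/f_x from the coordinate time series of an orbit sampled in the bar frame, mirroring the spectrum P_j defined above up to normalization. The function names, the zero-padding factor standing in for the peak-refinement subroutine, the removal of the mean (to suppress the trivial zero-frequency line of R), and the synthetic 2:1 test orbit are our own illustrative assumptions.

import numpy as np

def dominant_frequency(series, dt=1.0, pad_factor=8):
    # Frequency (in 1/Myr) of the highest spectral line of a series sampled
    # every `dt` Myr; zero-padding refines the location of the peak.
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # suppress the f = 0 line
    n = len(x)
    spec = np.abs(np.fft.rfft(x, n=pad_factor * n)) / n
    freqs = np.fft.rfftfreq(pad_factor * n, d=dt)
    return freqs[np.argmax(spec)]

def frequency_ratio(x_bar_frame, R, dt=1.0):
    # Ratio f_R / f_x used to flag bar-following orbits (2.0 +/- 0.1).
    return dominant_frequency(R, dt) / dominant_frequency(x_bar_frame, dt)

# Synthetic 2:1 orbit sampled every 1 Myr for 5 Gyr:
t = np.arange(5000.0)                            # Myr
x = 3.0 * np.cos(2.0 * np.pi * 0.004 * t)        # f_x = 0.004 Myr^-1
R = 2.0 + 0.5 * np.cos(2.0 * np.pi * 0.008 * t)  # f_R = 2 f_x
print(frequency_ratio(x, R))                     # ~2.0, i.e. a bar-following orbit

With such a routine, an orbit would be flagged as bar-supporting whenever the returned ratio falls within 2.0 ± 0.1, in line with the criterion adopted above.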
This approach made it possible to obtain many fruitful results on the orbital composition of the bar and the importance of various resonances for the structure of the bar in a number of studies <cit.>. <cit.> also calculated the orbital frequencies to determine whether a particular GC follows the bar or not. Here, we use the same approach and assume that if f_R/f_x=2.0 ± 0.1, then the GC with such a ratio of frequencies is the bar supporting one, i.e. follows the bar.A typical example of the orbit of NGC 6266, along with its time series of coordinates and their spectra, is presented in Fig. <ref>. Hereinafter, all orbits presented in the figures are shown in the bar rotating frame if not specified otherwise. Bar parameters are Ω_p=45 km/s/kpc, M_bar/M_b=0.95, a_bar=5 kpc, p=2.0, and q=3.0 in this case. Integration is carried out in the potential of BB2016. We note that, although the orbit has a nice-looking regular profile, it does not actually follow the bar, since f_R/f_x≈3.5. § ORBIT TYPE DEPENDING ON THE BAR PARAMETERS First of all, we would like to explore how the choice of bar parameters affects the type of orbit.We begin this Section by considering only one GC, namely NGC 6626. There is no particular reason for this choice, except that this example is illustrative. By a detailed analysis of one orbit, we outline the systematics in the classification of orbits that arise due to changes in the bar parameters.Fig. <ref> shows how frequencies f_x and f_R and their ratio f_R/f_x change with the bar pattern speed, mass, and size for the potential of BB2016. To study dependencies, we first consider the one-dimensional case, where one parameter changes, while the rest are fixed. Unlessotherwise specified, all bar parameters are fixed at the following values: Ω_p=45 km/s/kpc M_bar/M_b=0.95, a_bar=5.0 kpc, p=2.0, and q=3.0. We present orbital profiles in Fig. <ref> to illustrate how they change when the corresponding parameter is changed. As can be seen from the individual subpanels, there are clear systematic shifts in frequencies and, accordingly, frequency ratios: * With an increase of the pattern speed, frequency of radial oscillations decreases. This continues up to the point at about 24 km/s/kpc, then the frequency increasesabruptly, after which it remains constant.* Frequency f_x decreases monotonically with an increase in the pattern speed.* The frequency ratio f_R/f_x shows an interesting behaviour as a result of changes in individual frequencies. Initially, f_R/f_x=2 (a typical ratio for orbits following the bar), but at Ω_p≈24 km/s/kpc and after that, it deviates more and more from this value.The described changes of frequencies are reflected in the orbit profile. In the case of f_R/f_x≈2, one observes a very regular orbit captured by the bar. For f_R/f_x≳2, the orbit becomes more “windy” and now oscillates around the bar. For the bar-to-bulge mass ratio and size (second and third rows of Fig. <ref>), one can see that changing these parameters affects the orbit profile and the corresponding frequency ratios, but their influence is not so strong compared to the pattern speed. An increase in the bar mass and size leads to a slight decrease in f_x, which leads to small changes in the frequency ratios, from f_R/f_x≈2.2-2.4 at the left boundary of the interval to about f_R/f_x≈ 3.0 on the right. Comparing the results for BB2016 (Fig. <ref>) and MC2017 (Fig. <ref>), one can see that the trends in changes of frequencies between them are similar, i.e. 
there is a sudden change in the frequency ratio at a particular value of the bar pattern speed. For the MC2017 potential, this change occurs at a somewhat smaller value of Ω≈20 km/s/kpc. In the case of MC2017, changing the bar-to-bulge mass ratio has almost no effect on the frequency ratio. This can be explained by the fact that the bulge in the model of MC2017 already has a certain degree of flatness (along the vertical direction) and its transformation into an elongated component does not significantly affect the potential. In Fig. <ref> and Fig. <ref>, we fixed all bar parameters, except for one, which then varied. However, doing so, we did not take into account the possibility that, with a different combination of bar parameters, the observed dependencies may well change or simply disappear. To explore such a behaviour in more detail, we conduct a following suit of simulations. We run Monte-Carlo simulations, choosing a set of bar parameters from the intervals specified in Table <ref> uniformly, then we calculate the orbit and the corresponding ratio of its frequencies. We performed 10^5 of such iterations. Fig. <ref> show the results in a form of a matrix plot for all parameters, with the average value of frequency ratio for a given pixel highlighted in different colours. Each subplot presents a 2D histogram obtained by averaging the values within 100 bins from minimum to maximum values for each axis. The subplots show qualitatively similar results compared to those presented in the 1D plots (Fig. <ref> and Fig. <ref>). Again, the pattern speed is the most importantparameter, i. e. in each subpanel in the first column there is a gradual progression of colours. For other parameters, there is no such correlations, except for a weak correlation of frequency rations with M_bar. Thus, changing all other parameters does not strongly affect the frequency ratio. This means that the pattern speed may be very well the most important factor when one is trying to asses orbit families and check whether a particular orbit follows a bar or not. To understand why frequencies abruptly change with the pattern speed, we calculated the Poincare surface of sections (SoSs) for the range of pattern speeds.For 3D orbits, SoSs are four dimensional objects, i.e. (x,z,V_x, V_z) taken at y=0 and V_y>0 (or any other similar combinations).Here, we plot SoS projections on (x, V_x) plane taken at y=0 and V_y>0. A similar approach was used in <cit.>, where 3D N-body orbits were studied. Fig. <ref> demonstrate how the SoSs change either with Ω_p or with the corresponding frequency ratio f_R/f_x. Note that the SoSs presented are not exactly typical.Usually, the Jacobi energy is a fixed variable and one investigates various orbits for a chosen energy value. In Fig. <ref>, the pattern speed (and, thus, the corresponding Jacobi energy) changes from orbit to orbit, not the initial velocity or position. Nevertheless, as can be seen from the figures, the family towhich the orbit belongs gradually changes with the pattern speed. The orbit starts on the island close to x1 family (they reside in rightmost corner of the plot), then gradually expands to the left side of the diagram. At some point (after Ω_p≳30), new islands appear. The orbit clearly ceases to be a member of the x1 family, as indicated by its increasing frequency ratio f_R/f_x. 
As for the question of which family an orbit ends up, this question is not easy to answer, since a bar can be populated by orbits with multiplicity greater than 2:1, see a recent work by <cit.>. From frequency ratios, it follows that the orbit considered here gradually changes its multiplicity with an increase of the pattern speed, becoming a 3:1 orbit, then a 4:1 orbit, and so on. § FREQUENCY RATIOS FOR THE SAMPLE OF GLOBULAR CLUSTERSHere we consider in detail how the frequencies change with the pattern speed for all GCs in our sample. Fig. <ref> shows the frequencies f_x and f_R for all three potentials, and Table <ref> lists the exact values. For clarity, we investigate only three values of the pattern speed, Ω_p=(30,45,60) km/s/kpc, while the rest of the bar parameters are fixed at M_bar/M_b=0.95, a_bar=5.0 kpc, p=2.0, q=3.0. For each orbit, we use Monte-Carlo simulations (10^3 iterations) to estimate frequency errors due to uncertainty in GCs' positions and velocities. It can be seen from the figure that the orbital frequencies of almost all GCs behave in the same way as it was shown earlier for NGC 6266. As the pattern speed increases, the frequencies ratio f_R/f_x begins to deviate more and more from the resonance line 2:1. In general, Fig. <ref> demonstrates that, in analytical potentials (both in BB2016 and MC2017), most GCs are do not follow the bar for all the considered values of the pattern speed. We should also note that one cannot overstep the limits of the pattern speed considered here, since they are motivated by observations. The rightmost panel of Fig. <ref> shows orbital frequencies obtained for the same GCs in the N-body potential. For the N-body model, we do not consider different pattern speeds, since in this case its value 39 km/s/kpc follows from direct and precise measurements of the bar properties in the model <cit.>. As can be seen, there is much more orbits with the resonance frequency ratio of 2:1 in such a potential. We have compiled a list of them in Table <ref>. Based on the orbital profiles, we distinguish the orbits into two types, the well-known x1 family consisting of orbits elongated along the bar and supporting its structure and x2 orbits which are elongated in the direction perpendicular to the bar major axis and observed in the most central regions (, see also reviews by  and a more recent one by ). In Fig. <ref> and Table <ref>, f_R/f_x values are compared for all considered potentials. The orbits themselves are presented in Fig. <ref>, Fig. <ref>, and Fig. <ref>. For a better comparison, we fixed the bar pattern speed in analytical potentials at the value of the pattern speed in the N-body simulation, i. e. Ω_p=39 km/s/kpc for all cases. The rest bar parameters are the same as in the previous section: M_bar/M_b=0.95, a_bar=5.0 kpc, p=2.0, q=3.0. We should note that one can try to change the bar-to-bulge mass ratio somewhat to make the BB2016 and MC2017 pontentials resemble the N-body potential more, but, in practice, it is hard to estimate the ratio in the N-body model itself. For example, if one considers the ratio of masses of the classical bulge and the bar plus the said bulgein the N-body model, it is about 0.6. However, at the same time, the total mass of the bar plus the bulge is about the half of the original disc mass (see table 1 in ), while, for the BB2016 potential, the bulge plus bar is about 20% of the disc. 
The root of the problem is that, for the N-body model, the bar is formed from the disc material, and the disc itself does not go all the way towards the centre (see , figure 1 there). This is clearly not the case for the potentials of BB2016 and MC2017 obtained from the velocity curve fitting, where the disc goes all the way towards the centre and, thus, has high contribution in terms of mass there. One can possibly alleviate this issue by reducing the disc mass in the center or by initially considering the disc with the hole in the centre. We leave the solution of this problem for future studies. Here, we stick to the appoach by <cit.>, where the whole or almost the whole bulge is thought to be a bar.As can be seen, f_R/f_x in the analytical potentials are shifted towards larger values compared to those in the N-body potential, for both BB2016 and MC2017. Although, we should note that the difference between BB2016 and N-body is a bit smaller on average than between MC2017 and N-body. In addition to the orbits following the bar, we want to mention some of the interesting ones withfrequency ratios above or below 2:1. Liller 1 in all three potentials has a frequency ratio of about 3:2, the orbit itself looks regular, but has circle-like profile, and clearly does not follow the bar. NGC 6380 in BB2016 has f_R/f_x close to 3, which is reflected in its overall trefoil-like shape. We should also mention, that, while most of orbits do not follow the bar in BB2017 and MC2017, some of their profiles look regular and resemble those previously shown for NGC 6266 in Fig. <ref>. These are NGC 6642, NGC 6558, Terzan 1, Terzan 5 for BB2016 and NGC 6380, NCC 6440, NGC 6522, NGC 6642, Terzan 1, Terzan 4, and Terzan 5 for MC 2017. It is interesting to note that most of these orbits have a rather small error in their frequency ratios (Δ f_R/f_x = 0.1-0.2)§ DISCUSSION The change in the ratio of frequencies with the pattern speed has been indirectly observed in some of the previous works. In particular,  <cit.> found that the percentage of orbits following a bar decreases with bar rotation, except for NGC 6304, NGC 6342, and NGC 6637, which are not considered in the present work. If we assume the percentage of orbits following the bar should increase as the frequency ratio gets closer to 2:1, which is reasonable, then results of <cit.> support the idea that decreasing the pattern speed causes the frequency ratios f_R/f_x get closer to the bar frequency ratio 2:1. A decrease inthe frequency f_x with the patter speed, which is one of the reasons why the frequency ratio deviates from 2:1, was also observed by <cit.> for the self-consistent N-body model. Strictly speaking, what was observed is an increase of f_x with a decrease in Ω_p (i.e. bar slow downing), which is essentially coincides as our result. We should also note that attributing the effect to a change in the pattern speed only in the case of <cit.> may be somewhat biased, since other properties of the bar (mass and size), were also changing there in accordance with the self-consistent evolution of the model. As for the particular GCs, <cit.> found that for Liller 1, NGC 6304, NGC 6522, NGC 6528, NGC 6540, NGC 6553, Terzan 5, and Terzan 9, more then 20 percent of orbits follow the bar. Comparing our results to <cit.>, we find that, for all potentials, Liller 1 and Terzan 9 do not follow the bar, while Terzan 5 follow the bar in the N-body model. 
NGC 6522 follows the bar in the potentials of BB2016 and MC2017, but moves perpendicular to it in the N-body potential (x2 family). For NGC 6528, the frequency ratio is close to 2:1, but the orbits themselves have an irregular profile; therefore, it is hard to say that this orbit can support the bar.§ CONCLUSIONS * We calculated the evolution of 30 globular clusters located in the inner area of the Galaxy (R≲5) backwards in time for 5 Gyr in a non-axisymmetric galaxy potential using Gaia DR2 data for line-of-sight velocities <cit.> and the newest Gaia DR3 data for proper motions and distances <cit.>. Throughout this work, we have compared the results for three potentials, two of which are analytical, obtained by fitting the rotation curve from <cit.> and <cit.>, and one is taken directly from N-body simulations recently prepared by <cit.> (“surrogate Milky Way”). * For all orbits, we calculated their coordinate spectra and determined the corresponding main frequencies, f_x and f_R, for a range of bar parameters (pattern speed, mass, size, shape) in the analytical potentials and for a fixed pattern speed in the N-body model. * We distinguish orbits by their frequency ratio f_R/f_x to test whether a particular orbit follows the bar. Most orbits in both considered analytical potentials do not support the bar in the "usual" sense (either f_R/f_x ≳ 2.1 or f_R/f_x ≲ 1.9) for physically reasonable values of the pattern speed, while, for the case of the N-body potential, 10 GCs follow the bar (f_R/f_x≈2.0). * On the example of one orbit (NGC 6266), we verified how the frequency ratio changes depending on the pattern speed, the mass, and the size of the bar, tracking the changes in a wide range of parameters using a small relative step. We found that the frequency ratio does not depend much on the mass ratio of the bar and the spherical bulge (“classic” one), the bar size, or its shape parameters. Most of the changes occur due to the changes in the pattern speed. For Ω_p≲20 km/s/kpc, the orbit perfectly follows the bar (f_R/f_x≈2.0) for all such values of the pattern speed and has a typical “bar”-like profile. Then, at a certain value of the pattern speed depending on the potential, the frequency ratio changes abruptly, becoming either greater or smaller than f_R/f_x≈2.0. The orbit then begins to oscillate around the bar and no longer supports it. Overall, our results show that comparing orbital classifications between different potentials is indeed valuable, as the results turn out to be vastly different between them. An interesting question, to which we could not find an answer in the present work, is why the N-body model demonstrates a lot more bar-following orbits compared to the analytical potentials. This can possibly indicate that the self-consistency of the potential plays an important role in orbital studies of GCs. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author.§ ACKNOWLEDGEMENTS We acknowledge the use of the <cit.> package, without which the present work would not be possible. We are grateful to the anonymous reviewer for their valuable comments that contributed to an improvement of the scientific quality of the manuscript and a clearer presentation of our results.
http://arxiv.org/abs/2310.18172v2
{ "authors": [ "Anton A. Smirnov", "Anisa T. Bajkova", "Vadim V. Bobylev" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20231027143144", "title": "Globular clusters and bar: captured or not captured?" }
The Non-zonal Rossby-Haurwitz Solutions of the 2D Euler Equations on a Rotating Ellipsoid Chenghao, [email protected], Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, 10012, New York, USA In this article, we investigate the incompressible 2D Euler equations on a rotating biaxial ellipsoid, which model the dynamics of the atmosphere of a Jovian planet. We study the non-zonal Rossby-Haurwitz solutions of the Euler equations on an ellipsoid, while previous works only considered the case of a sphere. Our main results include: the existence and uniqueness of the stationary Rossby-Haurwitz solutions; the construction of the traveling-wave solutions; and the demonstration of the Lyapunov instability of both the stationary and the traveling-wave solutions. Acknowledgements The author is grateful to Professor Pierre Germain for introducing the topic and providing illuminating suggestions. The author thanks Professor Vlad Vicol for his help with article organization. The author is also grateful to Professor Katherine Zhiyuan Zhang for her consulting assistance.§ INTRODUCTION The incompressible Euler equations on a rotating 2D manifold M embedded in ℝ^3 have been widely studied. In this setting, the Euler equations are taken as a model of the behavior of the atmosphere of a rotating planet.[It is an interesting topic for further investigation to derive the 2D Euler equations from a physics perspective on a rotating ellipsoid that models a Jovian planet, or on a more general manifold. The case that has been treated in the literature is when M is a sphere (see Constantin and Germain <cit.>, Constantin and Johnson <cit.>, Gill <cit.>). Even so, scholars have taken the 2D Euler equations as the governing equations of planetary atmospheric flows in the case when M is an ellipsoid (see Tauchi and Yoneda <cit.>) or a rotationally symmetric manifold (see Taylor <cit.>).] Of great interest is the analysis of the solutions of the Euler equations, together with their stability properties. Taking M as a perfect sphere seems to be the most natural approach (see Cheng and Mahalov <cit.>, Constantin and Germain <cit.>). In this case, two classes of solutions are well-studied in the literature, namely zonal solutions and non-zonal Rossby-Haurwitz solutions. Zonal solutions represent the arrangement of the atmospheric band structure of outer planets, which consists of alternating westward and eastward winds. The complex atmospheric dynamics of a Jovian planet can be viewed as a background zonal solution presenting fluctuations: some stable, while others are unstable and may develop wildly over time. Studying the stability properties of these zonal solutions may offer physical insight into atmospheric science. Zonal solutions are automatically stationary and their stability properties are developed and stated for instance in Constantin and Germain <cit.>, or Marchioro and Pulvirenti <cit.>.
Moreover, due to the rich symmetry of the sphere, some non-zonal solutions can be obtained from zonal solutions by utilizing the invariance of the stationary Euler equations through the action of 𝕆(3).On a unit sphere, classical non-zonal Rossby-Haurwitz solutions are complete nonlinear and non-trivial solutions, which were first found by Craig <cit.>, of the Euler equations, obtained by Rossby <cit.> and Haurwitz <cit.>.Non-zonal Rossby-Haurwitz solutions can be either stationary or traveling, with the latter being derived from the former. They are also the only known non-trivial solutions of the Euler equations with explicit expressions. It is widely recognized that the corresponding Rossby-Haurwitz waves contribute significantly to the atmosphere dynamics.For instance, some non-zonal Rossby-Haurwitz waves of degree 2 can be predominant in the atmosphere on a Jovian planet (see Dowling <cit.>).Their stability properties are crucial.For example, one of the main reasons for the difficulty in making accurate long-term weather forecasts is the instability of these waves (see Bénard <cit.>). In particular, Constantin and Germain <cit.> studied the non-zonal Rossby-Haurwitz solutions and proved their Lyapunov instability.Furthermore, owning to the abundant symmetries of the sphere, Cao, Wang, and Zuo <cit.> proved that all the Rossby-Haurwitz solutions of degree 2 are orbitally stable.However, modeling a real-world planet, even one with small eccentricity like the Earth, as a perfect sphere is inaccurate. An ellipsoidal model can provide a much more accurate representation of the oblate-spherical geometry. Along this research direction, Constantin and Johnson <cit.> <cit.> derived the leading-order 3D compressible Navier–Stokes equations of the atmospheric flows on a rotating ellipsoid, via a thin-shell approximation based on the Earth's atmospheric and geographical data.These works may inspire future research on deriving the 2D Euler equations on a rotating ellipsoid for modeling an outer planet (see relation to footnote note1).Furthermore, a spherical model may diverge significantly from the actual shape of outer planets such as Jupiter or Saturn. A fast rotating Jovian planet usually has a relatively large flattening rate. I.e., Jupiter deviates much from a perfect sphere by flattening at the poles and bulging at the equator (see Berardo and Wit <cit.>).Saturn has a large flattening rate about 0.1 (see Elkins-Tanton <cit.>), and Haziot <cit.> revealed that a spherical model turned to be unsuitable for flows on Saturn. Therefore, it is necessary to use a biaxial ellipsoid model that provides a better approximation of the shape of an outer planet.Sometimes, an ellipsoidal model can make a crucial difference.For instance, Tauchi and Yoneda <cit.> studied the mechanisms behind stable multiple zonal jet flows, such as the famous Great Red Spot on Jupiter, by analyzing the 2D Euler equations on an ellipsoid from a differential geometry perspective. However, their arguments do not hold when applied to a sphere.Additionally, Taylor <cit.> studied the 2D Euler equations on a general rotationally symmetric manifold to gain a better understanding of the planetary atmosphere, with a particular focus on the stability analysis of the zonal solutions. Inspired by these works, this article investigates the case when M is an ellipsoid and considers the 2D Euler equations (<ref>) as the governing equations. 
As expected, the equation (<ref>) coincides with the 2D Euler equations in Taylor <cit.>.On the surface of a rotating biaxial ellipsoid (see Figure <ref>), with the major axis equal to 1 and the minor axis equal to b < 1, the incompressible Euler equations can be expressed in terms of the stream function ψ as( ∂_t + 1/cosθ√(sin^2θ+b^2cos^2θ)[-∂_θψ∂_φ + ∂_φψ∂_θ] ) (Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ)) =0. ℰ_ωHere, the polar coordinate (φ, θ) ∈[-π,π)×[-π/2, π/2] is used to parameterize the surface 𝕊^2 as (x,y,z) = (cosφcosθ, sinφcosθ, bsinθ).In equation (<ref>), ω is the rotating speed of the ellipsoid and Δ is the Laplace-Beltrami operator on 𝕊^2.Consequently, the stationary Euler equations become[-∂_θψ∂_φ + ∂_φψ∂_θ](Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0.Notice that both local and global well-posedness on H^s (s≥ 2) can be guaranteed (see Taylor <cit.>). On an ellipsoid, the stream function ψ of a zonal solution only depends on the latitude angle θ, namely, ψ = ψ (θ). Notice that any zonal solution is stationary and solves (<ref>). Taking M to be a rotationally symmetric surface, Talyor <cit.> proved the stability results for zonal solutions, including both linear stability criteria (Rayleigh’s and Fjortoft’s) and nonlinear stability criterion (Arnold's). As a special case, these properties can be inherited by an ellipsoid.Nevertheless, non-zonal Rossby-Haurwitz solutions of the 2D Euler equations on a rotating ellipsoid were not much studied in the literature. Thus, we establish theories and analyze the stability properties of these solutions in this article.In the ellipsoidal setting, we analogously propose stationary non-zonal Rossby-Haurwitz solutions to beψ = g(θ) + Y_l^m(φ, θ),which solves (<ref>). Here, g(θ) is some specific function in C^3,α((-π/2,π/2)) (see Section <ref>), and Y_l^m(φ, θ) belongs to the (l,m)-th eigenspace of -Δ associated with eigenvalue λ_l,m, where l ∈ℕ and m ∈{-l,…,l}\{0} (see Section <ref>).We will show the existence, uniqueness and Lyapunov instability of the proposed solution in Section <ref>.Notice that when b=1, these solutions reduce to classical Rossby-Haurwitz solutions on a sphere. In this case, g(θ) becomes αsinθ for some constant α, and Y^m_l reduces to a linear combination of spherical harmonics of degree l (see Section <ref>).Travelling-wave Rossby-Haurwitz solutions can be constructed through non-zonal stationary Rossby-Haurwitz solutions. Specifically, the solutions traveling with speed c are constructed to beψ_c(φ, θ ,t) = g(θ) + c λ_l,m f(θ) + Y_l^m(φ-ct, θ),where f(θ) is some specific function in C^3,α((-π/2,π/2)), which is formally defined in (<ref>). As expected, these time dependent solutions solve the Euler equations (<ref>). For a given c and Y_l^m, we will demonstrate the existence and uniqueness of the traveling-wave solution ψ_c and its Lyapunov instability in Section <ref>.§.§ Main results The stream functions of stationary Rossby-Haurwitz solutions on a sphere areψ = αsinθ + Y_l(φ, θ), α = 2ω/2-l(l+1),where Y_l belongs to the l^th eigenspace of Δ on a unit sphere (l≥2). However, they fail to be stationary solutions of (<ref>) on an ellipsoid.Naturally, we propose a modification ψ = g(θ) + Y_l^m(φ, θ), θ→±π/2lim g'(θ)= 0,where g(θ) ∈ C^3,α((-π/2,π/2)) and Y_l^m belongs to the (l,m)-th eigenspace of Δ on 𝕊^2 (see Section <ref>) to be the solutions of the stationary Euler equations (<ref>).In Theorem <ref>, we prove the existence of g in (<ref>) that solves the ODE (<ref>) with the boundary conditions (<ref>). 
As a result, ψ = g(θ) + Y_l^m(φ,θ) solves the stationary Euler equations (<ref>)[-∂_θψ∂_φ + ∂_φψ∂_θ](Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0,with the boundary conditions (<ref>). Unlike on a unit sphere, there is no exact formula for Rossby-Haurwitz solutions on an ellipsoid.Instead, the proof of the existence requires solving a second order ODE (<ref>) on (-π/2, π/2) with regular singularities and Neumann conditions on both boundaries.The approach is to rewrite the ODE as a Volterra integral equation (VIE) and utilize the Variation-of-Constants formula to show the existence of a smooth solution on (-π/2, 0] that vanishes at the origin and satisfies the left Neumann condition.Then through an odd extension, a solution can be constructed on the whole interval.Lastly, the smoothness of the solution at the origin can be guaranteed by exploiting a special property of the ODE, i.e. if a solution vanishes at the origin, its second derivative also vanishes at the origin.In Theorem <ref>, we prove the uniqueness (up to a constant) of ψ = g(θ) + Y_l^m(φ,θ) for a given Y_l^m. As a result, the corresponding velocity field U=J gradψ is unique. The uniqueness can be proved by contradiction.Specifically, under the non-uniqueness assumption, the difference (up to a constant) of two solutions g(θ) must fall into the (l,m)-th eigenspace of Δ. However, this is not true by the spectral theory of Δ on 𝕊^2. In Theorem <ref>, we construct the travelling-wave Rossby-Haurwitz solutionsψ_c(φ, θ ,t) = g(θ) + c λ_l,m f(θ) + Y_l^m(φ-ct, θ),that solve the Euler equations (<ref>), where c∈ℝ is the speed and f is some function defined in (<ref>).We seek for non-stationary solutions of (<ref>) travelling with speed c. Plugging ψ_c(φ,θ,t) of the form (<ref>) into the Euler equations (<ref>), we end up with a complicated second order ODE (<ref>) that is similar to the ODE (<ref>) in Theorem <ref>. Thus, the ODE (<ref>) can be solved by a similar approach and the solution is f.In Theorem <ref>, we state the uniqueness of the travelling-wave solution for a given speed c and function Y_l^m. The uniqueness can be proved in a similar way as in Theorem <ref>, which is straightforward and thus omitted in this article. In Theorem <ref>, we prove the Lyapunov instability for both travelling-wave and stationary (c=0) non-zonal Rossby-Haurwitz solutions.Specifically, for a given solution ψ_c, a sequence of travelling-wave solutions ψ^n_c with initial data ψ^n_c(0) →ψ_c(0) are constructed, such thatn→∞lim inf{sup_t>0ψ^n_c(t)-ψ_c(t)^2_L^2(𝕊^2,dσ)}≥ϵ > 0.for some positive ϵ.The constructed solutions ψ^n_c(t) travel with speed c+1/n, which exceed ψ_c(t) a little bit.By formula (<ref>), one can show the initial data ψ^n_c(0) converge to ψ_c(0).However, although the traveling speeds also converge to c, the tiny differences will be amplified by time t, which leads to the Lyapunov instability. This can be shown by expanding both ψ_c^n(t) and ψ_c(t) in terms of the basis of the eigenspace of Δ and choosing a proper time t. 
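Although the existence argument is analytic, the shooting-in-C idea behind the variation-of-constants formula can be mimicked numerically. Since the expanded ODE is linear in g, its solution on (-π/2, 0] with vanishing derivative at the left endpoint depends affinely on the free constant C, so C can be fixed by two integrations so that g(0)=0. The following Python sketch illustrates this construction; the values of b, ω and the eigenvalue λ_{l,m} are placeholders, the integration starts a small distance δ away from the singular endpoint, and the sketch is only meant to visualize the zonal profile, not to replace the VIE analysis.

```python
# Shooting sketch for the zonal profile g(theta) on (-pi/2, 0]: since the
# expanded ODE is linear, the solution with g'(-pi/2) = 0 is affine in the
# free constant C, and C is chosen so that g(0) = 0.  b, omega and the
# eigenvalue lam are placeholder values; delta regularizes the singular endpoint.
import numpy as np
from scipy.integrate import solve_ivp

b, omega, lam = 0.9, 1.0, 6.0
delta = 1e-4
th0, th1 = -np.pi/2 + delta, 0.0
rho = lambda t: np.sqrt(np.sin(t)**2 + b**2*np.cos(t)**2)

def rhs(t, y, forced):
    g, dg = y
    r = rho(t)
    coeff = -np.tan(t)/r**2 + (1 - b**2)*np.sin(t)*np.cos(t)/r**4
    src = 2*omega*np.sin(t)/r if forced else 0.0
    return [dg, r**2*(-lam*g - coeff*dg - src)]   # expanded ODE solved for g''

grid = np.linspace(th0, th1, 2001)
# particular solution (C = 0) and homogeneous solution (C = 1), both with g' = 0 at th0
sol_p = solve_ivp(rhs, (th0, th1), [0.0, 0.0], args=(True,),  t_eval=grid, rtol=1e-10, atol=1e-12)
sol_h = solve_ivp(rhs, (th0, th1), [1.0, 0.0], args=(False,), t_eval=grid, rtol=1e-10, atol=1e-12)

C = -sol_p.y[0, -1]/sol_h.y[0, -1]                # enforce g(0) = 0
g_left = sol_p.y[0] + C*sol_h.y[0]
print("C =", C, "  g(0) =", g_left[-1])
# the odd extension of g_left to (0, pi/2) then gives the zonal part of psi
```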
§.§ Organization of this article In Section <ref>, we start with deriving the Euler equations (<ref>) on a rotating ellipsoid.In Section <ref>, we present the spectral theory of Δ on a sphere and on a biaxial ellipsoid.After a briefly introduction of classical Rossby-Haurwitz solutions on a sphere (see Section <ref>), we innovatively propose stationary Rossby-Haurwitz solutions on a rotating ellipsoid (see Section <ref>).Then, we demonstrate their existence (in Section <ref>) and uniqueness (in Section <ref>).In Section <ref>, we construct the travelling-wave Rossby-Haurwitz solutions and state the uniqueness.Finally, in Section <ref>, we show the Lyapunov instability of the non-zonal Rossby-Haurwitz solutions.§ DERIVATION OF THE INCOMPRESSIBLE EULER EQUATIONS ON A ROTATING ELLIPSOID For a biaxial ellipsoid with major axis 1 and minor axis b (b<1), the standard coordinate chart we use here is (φ, θ) ∈ (-π,π)×(-π/2,π/2) ↦ (cosφcosθ, sinφcosθ, bsinθ) ∈𝕊^2.The singularity introduced by the coordinate chart at the poles can be resolved by taking smoothness into account (see relation (<ref>)). The double-valued ambiguity at φ=±π can be handled by assuming a periodic dependence on variable φ (see relation (<ref>)). For p ∈𝕊^2\{N,S}, the tangent space T_p𝕊^2 has a basis {𝐞_φ, 𝐞_θ} which is{1/cosθ∂_φ, 1/√(sin^2θ + b^2cos^2θ)∂_θ}.Correspondingly, the Riemannian volume element becomesdσ = cosθ√(sin^2θ + b^2cos^2θ) dφ dθ. In this framework, for function ψ: 𝕊^2 ↦ℝ and the velocity field U = u(φ,θ) 𝐞_φ + v(φ,θ) 𝐞_θ, the basic operators have expressions as below:grad ψ = ∂_φψ/cosθ𝐞_φ + ∂_θψ/√(sin^2θ + b^2cos^2θ)𝐞_θ ∇· U = 1/cosθ∂_φ u + 1/cosθ√(sin^2θ + b^2cos^2θ)∂_θ(cosθ v) Δψ = 1/cos^2θ∂_φφψ - tanθ/(sin^2θ + b^2cos^2θ)^2∂_θψ + 1/sin^2θ + b^2cos^2θ∂_θθψ.For a path c(t) on the surface 𝕊^2,d/dtψ∘ c(t) = gradψ· c'(t)gives a definition for the gradient. The formula for divergence comes from duality (see Richtmyer and Burdorf <cit.>), while the Laplace–Beltrami operator Δ can be computed through the Voss-Weyl formula Δ=1/√(|det(g)|)∑_i, j=1^2 ∂/∂ x^i(g^i j√(|det(g)|)∂/∂ x^j), where local coordinates (x^1, x^2)=(φ, θ) and (g^i j) is the inverse of the Riemannian metric g=(g_i j) in this coordinate system (see Grinfeld <cit.>). Define the stream function ψ(φ,θ) such that U = (u, v)^T = J gradψ = (-1/√(sin^2θ + b^2cos^2θ)∂_θψ, 1/cosθ∂_φψ)^T,where J is the counter-clockwise 90^∘ rotation matrix. The vorticity is Ω = Δψ, and the material derivative has the expressionD_t = ∂_t + (U·∇_U) = ∂_t + u∇_𝐞_φ + v∇_𝐞_θ.When applying D_t on the vorticity Δψ, the formula becomesD_tΔψ = ( ∂_t + 1/cosθ√(sin^2θ+b^2cos^2θ)[-∂_θψ∂_φ + ∂_φψ∂_θ] ) Δψ.The Euler equations on the surface of an ellipsoid rotating with angular velocity ω can be written as( ∂_t + 1/cosθ√(sin^2θ+b^2cos^2θ)[-∂_θψ∂_φ + ∂_φψ∂_θ] ) (Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0 ℰ_ω.The equation (<ref>) coincides with the one in Taylor <cit.>, where more details about the Euler equations can be found. As a consequence, the stationary Euler equations reduces to [-∂_θψ∂_φ + ∂_φψ∂_θ](Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0. The coordinate chart in (<ref>) and the definition of the stream function ψ in (<ref>) bring in the artificial singularities at the North and South poles. For any C^1 function ψ, the continuity of velocity field U at N and S implies lim_θ→±π/2∂_φψ(φ,θ) = 0.A periodic condition is also imposed on ψ, namely, ψ(φ,θ) = ψ(φ+2π,θ),which ensures the existence of ∂_φψ globally (see Constantin and Germain <cit.>). 
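As a sanity check on the formulas above, the following sympy sketch compares the Voss-Weyl formula, with the induced metric g = diag(cos^2θ, sin^2θ+b^2cos^2θ), against the closed-form expression for Δψ used in the text, away from the poles; substituting b=1 recovers the sphere operator. The test function is an arbitrary illustrative choice, not part of the analysis.

```python
# Symbolic sketch: the Voss-Weyl formula on the parameterised ellipsoid agrees
# with the closed-form Laplace-Beltrami operator stated in the text (away from
# the poles), and reduces to the sphere Laplacian when b = 1.
import sympy as sp

phi, theta, b = sp.symbols('phi theta b', real=True)
rho2 = sp.sin(theta)**2 + b**2*sp.cos(theta)**2
sqrt_det = sp.cos(theta)*sp.sqrt(rho2)            # sqrt(det g), valid for cos(theta) > 0

def lap_voss_weyl(f):
    # Voss-Weyl formula with g = diag(cos^2(theta), rho^2)
    return (sp.diff(sqrt_det/sp.cos(theta)**2*sp.diff(f, phi), phi)
            + sp.diff(sqrt_det/rho2*sp.diff(f, theta), theta))/sqrt_det

def lap_closed_form(f):
    # closed form used in the text
    return (sp.diff(f, phi, 2)/sp.cos(theta)**2
            - sp.tan(theta)/rho2**2*sp.diff(f, theta)
            + sp.diff(f, theta, 2)/rho2)

f_test = sp.cos(theta)**2*sp.sin(2*phi) + sp.sin(theta)   # arbitrary smooth test function
print(sp.simplify(lap_voss_weyl(f_test) - lap_closed_form(f_test)))   # expected: 0
print(sp.simplify(lap_closed_form(f_test).subs(b, 1)))                # sphere Laplacian of f_test
```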
The Euler equations in terms of the velocity field U becomes{[ D_t U+2 ωsinθ/√(sin^2θ+b^2cos^2θ)JU=-grad p; div U=0 ].where p is the pressure field.It is feasible to recover the velocity U directly from the vorticity Ω, though not locally. The method is stated in Dritschel and Boatto <cit.>. § LAPLACE-BELTRAMI OPERATOR In this part, we will introduce and present the spectral theorem of the Laplace-Beltrami operator Δ on a sphere and on an ellipsoid, which serves as the foundation of the Rossby-Haurwitz solutions. §.§ On a unit sphereA spherical coordinate chart writes(φ, θ) ∈(-π, π) ×(-π/2, π/2) ↦(-cosφcosθ, sinφcosθ, sinθ).The Laplace-Beltrami operator Δ applied on a scalar function ψ(φ,θ) becomes Δψ = 1/cos^2θ∂_φφψ - tanθ∂_θψ + ∂_θθψ.The eigenvalues of -Δ on a unit sphere are {j(j+1), j ∈ℕ}. Each corresponding eigenspace 𝔼_j is of dimension (2j+1), of which the basis consists of the spherical harmonicsX_j^m(φ, θ)=(-1)^m √((2 j+1)(j-m) !/4 π(j+m) !) P_j^m(sinθ) e^i m φ,m=-j, …, j,where P_j^m is the associated Legendre polynomials given byP_j^m(x)=1/2^j j !(1-x^2)^m / 2d^j+m/ d^j+m x(x^2-1)^j,m=-j, …, j.Notice the symmetry X_j^-m = (-1)^m X_j^mand X_j^0 is zonal.Moreover, the spherical harmonics are orthonormal with respect to the inner product ⟨ f_1, f_2 ⟩ = ∬_𝔹^2f_1 f_2 dσ_B,where 𝔹^2 stands for the unit sphere and the Riemannian volume element on 𝔹^2 isdσ_B = cosθ dφ dθ.More discussions about spherical harmonics can be found in Lea <cit.>, Tung <cit.>, Constantin and Germain <cit.>.§.§ On a biaxial ellipsoid The Laplace-Beltrami operator on the surface of an ellipsoid has been widely studied (see Pankratova <cit.>, Eswarathasan and Kolokolnikov <cit.>).In our setting, when applied on ψ, it becomesΔψ = 1/cos^2θ∂_φφψ - tanθ/(sin^2θ + b^2cos^2θ)^2∂_θψ + 1/sin^2θ + b^2cos^2θ∂_θθψ.Here, we present the spectral theory for Δ on a biaxial ellipsoid close to a unit sphere provided by Eswarathasan and Kolokolnikov <cit.>.Let L ∈ℕ and β∈ℝ\{0}. Consider the biaxial ellipsoid (major axis = 1; minor axis = b) where b=1+εβ for ε∈ℝ^+ and g_ε the metric from ℝ^3 restricted to the ellipsoid. Then there exists ε_0 such that for all ε<ε_0 and Λ∈spec(-Δ_g) ∩[0, L(L+1)], we haveΛ=l(l+1)+εΛ_1+O(ε^2) for l=0,1,2, … L and m=-l, …, l with Λ_1 being given by the explicit formulaΛ_1 = (-β) 2 l(l+1)/(2 l+3)(2 l-1)(2 l^2-2 m^2+2 l-1).Moreover, each Λ has multiplicity two except for those whose expansion has m=0, which in this case corresponds to multiplicity one.Furthermore, let 𝔼_l,m be the (l,m)-th eigenspace of -Δ associated with eigenvalue λ_l,m. When m0, the basis of 𝔼_l,m is of the form{y_1(θ)e^imφ,y_2(θ)e^-imφ},for some smooth functions y_1 and y_2 that depends on (l,m).The elements of the basis are orthonormal with respect to the inner product ⟨ f_1, f_2 ⟩ = ∬_𝕊^2f_1 f_2 dσ.Though there is no exact formula for y_1 and y_2 like in the case of a sphere, an approximation up to O(ε^2) can be conducted (see Eswarathasan and Kolokolnikov <cit.>).§ NON-ZONAL ROSSBY-HAURWITZ SOLUTIONS§.§ Rossby-Haurwitz solutions on a rotating sphere Extensive research about the classical Rossby-Haurwitz solutions on a rotating sphere has been conducted in the literature (see Craig <cit.>, Haurwitz <cit.>, Constantin and Germain <cit.>, Rossby <cit.>, Verkley <cit.>).Here, we briefly state the primary results.The stream functions of the stationary Rossby-Haurwitz solutions of degree l areψ(φ,θ) = αsinθ + Y_l(φ, θ),where Y_l is in the l-th eigenspace of Δ on a sphere andα = 2ω/2 - l(l+1).Here, we focus on the case when l ≥ 2. 
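Before turning to the ellipsoid, the two facts just recalled can be checked numerically on the sphere: the real part of a spherical harmonic Y_l^m is an eigenfunction of -Δ with eigenvalue l(l+1), and for the classical stationary Rossby-Haurwitz stream function ψ = αsinθ + Y_l^m the absolute vorticity Δψ + 2ωsinθ equals -l(l+1)ψ, which is why the Jacobian bracket in the stationary equation vanishes. The sketch below uses crude second-order finite differences on a latitude-longitude grid away from the poles; the degree, order and ω are placeholder values.

```python
# Numerical sketch (sphere case b = 1): eigenvalue relation for Y_l^m and the
# classical Rossby-Haurwitz identity  Delta psi + 2 omega sin(theta) = -l(l+1) psi.
import numpy as np
from scipy.special import sph_harm

l, m, omega = 3, 2, 1.0
alpha = 2*omega/(2 - l*(l + 1))

phi = np.linspace(-np.pi, np.pi, 400)
lat = np.linspace(-np.pi/2 + 0.2, np.pi/2 - 0.2, 400)   # stay away from the poles
P, T = np.meshgrid(phi, lat, indexing='ij')

Y = sph_harm(m, l, P, np.pi/2 - T).real                  # colatitude = pi/2 - latitude
psi = alpha*np.sin(T) + Y

def sphere_laplacian(f):
    # Delta f = f_pp / cos^2(T) - tan(T) f_t + f_tt  (second-order differences)
    f_p  = np.gradient(f, phi, axis=0)
    f_pp = np.gradient(f_p, phi, axis=0)
    f_t  = np.gradient(f, lat, axis=1)
    f_tt = np.gradient(f_t, lat, axis=1)
    return f_pp/np.cos(T)**2 - np.tan(T)*f_t + f_tt

res_eig = sphere_laplacian(Y) + l*(l + 1)*Y
res_rh  = sphere_laplacian(psi) + 2*omega*np.sin(T) + l*(l + 1)*psi
inner = (slice(5, -5), slice(5, -5))                      # drop one-sided boundary stencils
print("eigenvalue residual     :", np.max(np.abs(res_eig[inner]))/np.max(np.abs(Y)))
print("Rossby-Haurwitz residual:", np.max(np.abs(res_rh[inner]))/np.max(np.abs(psi)))
```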
It can be easily verified that ψ(φ,θ) solves the stationary Euler equations (<ref>) in the case b=1. The existence of the stationary solution is trivial since Craig <cit.> discovered the exact expression utilizing spherical harmonics.The travelling-wave Rossby-Haurwitz solutions with speed c are of the formψ(φ-ct,θ,t) = αsinθ + Y_l(φ-ct, θ),where α is given by α = 2ω - l(l+1)c/2 - l(l+1).The travelling-wave solutions can be obtained from the stationary solutions. The stability properties have been discussed in details by Constantin and Germain <cit.>. In particular, the non-zonal Rossby-Haurwitz solutions are not Lyapunov stable.§.§ Stationary Rossby-Haurwitz solutions on a rotating ellipsoid Due to the inaccuracy of modeling a planet as a perfect sphere, it is natural to generalize Rossby-Haurwitz solutions from a sphere to an ellipsoid and expect the instability property to be inherited.By employing the spectral theory of Laplace-Beltrami operator on an ellipsoid (see Section <ref>), we are able to discover non-zonal solutions of the stationary Euler equations[-∂_θψ∂_φ + ∂_φψ∂_θ](Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0. Unlike the case on a sphere, for any α∈ℝ, ψ=αsinθ+Y_l^m(φ,θ) cannot be a solution for equation (<ref>).Thus, a natural generalization is to find some function g(θ) ∈ C^3, α((-π/2, π/2)) such thatψ=g(θ)+Y_l^m(φ,θ),m ≠ 0solves (<ref>) with the boundary conditions (<ref>) that islim_θ→±π/2∂_φψ(φ,θ) = 0.Here, Y_l^m belongs to 𝔼_l,m associated with eigenvalue λ_l,m.We propose the stationary Rossby-Haurwitz solutions on a rotating ellipsoid to be ψ in (<ref>) solving (<ref>) with the boundary conditions (<ref>) satisfied. The first natural question is about the existence.We will provide the proof in Section <ref>. §.§ Existence of the stationary Rossby-Haurwitz solutions on a rotating ellipsoidIn this section, we will prove there exists stationary non-zonal Rossby-Haurwitz solutions ψ = g(θ) + Y_l^m(φ,θ)of the stationary Euler equations[-∂_θψ∂_φ + ∂_φψ∂_θ](Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0.The boundary conditions for ψ has been discussed in (<ref>), which are lim_θ→±π/2∂_θψ(φ,θ) = 0.Furthermore, since equation (<ref>) takes derivatives for three times, the regularity conditionψ∈ C^3, α((-π,π)×(-π/2, π/2))should also be imposed.Plugging <ref> into <ref>, we end up with a third order ODE for g(θ)-λ_l,m g'(θ) = (Δ g(θ))' + (2ωsinθ/√(sin^2θ+b^2cos^2θ))'.By the formula of Δ in (<ref>), the ODE (<ref>) can be derived from the following ODE -λ_l,m g(θ)s= 1/cosθ√(sin^2θ+b^2cos ^2θ)(cosθ/√(sin^2θ+b^2cos^2θ) g'(θ))' + 2ωsinθ/√(sin^2θ+b^2cos^2θ).The boundary conditions for g arelim_θ→±π/2 g'(θ) = 0,and the regularity condition for g is g ∈ C^3, α((-π/2, π/2)).There exists function g ∈ C^3, α((-π/2,π/2)) that solves the ODE (<ref>) with the boundary conditions (<ref>). As a result, the corresponding ψ = g(θ) + Y_l^m(φ,θ) is a solution of the stationary Euler equations (<ref>). With denotation ρ(θ) =√(sin^2θ+b^2cos^2θ), the ODE (<ref>) can be expanded as-λ_l,m g(θ) = ( -tanθ/ρ^2(θ)+(1-b^2)sinθcosθ/ρ^4(θ)) g'(θ) + 1/ρ^2(θ)g”(θ) + 2ωsinθ/ρ(θ).Notice that in (<ref>), -λ_l,m and 1/ρ^2(θ) are even functions, while-tanθ/ρ^2(θ)+(1-b^2)sinθcosθ/ρ^4(θ)and2ωsinθ/ρ(θ)are odd functions.Thus, if we can find a solution g_left(θ) on (-π/2, 0] with g_left'(-π/2) = 0 and g_left(0) = 0, the odd extensiong(θ) =g_left(θ)on θ∈(-π/2, 0]- g_left(-θ)on θ∈(0, π/2),solves the ODE (<ref>), with the boundary conditions (<ref>) satisfied. 
The differentiability of the constructed g(θ) at 0 is of order at least three. This is because the first and third derivatives of the odd function g match automatically. Moreover, g”(0)=0 follows from the expansion (<ref>) together with g(0) = 0. Therefore, the proof of Theorem <ref> can be reduced to Theorem <ref>. There exist C ∈ℝ and a solution g of the ODE (<ref>) on (-π/2,0] such that g ∈ C^3,α((-π/2, 0]), g'(-π/2) = 0 and g(0) = 0. Rewriting equation (<ref>) as an integral equation, we have g(y)-C= -λ_l,m∫_-π/2^y [ F(y)-F(θ) ] cosθ√(sin^2θ+b^2cos^2θ) g(θ)dθ + ω∫_-π/2^ycosθ√(sin^2θ+b^2cos^2θ) dθ for some constant C ∈ℝ, where F satisfies F(0) = 0, F'(x) = √(sin^2x+b^2cos^2x)/cos x for x∈(-π/2,0]. Define the kernel K by K(y, θ) := -λ_l,m[F(y)-F(θ)]cosθ√(sin^2θ+b^2cos^2θ) for -π/2<θ≤ y ≤ 0 and K(y,θ) := 0 for -π/2 = θ≤ y ≤ 0; the function r by r(y) := ω∫_-π/2^ycosθ√(sin^2θ+b^2cos^2θ) dθ for y∈[-π/2,0]; and the domain D by D:= { (y, θ): -π/2≤θ≤ y ≤ 0 }. The equation (<ref>) becomes g(y) = ∫_-π/2^yK(y,θ)g(θ)dθ + r(y) + C. For any constant C: * The integral equation (<ref>) has a unique and continuous solution since K(y,θ) is continuous on D, by Lemma <ref>. * The boundary condition at -π/2 holds: g'(y) = -λ_l,m∫_-π/2^y√(sin^2 y+b^2cos^2 y)/cos ycosθ√(sin^2θ+b^2cos^2θ) g(θ)dθ+ ωcos y√(sin^2y+b^2cos^2y). As y → -π/2, g'(y) → 0 because g is continuous and bounded on [-π/2, 0]. Now, we prove that there exists C ∈ℝ such that the solution g of (<ref>) satisfies g(0) = 0. By Lemma <ref>, it follows that g(0) = S(0, -π/2)C + ∫_-π/2^0 S(0,s)r'(s) ds, where S(0,s) is the (unique) continuous solution of S(0,s) = 1 + ∫_s^0 K(0,v)S(v,s)dv,(0,s) ∈ D, and r'(s) = ωcos s √(sin^2s+b^2cos^2s). Since S and r' are determined, choosing C = -∫_-π/2^0 S(0,s)r'(s)ds/S(0, -π/2) makes g(0) = 0. It remains to prove that S(0, -π/2) ≠ 0. We argue by contradiction. Suppose S(0, -π/2) = 0 and let u(y) = S(y, -π/2). Then, by Lemma <ref>, u satisfies u(y) = 1 + ∫_-π/2^y K(y,v)u(v)dv, u(0) = 0. However, together with the Volterra integral equation u(y) = 0 + ∫_-π/2^y K(y,v)u(v)dv and its solution u≡ 0, the assumption u(0) = 0 contradicts Lemma <ref>, by choosing c in Lemma <ref> to be 0.§.§ Uniqueness of the stationary Rossby-Haurwitz solutions on a rotating ellipsoid We have proved the existence of the stationary Rossby-Haurwitz solutions ψ=g(θ)+Y_l^m(φ,θ) for non-zonal Y_l^m ∈𝔼_l,m (m ≠ 0). In this section, we prove that the solution ψ, or equivalently g, is uniquely determined by Y_l^m up to a constant. Consequently, the corresponding velocity U is unique for a given Y_l^m. Given a non-zonal function Y_l^m(φ,θ) ∈𝔼_l,m, let ψ(φ,θ) = g(θ) + Y_l^m(φ,θ) for some function g(θ) ∈ C^3,α((-π/2, π/2)). If ψ(φ,θ) solves the stationary Euler equations [-∂_θψ∂_φ + ∂_φψ∂_θ](Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ))=0, with the boundary conditions lim_θ→±π/2 g'(θ) = 0, then g(θ) is unique up to a constant. Suppose ψ_1 = g_1(θ) + Y_l^m(φ,θ) and ψ_2 = g_2(θ) + Y_l^m(φ,θ) solve the equation (<ref>). Then, g(θ) = g_1(θ) - g_2(θ) solves -λ_l,mg'(θ) = (Δg(θ))', which is equivalent to -λ_l,mg(θ) = (Δg(θ))+C for some constant C ∈ℝ. The equation (<ref>) has the trivial solution g(θ) = -C/λ_l,m. We now prove this solution is unique. If another g^*(θ) solves the equation (<ref>), the difference q(θ) = g(θ) - g^*(θ) must satisfy -Δ q(θ) = λ_l,mq(θ), which implies q(θ) ∈𝔼_l,m.
However, since 𝔼_l,m has basis {y_1(θ)e^imφ,y_2(θ)e^-imφ},m 0(see Section <ref>), q(θ) ∉𝔼_l,m except for q(θ)≡ 0 because non-trivial functions in 𝔼_l,m must depend on φ.This proves the solution g(θ) of (<ref>) is unique. Furthermore, it is implied that g(θ) is unique up to a constant.§.§ Travelling-wave Rossby-Haurwitz solutions on a rotating ellipsoid Similar to classical travelling-wave Rossby-Haurwitz solutions on a sphere, the travelling-wave solutions on an ellipsoid can also be obtained from the stationary solutions. The construction is stated in Theorem <ref>. Let c ∈ℝ and ψ = g(θ) + Y_l^m(φ,θ) be a solution of the stationary Euler equations (<ref>). A traveling-wave solution ψ_c with travelling speed c is constructed asψ_c(φ, θ ,t) = g(θ) + c λ_l,m f(θ) + Y_l^m(φ-ct, θ),where f(θ) is the solution of the following ODE-λ_l,m f(θ) = 1/cosθ√(sin^2θ+b^2cos ^2θ)(cosθ/√(sin^2θ+b^2cos^2θ) f'(θ))' + P(θ),in which P(θ) = ∫_-π/2^θcos(s)√(sin^2(s)+b^2cos^2(s))ds - ∫_-π/2^0cos(s)√(sin^2(s)+b^2cos^2(s))ds. We aim to find some β∈ℝ and f(θ) ∈ C^3,α( (-π/2, π/2)), such thatψ_c(φ, θ ,t) = g(θ) + β f(θ) + Y(φ-ct, θ)solves the Euler equations (<ref>) ( ∂_t + 1/cosθ√(sin^2θ+b^2cos^2θ)[-∂_θψ∂_φ + ∂_φψ∂_θ] ) (Δψ+2ωsinθ/√(sin^2θ+b^2cos^2θ)) =0. ℰ_ω Since ψ = g(θ) + Y_l^m(φ,θ) is a stationary solution of (<ref>), g should satisfy-λ_l,mg = Δ g + 2ωsinθ/√(sin^2θ+b^2cos^2θ),which can be implied from the ODE (<ref>). With the help of (<ref>), we can computeΔψ_c = -λ_l,mg+βΔ f - λ_l,mY_l^m - 2ωsinθ/√(sin^2θ+b^2cos^2θ).Plugging (<ref>) into (<ref>), we haveλ_l,m(∂ Y_l^m/∂φ)c + 1/cosθ√(sin^2θ+b^2cos^2θ)(∂ Y_l^m/∂φ)β( λ_l,m f' + (Δ f)') = 0.Then, by setting β = c λ_l,m and f to satisfyλ_l,mf' + ( Δ f )' = -cosθ√(sin^2θ+b^2cos^2θ),the equation (<ref>) holds, which means ψ_c(φ,θ,t) is a solution of (<ref>).Note that the ODE (<ref>) can be implied from the ODE (<ref>), whose existence can be proved through a similar method in Theorem <ref>, providing P(θ) is odd, smooth and bounded. Similarly, the solution f is odd and belongs to C^3,α((-π/2,π/2)).for a given Y_l^m ∈𝔼_l,m and speed c, the constructed travelling-wave solutionψ_c(φ, θ ,t) = g(θ) + c λ_l,m f(θ) + Y_l^m(φ-ct, θ)is unique in terms of the velocity field U = J gradψ.The proof is similar to Theorem <ref>. It is straightforward to verify the zonal part g(θ)+cλ_l,mf(θ) of ψ_c(φ,θ,t) is unique up to a constant. Then, the velocity field is uniquely determined. §.§ Instability of non-zonal Rossby-Haurwitz solutions on a rotating ellipsoid The stability properties of both stationary and travelling-wave solutions are of great interest. Non-zonal Rossby-Haurwitz solutions have been shown to be Lyapunov unstable on a rotating sphere (see Constantin and Germain <cit.>). We will establish an analogous result on a rotating ellipsoid.We will only prove the instability of traveling-wave solutions, as stationary solutions can be viewed as a special case of traveling-wave solutions with traveling speed zero. In the following, we will use ψ(t) to denote ψ_c(φ,θ,t) for notation simplicity. The Non-zonal Rossby-Haurwitz solutions travelling with speed cψ_c = g(θ) + cλ_l,mf(θ) + Y_l^m(φ-ct,θ)are Lyapunov unstable. 
Specifically, for a given ψ_c(t) with initial data ψ_c(0), there exists a sequence of perturbed waves ψ^n_c(t) with initial data ψ^n_c(0) →ψ_c(0), such thatn→∞lim inf{sup_t>0||ψ^n_c(t)-ψ_c(t)||^2_L^2(𝕊^2,dσ)}≥ϵ > 0,for some ϵ∈ℝ^+.For a given travelling-wave solutionψ_c(t) = g(θ) + cλ_l,mf(θ) + Y_l^m(φ-ct,θ),with initial data ψ_c(0), we construct a sequence of solutions ψ^n_c(t) with initial dataψ_c^n(0) = ψ_c(0) + 1/nf(θ) = g(θ) + (cλ_l,m+1/n)f(θ) + Y_l^m(φ,θ).The solutions ψ^n_c(t) of the Euler equations (<ref>) areψ^n_c(t) = g(θ) + (cλ_l,m+1/n)f(θ) + Y_l^m(φ-c_nt,θ), c_n = c + 1/nλ_l,m Since Y_l^m ∈𝔼_l,m, it can be decomposed to beY_l^m(φ, θ) = a_1y_1(θ) e^imφ + a_2y_2(θ) e^-imφ,for some a_1,a_2∈ℝ. Here, {y_1(θ) e^imφ,y_2(θ) e^-imφ} is the orthonormal basis of 𝔼_l,m (see Section <ref>).Without loss of generality, assuming a_10, y_1 ≢0, we have sup_t>0ψ^n_c(t)-ψ_c(t)^2_L^2(𝕊^2,dσ)= sup_t>01/nf(θ) + Y_l^m(φ-c_nt, θ) - Y_l^m(φ-ct, θ)^2_L^2(𝕊^2,dσ)≥sup_t>0 a_1y_1(θ) (e^im(φ-c_nt) - e^im(φ-ct)) + a_2y_2(θ) (e^-im(φ-c_nt) - e^-im(φ-ct))^2_L^2(𝕊^2,dσ) - 1/nf^2_L^2(𝕊^2,dσ)= sup_t>0{ a_1y_1(θ) (e^im(φ-c_nt) - e^im(φ-ct))^2_L^2(𝕊^2,dσ) + a_2y_2(θ) (e^-im(φ-c_nt) - e^-im(φ-ct))^2_L^2(𝕊^2,dσ)} - 1/nf^2_L^2(𝕊^2,dσ)≥sup_t>0 a_1y_1(θ) (e^im(φ-c_nt) - e^im(φ-ct)) ^2_L^2(𝕊^2,dσ) - 1/nf^2_L^2(𝕊^2,dσ)= sup_t>0{(∫_-π/2^π/2a_1^2y_1^2(θ)cosθ√(sin^2θ+b^2cos^2θ) dθ) ×(∫_0^2π|e^im(φ-c_nt) - e^im(φ-ct)|^2 dφ) } - 1/nf^2_L^2(𝕊^2,dσ)=sup_t>0{2π|1 - e^im(c_n-c)t|^2 }( ∫_-π/2^π/2a_1^2y_1^2(θ)cosθ√(sin^2θ+b^2cos^2θ) dθ) - 1/nf^2_L^2(𝕊^2,dσ)= 8π(∫_-π/2^π/2a_1^2y_1^2(θ)cosθ√(sin^2θ+b^2cos^2θ) dθ) - 1/nf^2_L^2(𝕊^2,dσ).Since ||f||^2_L^2(𝕊^2,dσ) < ∞, there exists N ∈ℕ, such that for all n ≥ N,sup_t ≥ 0||ψ^n_c(t)-ψ_c(t)||^2_L^2(𝕊^2,dσ)≥ 4π∫_-π/2^π/2a_1^2y_1^2(θ)cosθ√(sin^2θ+b^2cos^2θ) dθ > 0.*Data Availability Data sharing not applicable to this article as no datasets were generated or analysed during the current study.*Statements and Declarations *Conflict of interestThe author states that there is no conflict of interest.§ RESULTS ABOUT INTEGRAL EQUATIONS The following two lemmas are used to prove Theorem <ref>. In our setting, they are stated as follows. Let D to be a closed domain of kernel K, in the formD := {(y, θ): -π/2≤θ≤ y ≤ 0 }.For each constant C, assume that r(y) ∈ C^1([-π/2, 0]),s.t. r(-π/2) = 0, and K ∈ C(D). Then the unique solution g ∈ C([-π/2, 0]) of the Volterra integral equationg(y) = ∫_-π/2^y K(y,θ)g(θ) dθ + r(y) + C,y ∈[-π/2, 0]is given by the variation-of-constants formulag(y) = S(y, -π/2)C + ∫_-π/2^y S(y,s)r'(s) ds,y ∈[-π/2, 0],where S(y,s) is the unique continuous solution ofS(y,s) = 1 + ∫_s^y K(y,v)S(v,s)dv,(y,s) ∈ D.Let the equation u(y) = u_0 + ∫_-π/2^y K(y,v)u(v)dvsatisfy the following assumptions: * for every -π/2≤τ_1 ≤τ_2 ≤ y, the integral ∫_τ_1^τ_2 K(y,v)u(v)dvand∫_-π/2^y K(y,v)u(v)dvare continuous functions of y.* K(y,·) is absolutely integrable for all y ∈ [-π/2, 0]* there exsit points -π/2 = Y_0 < Y_1 < Y_2 < … < Y_N = 0,Y_i ∈ℝ, such that with y ≥ Y_i,∫_Y_i^min(y, Y_i+1) |K(y,v)|dv ≤γ < 1/2 * for every y ≥ -π/2lim_δ→ 0^+∫_y^y+δ|K(y+δ,v)|dv = 0Then, the equation (<ref>) has a unique continuous solution. Furthermore, for every c ∈ℝ, there exists precisely one value of u_0 ∈ℝ for which the solution u of (<ref>) satisfies u(0) = c.
http://arxiv.org/abs/2310.17854v1
{ "authors": [ "Chenghao Xu" ], "categories": [ "math.AP" ], "primary_category": "math.AP", "published": "20231027021526", "title": "The Non-zonal Rossby-Haurwitz Solutions of the 2D Euler Equations on a Rotating Ellipsoid" }
Random Fields from Quenched Disorder in an Archetype for Correlated Electrons: the Parallel Spin Stripe Phase of La_1.6-xNd_0.4Sr_xCuO_4 at the 1/8 Anomaly B. D. Gaulin January 14, 2024 =========================================================================================================================================================== As a popular and easy-to-implement machine learning method for solving differential equations, the physics-informed neural network (PINN) sometimes may fail and find poor solutions which bias against the exact ones. In this paper, we establish a framework of modified equation to explain the failure phenomenon and characterize the implicit bias of a general residual minimization (RM) method. We provide a simple way to derive the modified equation which models the numerical solution obtained by RM methods. Next, we show the modified solution deviates from the original exact solution. The proof uses a by-product of this paper, that is, a necessary and sufficient condition on characterizing the singularity of the coefficients. This equivalent condition can be extended to other types of equations in the future. Finally, we prove, as a complete characterization of the implicit bias, that RM method implicitly biases the numerical solution against the exact solution and towards a modified solution. In this work, we focus on elliptic equations with discontinuous coefficients, but our approach can be extended to other types of equations and our understanding of the implicit bias may shed light on further development of deep learning based methods for solving equations. 35D30, 35D35, 35R05, 35R06, 65N15§ INTRODUCTIONThe application of machine learning, particularly deep neural networks (DNNs), has gained significant attention in recent years for solving partial differential equations (PDEs) <cit.>. Machine learning techniques show great potential in addressing challenging problems <cit.>. Compared with traditional numerical schemes, such as finite difference, finite elements methods, and spectral methods, which are often limited by the “curse of dimensionality", DNNs have demonstrated success in solving many high-dimensional problems <cit.>. Although the traditional numerical methods are powerful for low-dimensional problems, it can be challenging to design a proper scheme to solve low-dimensional problems with low-regularity solutions or boundaries <cit.>. Therefore, DNNs are also promising in solving low-dimensional problems with low-regularity solutions or complex boundaries, such as problems with discontinuous elastic or dielectric constants in composite materials. A widely used approach to solving PDEs is to utilize DNNs to parameterize the solution and optimize the parameters in an objective function which usually formulated as a least-squares or variational loss function (also known as risk function). The physics-informed neural network (PINN) method was first proposed in the 1990s <cit.>, then also studied by Sirignano and Spiliopoulos under the name Deep Galerkin Method (DGM) <cit.>, and later popularized and known as PINN by Raissi et al. <cit.>. In this method, a DNN is trained to minimize the sum of the residual of the PDE and the residual of the boundary condition. The Deep Ritz method (DRM) was proposed by <cit.>, where a variational formulation is used to obtain a neural network solution by minimizing an energy functional. Alongside PINN and DRM, many other methods have been proposed or developed for solving PDE problems, for example, <cit.>. 
For further advances in PINN, we refer readers to the review articles <cit.> and the references therein. For completeness, let us mention that the operator learning also seems promising in both solving PDE problems and their inverse problems <cit.>. Despite the increasing variety of PDE-solving methods, PINN has gained substantial attention due to its simplicity and ease of implementation. The PINN risk function is simply the residual of the PDE, without requiring additional knowledge such as the variational form in DRM, which is difficult or even impossible to obtain in many problems.A theoretical study of DNNs can have significant implications and applications to design DNN-based PDE solvers. For instance, the universal approximation theorem <cit.> highlights the strength of wide (two-layer) NNs in approximating functions. Recent research in this direction includes the convergence rate with respect to network size, which has been explored in <cit.>, and studies on the rate of DNN approximation with respect to depth and width for PINNs when solving second-order elliptic equations with Dirichlet boundary condition <cit.>. Several works have also focused on the generalization error of PINNs when the exact solution to the PDE has high regularity. For example, Lu et al. <cit.> derive the generalization error bounds of two-layer neural networks in the framework of DRM for solving equation and static Schrödinger equation on the d-dimensional unit hypercube; Mishra and Molinaro <cit.> provide upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs; and Shin et al. <cit.> prove that the minimizers of PINN converge uniformly to the exact solution under the case of second order elliptic and parabolic PDEs. As low-regularity problem plays the role of potential application of DNNs in solving (probably low-dimensional) PDEs, in this paper, we investigate the use of neural networks with a least-squares risk function to solve linear and quasilinear elliptic PDEs with solutions that exhibit low regularity. This is a challenging problem due to several features inherent in the problem. Firstly, the exact solution is less regular, as it is affected by the discontinuity of coefficients. Additionally, the function space of neural networks is typically higher in regularity, as certain order derivatives are required in neural network methods, such as the derivative in the PDE and gradient training. Finally, the risk function is discretized, which brings another type of complexity to the problem. We conduct a detailed analysis of the neural network solution that results from these interactions between the neural network and the PDE, with the aim of gaining a deeper understanding of whether and how neural networks can be effectively used in the context of less regular solutions to PDEs.In this work, we first point out key observations from one-dimensional numerical experiments using PINN, and then we develop continuum model and prove theorems to explain the observed phenomena. Roughly speaking, for an elliptic equation with discontinuous coefficients, the PINN finds a numerical solution which deviates from the exact solution. With strong numerical evidences, we model this numerical solution by a modified solution, where the latter satisfies a modified equation. We will derive this modified equation in a very simple and heuristic way. It is proved that the modified solution severely deviates from the original exact solution in a quite generic way. 
Moreover, we obtain necessary and sufficient condition under which this deviation occurs. Furthermore, we prove theorems to explain the implicit bias phenomenon that even given a good initial guess (such that the NN function is sufficiently close to the exact solution), after training, the numerical solution still deviates from the original exact solution and approximates to the modified solution. Besides, we extend most of our results to the case of quasilinear elliptic equations. Our results are independent of the structure of neural network and work for any RM methods, not being exclusive to the PINN. They are starkly different from previous results that focus on the high-regularity problem and show PINN solution can converge to the exact solution as the sample number and the network size increase <cit.>. In contrast, we point out that there is an essential gap between the exact solution and the limit of numerical solution, and that the RM method in solving low-regularity PDEs may implicit bias the numerical solution against the exact solution and has a non-convergence issue. By unravelling the theoretical mechanisms, our work not only explains the implicit bias phenomena but also provides a theoretical guidance for further design of DNN algorithms in solving low-regularity PDEs. We believe this phenomenon of “failure” is more or less known to many researchers, and our main contribution of this work is to design a mathematical framework for systematically studying this failure and hence to shed light on understanding the implicit bias of the RM methods. In particular, let us highlight our understanding of the implicit bias of RM method: RM method implicitly biases the numerical solution towards the solution to a modified equation. We expect our approach will be quite general and can be developed for many other problem concerning the implicit bias of optimization algorithms for deep learning problems. The rest of this paper is organized as follows. A brief introduction to DNNs and PINN and the failure of PINN example can be found in Section <ref>. In Section <ref>, we summarize our main contribution with thorough discussion. Moreover, for readers' convenience, two flow charts are provided at the end of this section showing the connection of main results. In Section <ref>, we obtain a necessary and sufficient condition of removable singularity for equations with BV functions. Section <ref> proves that the deviation of the modified solution occurs generically and can be large and that the RM methods implicit bias the RM solution towards the modified solution. Hence this explains the failure phenomenon. In Section <ref>, we extend some of the results to the case of quasilinear elliptic equations. In appendix, we collect notations, recall the definition of BV function, and rephrase some well-known existence theorem for linear and quasilinear elliptic equations from the literature. We also give a short proof of the failure example in one-dimension and complete the proof of extensions to the quasilinear case. § PRELIMINARIESWe begin this preliminary section by introducing basic concepts of deep neural networks and deep learning based methods for solving PDEs, in particular, the method of physics-informed neural networks (PINN). Next, by a one-dimensional example, we illustrate that PINN can fail even in an extremely simple situation. We hope the reader keep this inspiring example in their mind when reading the analytic details in the rest part of this paper. 
The last subsection is left to depict the general setting, namely the linear elliptic equations and systems with BV coefficients, under which we will derive a continuum model for the numerical solution and thus prove theorems based on this model.§.§ Deep neural networks (DNN) An L-layer fully-connected neural network function u_θ:^d→^d' is defined as for each x∈^du_θ(x) = W^[L-1]σ(W^[L-2]σ(⋯ (W^[1]σ(W^[0] x + b^[0] ) + b^[1] )⋯)+b^[L-2])+b^[L-1],where the matrix W^[l]∈^m_l+1× m_l and the vector b^[l]∈^m_l+1 are called parameters, m_l∈^+ is the width of the l-th layer, and the (nonlinear) function σ:→ is known as the activation function. With a little bit abuse of notation, σ applied on a vector means entry-wise operation, namely (σ(z))_i=σ(z_i) for any subscript i. The matrices are usually reshaped and concatenated into a column vector θ, that is, θ=vec({W^[l]}_l=0^L-1,{b^[l]}_l=0^L-1). Note that the input dimension d=m_0 and the output dimension d'=m_L. §.§ Residual minimization (RM) and physics-informed neural networks (PINN) The physics-informed neural network (PINN) is a popular method for solving PDEs via neural networks. It is proposed by many research groups. Among them, the first one might belong to Sirignano and Spiliopoulos <cit.>, although they use a different name, the deep Galerkin method. The most famous work is perhaps the one written by Raissi, Perdikaris and Karniadakis <cit.>, and it gives rise to the more commonly-used name, the PINN. For further developments of the PINN, we refer the readers to the review paper <cit.> and the references therein. In our numerical experiments to be presented in the next subsection, the PINN is used to solve a given boundary value problem (BVP) of a partial differential equations (PDE). We emphasize that the phenomenon recognized and analyzed in this work will remain the same if we replace the PINN by any other residual minimization method. Here the residual of an equation refers to the difference between the left-hand-side and the right-hand-side, and we say residual minimization because these PINN type methods follow a common approach, that is, minimizing the residual risk of both the PDE and boundary conditions.The term residual minimization is also used by other researchers, for example, <cit.>. Besides, some works refer the PINN to the least-squares method, for example, <cit.>.For a given function w, a residual minimization method for solving (the BVP of) an equation is to minimize the following population risk R(w)=∫_Ω(Lw-f)^2x + γ∫_∂Ω(Bw-g)^2x.Here L is the differential operator, B is the boundary condition operator, and f (respectively, g) is a given function defined in the interior (respectively, on the boundary) of the domain Ω. The factor γ is the weight for adjusting the importance of the boundary constraints versus the one in the interior of the domain. But in practical applications, one often uses the empirical risk (also known as the objective function in optimization) as:R_S(w)= R_S,int(w)+γ R_S,bd(w)R_S,int(w) = Ω/n_int∑_x∈ S_int(Lw(x)-f(x))^2R_S,bd(w) =∂Ω/n_bd∑_x∈ S_bd(Bw(x)-g(x))^2,where R_S,int(w) is the interior empirical risk, R_S,bd(w) is the boundary empirical risk, n_int∈^+ and n_bd∈^+ are the numbers of samples in the interior dataset S_int and the boundary dataset S_bd, respectively; Ω and ∂Ω are the Lebesgue measure ^d of Ω and Hausdorff measure ^d-1 of ∂Ω, respectively. In the method of PINN, the output is a neural network and denoted by u_θ(x). 
Thus we usually abuse notation and write R(θ)=R(u_θ)=∫_Ω(Lu_θ(x)-f(x))^2x + γ∫_∂Ω(Bu_θ(x)-g(x))^2x.Similarly, for empirical risks, we write R_S(θ)=R_S(u_θ), R_S,int(θ)=R_S,int(u_θ), and R_S,bd(θ)=R_S,bd(u_θ).Given an initial parameter θ_0, then the parameters will be updated by first order optimization methods such as the gradient descent which is the forward Euler scheme of the (negative) gradient flow with step size Δ t (also known as the learning rate) as followsθ_k+1= θ_k-Δ t D_θR_S(θ_k).The samples S can be chosen as quadrature method in low dimensions, say no larger than three, while it can be chosen randomly as Monte Carlo sampling method in any dimensions including very high-dimension cases.In implementation, the sample S can vary from one iteration to another. §.§ Numerical example: failure of PINN In this subsection, we demonstrate that using PINN to solve PDEs which has no strong (or classical) solutions could be problematic and cause a non-infinitesimal error. This is illustrated by a one-dimensional example via numerical experiments.Although the example is simple, it provides us correct and insightful intuition. Throughout this work, we will revisit this example several times and explain new understanding with our theorems to be proved in later sections. We also emphasize that this kind of failure of PINN in learning the solution to certain PDE is essential and usually can not be resolved by merely selecting the network architecture, adjusting the optimization algorithm, or tuning hyperparameters of the network.Let us first briefly mention the setup of the experiment. The considered equation is one-dimensional and reads as{ L u =-D_x(AD_x u)=fin Ω=(-1,1),u=0on ∂Ω={-1,1}, .where the coefficient function A and the interior data f in (<ref>) are both piece-wise continuous and read asA(x)={ 12,x∈ (-1,0), 1,x∈ [0,1), .f(x)={0,x∈ (-1,0),-2,x∈ [0,1). .Clearly, there is no strong (or classical) solution, while the weak solution u∈ H^1((-1,1)) to this equation isu(x)= {-23x-23,x∈ (-1,0),x^2-13x-23,x∈ [0,1). .In the series of numerical experiments, we use a 1-256-256-256-1 residual network (ResNet) <cit.>. The empirical risk, interior empirical risk, and boundary empirical risk for a given function w read asR_S(w)=R_S,int(w)+γ R_S,bd(w),R_S,int(w) = Ω/n_int∑_x∈ S_int[D_x(A(x)D_x w(x))+f(x)]^2,R_S,bd(w)= ∂Ω/n_bd[(w(-1))^2+(w(1))^2] ,where we use Ω=2, ∂Ω=2, n_bd=2, and γ=1 in the experiments. We choose 1000 uniformly-sampled points in the interior of the region (namely n_int=1000). We use “tanh” activation function, Adam optimizer, and the Xavier initialization, where the variance of each entry of W^[l] is 2/m_l-1+m_l and m_l represents the width of l-th layer. Under the above setting, we obtain the network function u_θ numerically via the PINN (or more generally the RM method) after training process. This solution u_θ will be called the RM solution to the original equation. We stress that the PINN as one realization of the RM method, is not essential and can be replaced by any other RM method. In practice, u_θ is obtained when the risk function R_S is small and does not decay anymore along the training. We plot the RM solution u_θ and compare it with u in Figure <ref> (a). Obviously, the RM method fails to find the exact solution u. The gap between u and u_θ is as large as magnitude of u. Thus we call this phenomenon the failure of RM method (or failure of PINN). 
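For readers who wish to reproduce this behavior, a minimal PyTorch sketch of the residual-minimization loop for the example (<ref>) with the coefficients (<ref>) is given below. The architecture (a small tanh MLP rather than the 1-256-256-256-1 ResNet used in our experiments), the learning rate, the sample size and the number of iterations are placeholder choices. The point of the sketch is only that automatic differentiation of A(x)D_xu_θ(x) never sees the jump of A at x=0, so the empirical risk can become very small even though u_θ stays far from u.

```python
# Minimal PINN-style training loop (PyTorch) for the 1D failure example.
# The small MLP, learning rate, sample size and iteration count are placeholders.
import torch

torch.manual_seed(0)
A = lambda x: torch.where(x < 0, 0.5*torch.ones_like(x), torch.ones_like(x))
f = lambda x: torch.where(x < 0, torch.zeros_like(x), -2.0*torch.ones_like(x))

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x_bd = torch.tensor([[-1.0], [1.0]])

for step in range(20001):
    x = 2*torch.rand(1000, 1) - 1                     # interior samples in (-1, 1)
    x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    flux = A(x)*u_x
    # A is piecewise constant and x = 0 is (almost surely) never sampled,
    # so this derivative equals A(x)*u_xx almost everywhere
    flux_x = torch.autograd.grad(flux, x, torch.ones_like(flux), create_graph=True)[0]
    risk_int = 2.0*torch.mean((flux_x + f(x))**2)     # |Omega| = 2
    risk_bd = torch.sum(net(x_bd)**2)                 # |bdry|/n_bd = 1, gamma = 1
    loss = risk_int + risk_bd
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 5000 == 0:
        print(step, float(loss))
```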
By the way, we also notice that the first order derivative of u_θ seems to be piece-wisely parallel to that of u (see Figure <ref> (b)). This suggests us to focus on the derivatives and regularity of the solutions.The solution u at all points except for x=0 is smooth (even in C^∞ locally), while the first order derivative of the solution u at x=0 has a jump. Therefore, whether x=0 contributes in the numerical method is decisive. Here comes the key observation that in practical experiments the x=0 point can only be sampled with nearly zero probability as long as the distribution for sampling is absolutely continuous with respect to the Lebesgue measure. Hence the point x=0 almost never contributes to numerical experiments! By the product rule, it holds that D_x(AD_xu)=AD^2_xu+(D_xA)D_xu on (-1,1)\{0}. Since the derivative D_xA(x)=0 for the piece-wise constant function A and for all x∈ (-1,1)\{0}, the left-hand-side of the PDE is effectively equal to AD^2_xu on the whole interval (-1,1). Therefore, in the numerical experiments, the empirical risk function equals to an effective empirical risk function with probability nearly one, namelyR_S(u_θ)=R̃_S(u_θ),where the latter at a given function w reads asR̃_S(w)=2/n_int∑_x∈ S_int(A(x)D^2_xw(x)+f(x))^2+γ[(w(-1))^2+(w(1))^2]. This effective empirical risk function is in turn the empirical risk function of the RM method for the modified equation{L̃ũ =-AD^2_xũ =fin Ω=(-1,1),ũ =0on ∂Ω={-1,1}. .whose population risk function at a given function w reads asR̃(w) = ∫_-1^1(A(x)D_x^2w(x)+f(x))^2x+γ(w^2(-1)+w^2(1)).In other words, the risk (<ref>) is the discretization of (<ref>).The exact solution to the modified equation (<ref>) is denoted by ũ and explicitly reads asũ= {-12x-12,x∈ (-1,0),x^2-12x-12,x∈ [0,1). . To sum up, now we have altogether three solutions: u (the exact solution to the original equation (<ref>)), u_θ (the RM solutionto the original equation (<ref>)) and ũ (the exact solution to the modified equation (<ref>)). For completeness, we can also consider the RM solution to the modified equation (<ref>), and we denote it as ũ_θ. Figure <ref> and Table <ref> show a detailed and quantitative comparison between all these four solutions.Throughout the paper, a Banach space Y equipped with the norm ·_Y will be written as an ordered pair (Y,·_Y) when it is needed to emphasize the norm. If the norm is obvious from the context, we will simply denote it as Y. In this work, we mainly focus on Hilbert spaces such as L^2(Ω), H^1(Ω), and H^2(Ω). However, to verify our intuition, we also add one row of the L^∞(Ω) (relative) deviation to Table <ref>. Notice that u and u_θ can both access very small empirical risk, while they have a finite gap in terms of L^∞(Ω) norm as well as H^1(Ω) (and hence L^2(Ω)) norm. In some sense, it indicates the non-uniqueness of the solutions as the local/global minima of the empirical risk function. It seems that the method bias to a special solution in some implicit way. This leads to one of the central problems, the implicit bias problem, of deep learning methods for solving PDEs — why a method find such a particular solution from the infinitely many minima. This implicit bias is obviously connected to the success or failure of the methods, and hence it will be the central object of this research work. A take-away-message is the relation u_θ≈ũ_θ≈ũ≠ u and the smallness R_S(u_θ)≪ 1 and R̃_S(ũ_θ)≪ 1. Roughly speaking, the failure of RM method occurs and the RM solution can be modelled by the modified equation. 
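The deviation between u and ũ can be recomputed directly from the closed forms (<ref>) and (<ref>), without any training. A short quadrature sketch (plain Riemann sums on a fine grid, deviations measured relative to ũ) reads:

```python
# Deviations between the exact solution u and the modified solution u_tilde of
# the 1D example, computed from their closed forms by plain Riemann sums.
import numpy as np

x = np.linspace(-1, 1, 200001); dx = x[1] - x[0]
neg = x < 0

u   = np.where(neg, -2/3*x - 2/3, x**2 - 1/3*x - 2/3)
ut  = np.where(neg, -1/2*x - 1/2, x**2 - 1/2*x - 1/2)
du  = np.where(neg, -2/3, 2*x - 1/3)      # piecewise derivative of u
dut = np.where(neg, -1/2, 2*x - 1/2)      # piecewise derivative of u_tilde

L2   = lambda v: np.sqrt(np.sum(v**2)*dx)
H1   = lambda v, dv: np.sqrt(np.sum(v**2 + dv**2)*dx)
Linf = lambda v: np.max(np.abs(v))

print("relative L2   deviation:", L2(u - ut)/L2(ut))
print("relative H1   deviation:", H1(u - ut, du - dut)/H1(ut, dut))
print("relative Linf deviation:", Linf(u - ut)/Linf(ut))
```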
Looking more carefully at Figure <ref> (b), we observe that the first-order derivative of the RM solution u_θ (as well as ũ_θ) is piece-wisely parallel to the one of the exact solution u. Therefore, except for finitely many points, that is, the only point x=0 in this case, the second-order derivatives of the RM solution and the exact solution are the same. Since the point x=0 can not be sampled with nearly probability one, it is expected that the RM solution have the possibility to achieve very small empirical risk. This is validated in practical experiments.Figure <ref> (c) (or (d), respectively) shows the evolution of the empirical risk R_S (or R̃_S, respectively) along training dynamics of the RM method applied to the problem (<ref>) (or (<ref>), respectively) with the coefficient function given by (<ref>). At the initial stage of the training, the empirical risk is of order one, while at the final stage this risk can reduce to 10^-5 or 10^-6.We believe that such example where u_θ fails to learn u has been seen by many researchers. But our approach is novel and we establish a complete framework to handle such problems with focus on ũ.This eventually leads to the understanding of the failure and implicit bias of the RM methods. §.§ Linear elliptic equations with BV coefficientsIn this subsection, we introduce the general setting on the (systems of) elliptic PDEs used for the main results of this work. Some assumptions are given for the linear elliptic equations and systems. Although main contributions (See detailed description in Section <ref>) of this work focus on these linear problems, we nevertheless stress that some key results will be extended to the quasilinear setting in Section <ref>, and hopefully it can be transferred to more general setting and other PDEs in the future.For the linear case, we consider the system of elliptic equations written in the divergence form:{ L u=f in Ω,u=0 on ∂Ω, .with (L u)^α = -∑_β=1^d'·(A^αβ(x)Du^β)=-∑_β=1^d'∑_i,j=1^dD_i(A_ij^αβD_ju^β),where α,β∈{1,2,…,d'}, i, j∈{1,2,…,d}, Ω⊆^d is a bounded domain with C^1,1 boundary, measurable functions A^αβ∈ S^d× d are also symmetric in α,β, namely A^αβ=A^βα, and f∈ L^2(Ω;^d'). Now we mention the basic assumption to be used throughout the paper. [BV coefficients]Let L be the operator defined in (<ref>). Assume that for each α, β∈{1,…,d'}, there exist a scalar function χ^αβ∈ SBV^∞(Ω) with ^d-1(J_χ^αβ)< +∞ and a matrix-valued function A̅^αβ∈ C^1(Ω̅;S^d× d) such that A^αβ = χ^αβA̅^αβ. Also assume that there is some α_0,β_0∈{1,…,d'} satisfying ^d-1(J_χ^α_0β_0)>0. Furthermore, we assume there are constants χ_min,χ_max,λ̅,Λ̅>0 such that for each α,β∈{1,…,d'} and for all ξ∈^d, x ∈Ωχ_min≤χ^αβ(x)≤χ_max,λ̅ξ^2≤ξ^A̅^αβ(x)ξ≤Λ̅ξ^2. Here an SBV^∞ function is a special function of bounded variation (SBV) whose absolutely continuous part of the gradient has an L^∞ density. The precise definition and basic properties of SBV functions are given in Appendix <ref>. Figure <ref> is an illustration of a SBV function. It is no harm for the readers to think each χ^αβ as a piece-wise constant function throughout the paper. Here we give some comments on our main assumption. Assumption <ref> implies the Hadamard–Legendre condition, which is a standard condition for the existence of solution to systems of elliptic PDEs. Let L be the operator defined in (<ref>). We say that {A^αβ}_α,β=1^d' satisfy the Hadamard–Legendre condition if there exist constants λ,Λ>0 such that for all ξ^α∈^d, x ∈Ωλξ^2≤∑_α,β=1^d'(ξ^α)^ A^αβ(x)ξ^β≤Λξ^2. 
Here ξ^2=∑_α=1^d'ξ^α^2.When d'=1, the superscripts α,β can only take value 1. Thus for simplicity of notation, we will drop the superscripts throughout the paper when d'=1. In particular, for the case d'=1, the Hadamard–Legendre condition in Remark <ref> coincides with the uniform ellipticity condition, that is to say, there exist constants λ,Λ>0 satisfying λξ^2≤ξ^ A(x)ξ≤Λξ^2for all ξ∈^d, x ∈Ω. We also remark that in the proofs throughout the paper, the constant C may be different from line to line, but we usually keep track of its dependence on basic constants such as χ_min or Λ̅ and thus make the paper more readable. In the proofs, the expression U ≺ζ≺ U' means ζ=1 in U, ζ=0 outside U' and ζ∈ C_c^∞(^d), where U, U' are bounded open sets and U is compactly contained in U'.§ MAIN CONTRIBUTIONSIn this section, we describe our main contributions of this paper. After the introduction to each contribution point, several related theorems will be mentioned in an intuitive way. Most of them not only work for elliptic equations, but also work for elliptic systems, although some technical conditions may be inevitably assumed for the latter.From now on, the term “residual minimization” (or in short “RM”) is used to replace “PINN” in the main results because these analyses provided in this and later sections work for general residual minimization methods, and are not exclusive for PINN. In particular, the DNN representation is not explicitly used in the analysis. Nevertheless, the type of the risk function (also known as loss function) is more responsible to the failure or success of the machine learning based PDE solvers. In Section <ref>, we propose a hypothesis that ũ approximates u_θ well and derive the modified equation for ũ, in a general setting, to model the numerical solution obtained by RM method. This hypothesis serves as our starting point of the analysis and understanding of the implicit bias of RM method.In Section <ref>, we provide an if-and-only-if condition to characterize the singularity which appears naturally because of the discontinuous coefficients in the equations. In particular, this condition characterizes whether u is equal to ũ or not. Next, in Section <ref>, we introduce the RM-invariant subspace (T-I), defined as the set of all f which leads to u=ũ. This subspace (T-I) allow us to identify the occurrence of deviation of the numerical solution. We will show the deviation occurs generically and the relative deviation, even for the data near the RM-invariant subspace, is not small. The last contribution point is mentioned in Section <ref>, where we prove that the exact solution is unstable and hence the RM method implicit bias the exact solution towards the solution to the modified equation. In the last subsection, we present the connections of the main contributions, as well as preliminaries and by-products, by two flow charts. §.§ Modeling numerical solution by modified equationIn any sense, it is very difficult, if not impossible, to study u_θ directly. 
Fortunately, as shown in Figure <ref> (a), we have the key observation: u_θ≈ũ_θ≈ũ, that is, the RM solutions u_θ and ũ_θ both look very close to the exact solution ũ, and they are almost indistinguishable. More precisely, part (c) and part (d) of Table <ref> provide more quantitative evidence that the (relative) distances among u_θ, ũ_θ, and ũ are very small in either the L^∞(Ω), L^2(Ω), or H^1(Ω) norm. This closeness was already explained intuitively in Section <ref>, and it provides solid evidence for modelling u_θ by ũ in the one-dimensional example. We would like to extend this idea and model u_θ by ũ for more general equations and for systems (that is, d'>1). Let us start with d'=1 and a general coefficient A. The derivation is almost the same as the one in Section <ref>, but here the coefficient A may not be piece-wise constant; moreover, A is a d× d matrix-valued function instead of a scalar function. By the product rule, the divergence operator in (<ref>) is applied to A and Du separately. Furthermore, by the decomposition of SBV functions (see Definition <ref>) with d'=1, we have Lu=-∑_i,j=1^dA_ijD_iju -∑_i,j=1^d(D_i^aA_ij+D_i^jA_ij)D_ju. Here D_i^jA_ij is supported on the Lebesgue-null set J_χ. Since the number of samples is at most countable in any practical application, the probability of selecting points in the support of D_i^jA_ij is zero. Unless the algorithm is specifically designed, the contribution of D_i^jA_ij to the risk function is zero. In other words, D_i^jA_ij does not affect the optimization process. As a result, we simply omit D_i^jA_ij and obtain the approximate model (<ref>) for studying the RM methods. For the general setting with d'≥ 1 (that is, including systems), we follow the same idea and thus arrive at the modified equation for (<ref>) as follows {L̃ũ =f in Ω,ũ =0 on ∂Ω, . where for each α∈{1,…,d'} (L̃ũ)^α =-∑_β=1^d'∑_i,j=1^dA_ij^αβD_ijũ^β -∑_β=1^d'∑_i,j=1^dD_i^aA_ij^αβD_jũ^β. Throughout the paper, we call L̃ the modified operator of L and denote by u the solution to (<ref>) and by ũ the solution to (<ref>). Also, let u_θ and ũ_θ be the RM solutions, that is, the numerical solutions under RM methods (such as PINN), to the original equation (<ref>) and the modified equation (<ref>), respectively. When the equation is clear from the context, we call ũ the solution to the modified equation (or simply, the modified solution) and call u_θ the numerical solution (or RM solution). The previous numerical experiments suggest that we make the following hypothesis to model the numerical solution. [modified solution approximates RM solution] The RM solution to the problem (<ref>) can be approximated by the solution to its modified equation (<ref>). More precisely, for any given ε>0, the RM method can find a numerical solution u_θ such that u_θ-ũ_H^1(Ω)≤ε for all meaningful f. Here we neglect the details of the neural networks, such as how to design the network architecture, how to tune the hyper-parameters, and how to train the neural network parameters. These are left to future research. This hypothesis is the foundation of our work, and from now on we focus on ũ, which is more amenable because it satisfies the modified equation (<ref>). We emphasize that Hypothesis <ref> and our point of view on the modelling of the RM solution are novel. For the modified problem, we obtain a series of theorems. These, together with Hypothesis <ref>, lead to an understanding of the behavior and properties of the RM solution, in particular, its implicit bias.
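To see why Hypothesis <ref> is plausible beyond neural networks, note that any scheme which only uses pointwise values of A and f, and therefore never sees the jump set, effectively discretizes the modified operator. The following finite-difference sketch for the one-dimensional example (a standard central scheme, unrelated to the trained networks, with a placeholder grid size) indeed converges to ũ rather than to the weak solution u:

```python
# Central finite differences for  -A(x) u''(x) = f(x),  u(-1) = u(1) = 0:
# the scheme only sees pointwise values of A and f, so it discretizes the
# modified operator and its solution approaches u_tilde, not the weak solution u.
import numpy as np

n = 999                                   # number of interior grid points (placeholder)
x = np.linspace(-1.0, 1.0, n + 2)
h = x[1] - x[0]
xi = x[1:-1]
neg = xi < 0
A = np.where(neg, 0.5, 1.0)
f = np.where(neg, 0.0, -2.0)

# tridiagonal system  -A_i (u_{i+1} - 2 u_i + u_{i-1}) / h^2 = f_i
main = 2.0*A/h**2
off = -A/h**2
M = np.diag(main) + np.diag(off[:-1], 1) + np.diag(off[1:], -1)
u_fd = np.zeros(n + 2)
u_fd[1:-1] = np.linalg.solve(M, f)

u_exact = np.where(x < 0, -2/3*x - 2/3, x**2 - 1/3*x - 2/3)   # weak solution u
u_tilde = np.where(x < 0, -1/2*x - 1/2, x**2 - 1/2*x - 1/2)   # modified solution
print("max |u_fd - u_tilde| =", np.max(np.abs(u_fd - u_tilde)))
print("max |u_fd - u      | =", np.max(np.abs(u_fd - u_exact)))
```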
We will work on both elliptic equations and systems. To prove the results in the case of elliptic systems, we need a further technical condition as follows. [A priori estimates for linear system] Assume that for all ũ∈ H_0^1(Ω;^d')∩ H^2(Ω;^d'), there is constant C>0 such that ũ_H^2(Ω;^d')≤ CL̃ũ_L^2(Ω;^d'), where L̃ is defined as in (<ref>). We remark that Assumption <ref> with d'=1 implies Assumption <ref> (See Theorem <ref>). §.§ Characterizing removable singularity Intuitively, it is clear that the failure of PINN, or more generally, the RM method, is due to the singularities in the coefficients of the PDE. Moreover, whether the singularity exists is highly related to whether the exact solution coincides with the modified solution, where the later is introduced in the above subsection. In particular, if the singularity is removable then there is no such failure.In order to study the removable singularity in the coefficients, we introduce a quantity μ based on which we can establish necessary and sufficient condition. Given any ^d-1 measurable set B⊆Ω, χ∈ SBV^∞(Ω), Υ: H_0^1(Ω) → L^∞(Ω; S^d × d), and φ∈ C^1(Ω), we defineμ(B;χ,Υ,φ)=∫_B∩ J_χ(χ^+-χ^-)ν_χ^Υ[φ] Dφ^d-1.For example, if B=Ω, φ∈ C_c^1(Ω), and Υ[w]=A̅ for all w∈ H_0^1(Ω), then we have μ(Ω;χ,A̅,φ)=∫_J_χ(χ^+-χ^-)ν_χ^A̅Dφ^d-1. We thus have an essential result (Theorems <ref> and <ref>) to characterize the removable singularity in terms of μ. Applying these theorems, we can consequently find smooth function v_δ such that μ(Ω;χ,A̅,v_δ)≠ 0.See Theorems <ref> and <ref>. §.§ Identifying the occurrence of deviationWe study the occurrence of deviation u≠ũ. Thanks to Theorems <ref> and <ref>, we prove in Theorem <ref> that for specific interior data f the deviation occurs. To step further, we ask whether this occurrence of deviation is generic and whether it is large in some sense. Affirmative answers to these questions will be obtained by studying the RM-invariant subspace (T-I). The (T-I) basically identifies the interior f where u=ũ.To define (T-I), we should study the properties of L̃ first. Let us consider the largest possible domain of L̃, which is naturally a subset of H^1_0(Ω;^d'). In fact, it should be a subset of H^2(Ω;^d'). Otherwise, we have v∈ H^1_0(Ω;^d')\ H^2(Ω;^d'), and hence D_ijv^β∈ H^-1(Ω) and A^αβ_ij∈ SBV(Ω) imply that the point-wise product A^αβ_ijD_ijv^β may not be a classical function. Thus the largest possible domain of L̃ is dom(L̃)=H^1_0(Ω;^d')∩ H^2(Ω;^d').Consequently, the image of L̃, denoted by X, is X={L̃w w∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') }. Next, we introduce the RM-transformation T. Suppose that Assumptions <ref> and <ref> hold. Let u and ũ be solutions to (<ref>) and (<ref>) with data f∈ L^2(Ω;^d'), respectively. Then we can define the operator T T: X → H^-1(Ω;^d')f ↦ Tf= Lũ.Clearly, ũ is the weak solution to{ L ũ = T f in Ω,ũ =0 on ∂Ω. .In particular, when d'=1, Assumption <ref> holds automatically by Assumption <ref>. Hence, for the single elliptic equation case, namely d'=1, we have X=L^2(Ω) and T:L^2(Ω)→ H^-1(Ω).Let σ(T) be the spectrum of T. Then we will show in Theorem <ref> that the only eigenvalue of T is 1, namely σ(T)={1}. 
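For the one-dimensional example the RM-transformation can be made explicit from the closed forms: the flux Aũ' jumps by -1/4 at x=0, so Tf = Lũ = f + (1/4)δ_0 as distributions, and in particular Tf ≠ f. The quadrature sketch below checks the corresponding weak-form identity ∫ A ũ' φ' dx - ∫ f φ dx = (1/4)φ(0) for a few test functions vanishing at the endpoints, and that the same pairing vanishes when ũ is replaced by the weak solution u; the test functions and the Riemann-sum quadrature are illustrative choices.

```python
# Weak-form check of  T f = L u_tilde = f + (1/4) delta_0  for the 1D example:
# for test functions phi vanishing at the endpoints,
#   int A u_tilde' phi' dx - int f phi dx  =  (1/4) phi(0),
# while the same pairing vanishes for the weak solution u.
import numpy as np

x = np.linspace(-1, 1, 400001); dx = x[1] - x[0]
neg = x < 0
A   = np.where(neg, 0.5, 1.0)
f   = np.where(neg, 0.0, -2.0)
du  = np.where(neg, -2/3, 2*x - 1/3)     # derivative of the weak solution u
dut = np.where(neg, -1/2, 2*x - 1/2)     # derivative of the modified solution u_tilde

tests = [("1 - x^2",     1 - x**2,           -2*x,                       1.0),
         ("cos(pi x/2)", np.cos(np.pi*x/2),  -np.pi/2*np.sin(np.pi*x/2), 1.0),
         ("x(1 - x^2)",  x*(1 - x**2),       1 - 3*x**2,                 0.0)]

for name, phi, dphi, phi0 in tests:
    pair_u  = np.sum(A*du *dphi - f*phi)*dx
    pair_ut = np.sum(A*dut*dphi - f*phi)*dx
    print(f"phi = {name:12s}  u: {pair_u:+.4f}   u_tilde: {pair_ut:+.4f}"
          f"   (1/4) phi(0) = {0.25*phi0:+.4f}")
```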
Thus, to identify the occurrence of the deviation u≠ũ, we only need to characterize the invariant subspace of X under the RM-transformation T. This naturally leads to the following kernel _X(T-I), that is, the eigenspace of T corresponding to the eigenvalue 1 restricted to X: (T-I)=_X(T-I)={f∈ X Tf=f}={L̃w∈ X L̃w= Lw}. When X is clear from the context, we drop it from the subscript and denote the kernel by (T-I). We also denote its complement with respect to the whole space L^2(Ω;^d') by ((T-I))^c=L^2(Ω;^d')\(T-I). Let us explain why the space (T-I) characterizes the occurrence of the deviation. If there is a non-zero f∈ X\(T-I), then the unique solution ũ to (<ref>) (namely L̃ũ=f) satisfies L̃ũ≠ Lũ. In other words, f≠ Lũ, and hence ũ deviates from u. Therefore, to understand the implicit bias of the RM method, we only need to study the properties of (T-I). In particular, Theorem <ref> shows that, under the mild condition that the jump set is not dense, the complement of the kernel ((T-I))^c is open and dense (see also Theorem <ref> for the case of systems). As a direct result, the deviation occurs for almost all f∈ L^2(Ω;^d'). Furthermore, Theorem <ref> shows that the relative deviation u-ũ_H^1(Ω;^d')/ũ_H^1(Ω;^d') can even be unbounded. Now, with these theorems on the relation between ũ and u, we are ready to explain the phenomenon u_θ≠ u observed in the previous numerical experiments. Let us recall the phenomenon, discuss it first intuitively, and then give a more quantitative explanation. Recall that, according to Figure <ref> (a), the RM solution u_θ deviates entirely from the exact solution u. More precisely, part (b) of Table <ref> shows that the (relative) numerical errors between u and u_θ are not small in both the L^∞(Ω) and L^2(Ω) (and hence H^1(Ω)) norms. We first provide an intuitive understanding of the non-zero difference between the exact solution and the RM solution. For the exact solution u, the flux AD_xu has to be continuous; otherwise its derivative would contain a Dirac-like contribution. Roughly speaking, a Dirac-like function is non-zero only at a single point, yet its integral over the whole space is non-zero, so its effect cannot be ignored even in the weak sense. However, the source term f is a classical function defined pointwise; in particular, it is not Dirac-like. Since A is discontinuous, in order to make AD_xu continuous, D_xu has to be discontinuous, as shown in Figure <ref>(b). For the RM solution u_θ, since the isolated discontinuities of A are never sampled exactly, any candidate whose first-order derivative is piecewise parallel to that of the exact solution minimizes the empirical loss. The frequency principle <cit.> shows that deep neural networks implicitly prefer low-frequency functions when fitting training data. Roughly speaking, among all feasible minimizers, the one with a continuous first-order derivative is the low-frequency one, and it is the one learned by the RM method, as shown in Figure <ref>(b). A rigorous connection between the frequency principle and the implicit bias of the RM method is beyond the scope of this paper and is left to future work. Next, we conclude the discussion of the occurrence of the deviation with a more quantitative remark which explains the phenomenon u_θ≠ u. Theorem <ref> leads to a finite error of the RM solution. In fact, if we assume Hypothesis <ref> with ε≤C/2, then we can numerically achieve u_θ such that ũ-u_θ_L^2(Ω;^d')≤C/2.
Consequently, the deviation u-ũ_H^1(Ω;^d')≥ C leads to the estimateu-u_θ_L^2(Ω;^d')≥u-ũ_L^2(Ω;^d')-ũ-u_θ_L^2(Ω;^d')≥C/2,which implies a finite (non-infinitesimal) numerical error when the gap u-ũ_L^2(Ω;^d') takes a non-infinitesimal value.§.§ Understanding the implicit biasCompared to the above results on the deviation, it is more important to understand the implicit bias of the RM method. The latter has to be more dynamical. One may ask whether the dynamics (and more precisely the initialization of the dynamics) matters. In particular, shall we still expect the failure of RM method for the previous example, if we take the initial output function u_θ(0) being sufficiently close to the exact solution u? We should study this in both numerical and theoretical way, and eventually this leads to the understanding of the implicit bias of RM methods.Thanks to the well-known universal approximation theorem, the exact solution u from the above failure example can be approximated well by, for example, a two-layer neural network. Thus we can first use supervised learning to find sufficiently good parameter and then apply RM methods with such good initialization.In numerical experiments, we still take the example (<ref>) and the results are shown in Figure <ref>. We use supervised learning to find neural network function u_θ^SV with parameter θ^SV, where the empirical risk function reads asR_S^SV(u_θ^SV)=Ω/n_int∑_x∈ S_int((u_θ^SV(x)-u(x))^2+(D_xu_θ^SV(x)-D_xu(x))^2)+γR_S,bd(u_θ^SV).By Figure <ref> (a), u_θ^SV almost overlaps u as is expected.Next, let θ^SV be the initial parameter and apply the RM method to the original equation (<ref>) and the modified equation (<ref>), respectively. In other words, we train the neural network with R_S and R̃_S, respectively, until the risk is small and does not decay anymore. After training, the output functions are denoted as u_θ^SV→RM and ũ_θ^SV→RM, respectively. We observe that both output functions are very close to ũ, as shown in Figure <ref> (b). For a comprehensive comparison, we also plot u_θ and ũ_θ and their derivatives in Figure <ref>. Roughly speaking, we have u≈ u_θ^SV and u_θ≈ũ_θ≈ũ≈ u_θ^SV→RM≈ũ_θ^SV→RM. Therefore, even given a sufficiently good initialization, the RM method implicitly biases the numerical solution against the exact solution u and towards the solution ũ to the modified equation. We prove Theorems <ref> and Proposition <ref> which essentially explains the implicit bias. In the following remark, we provide such theoretical explanation to the implicit bias phenomenon shown in Figure <ref>. (1) The effective risk is large at u_θ^SV. More precisely, R̃_S(u_θ^SV)≈R̃(u_θ^SV)≥ C_0 for some finite C_0>0. Hence the risk is very likely to be decreased along the training dynamics. This explains the RM method implicitly biases against u_θ^SV.(2) Suppose that the RM method achieves a very small (effective) empirical risk after training. That is R̃_S(u_θ^SV→RM)≤ for very small >0. Thenu_θ^SV→RM-ũ_H^1(Ω;^d')≤ C√(R̃(u_θ^SV→RM))≈ C√(R̃_S(u_θ^SV→RM))≤ C√().This explains the RM method implicitly biases towards ũ.This contribution can also be understood as follows. From the Observation (2), we have the commonly-seen phenomenon that R(u_θ)≪ 1 while u-u_θ≥ C for some finite C>0. Now these experiments and phenomenon answers the reverse statement is also true: if u-u_θ≪ 1, then R(u_θ)≥ C_0 for some constant C_0. It shows that the exact solution is unstable in the sense of RM method and implicitly biases towards the solution to the modified equation. 
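For readers who wish to reproduce this qualitative behaviour, the following self-contained PyTorch sketch implements the residual-minimization stage of the one-dimensional experiment described above; the supervised pre-training stage is only indicated in the comments, and the network size, optimizer, sample sizes, penalty weight γ, and data f are hypothetical choices of ours rather than the exact configuration used for the figures.

import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def A(x):                                    # piecewise-constant coefficient with a jump at x = 0
    return torch.where(x < 0, torch.full_like(x, 0.5), torch.full_like(x, 1.0))

f = lambda x: torch.ones_like(x)             # hypothetical interior data

def rm_risk(n_int=256, gamma=100.0):
    # empirical strong-form (PINN) risk R_S on Omega = (-1, 1); the jump set {0} is never
    # sampled, so the residual below coincides with that of the modified operator
    x = (2.0 * torch.rand(n_int, 1) - 1.0).requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    res = -A(x) * u_xx - f(x)
    xb = torch.tensor([[-1.0], [1.0]])                               # boundary samples
    return 2.0 * res.pow(2).mean() + gamma * net(xb).pow(2).mean()

# Stage 1 (supervised pre-training of theta^SV towards the exact u) is omitted; assume the
# parameters already place the output close to u.  Stage 2 below runs plain residual
# minimization: the risk decreases while the output drifts towards the modified solution.
for step in range(5000):
    opt.zero_grad()
    loss = rm_risk()
    loss.backward()
    opt.step()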
§.§ Connection of the contributionsWe conclude the main contribution section by presenting two figures within which the readers may find the connections between our main results mentioned above as well as more preliminary lemmas, propositions, etc. The arrows show the logic flow and various colors correspond to different types of the results.§ REMOVABILITY OF SINGULARITYLet us explain the title of this section. For χ∈ SBV(Ω), we say the set of singularity J_χ is removable if ^d-1(J_χ)=0. In this section, we obtain the equivalence between the removability of singularity and the condition μ=0. The quantity μ is essentially an integral and defined by (<ref>) in Section <ref>. The advantage of using μ, as what we do in latter sections, is that it allows us to estimate the pairing ⟨ Lu-Lũ,φ⟩_H^-1(Ω),H^1_0(Ω) in a quantitative way, and hence we can estimate the difference u-ũ_H^1(Ω). §.§ Necessary and sufficient condition of removable singularityWe begin with two simple lemmas: one to bound χ^± (defined in Definition <ref>) and the other to construct a smooth cutoff function. The latter one (namely Lemma <ref>) is standard, but we provide the proof for completeness.If χ∈ L^∞(Ω)∩SBV^∞(Ω) with ^d-1(J_χ)>0, then χ^±(x)≤χ_L^∞(Ω) for all x∈ J_χ. We prove by contradiction. Suppose there is an x_0∈ J_χ satisfying χ^±(x_0)>χ_L^∞(Ω). Denote B^±_ρ(x_0,ν)={x∈ B_ρ(x_0) ± (x_0-x)^ν>0}. Thus we havelim_ρ→ 01/B^±_ρ(x_0,ν)∫_B^±_ρ(x_0,ν)χ(y)-χ^±(x_0)y>0,which contradicts the definition of χ^±. Suppose that bounded open sets U⊆ U'⊆^d satisfy (U,∂ U')>2δ>0. Then there is a cutoff function ζ∈ C_c^∞(U') such that U ≺ζ≺ U' and Dζ_L^∞(U')≤C/δ, where C depends only on d.Define ρ(x)=1/Cexp(1/x^2-1) for x∈ B_1(0) and ρ(x)=0 for x∈^d\ B_1(0), where the constant C=∫_B_1(0)exp(1/x^2-1)x only depends on d. Thus ∫_^dρ(x)x=1 and ρ∈ C_c^∞(^d) with compact support B_1(0). For each δ>0, define ρ_δ(x)=δ^-dρ(δ^-1x) and _U_δ the characteristic function of U_δ, where U_δ={x∈^d dist(x,U)<δ}. Let ζ = ρ_δ*_U_δ. It is clear that U ≺ζ≺ U'.Therefore, we have for all x∈ U'Dζ(x)=(_U_δ * Dρ_δ)(x) =∫_B_δ(0)_U_δ(x-y)δ^-d-1Dρ(δ^-1y)y≤δ^-1∫_B_1(0)Dρ(y)y≤C/δ,where C only depends on d. Now we are ready to obtain the first important result in this work. That is a necessary and sufficient condition for the removable singularities of SBV functions. Some key quantities and sets constructed in the proof of Theorem <ref> are illustrated with various colors in the schematic diagram, namely Figure <ref>. To clarify our ideas, we also make three claims which play the roles as milestones on the road of our proof. Let Υ: H_0^1(Ω) → L^∞(Ω; S^d × d) and χ∈ L^∞(Ω)∩ SBV(Ω) with ^d-1(J_χ)< +∞. Suppose there exist constants λ_0,Λ_0>0 satisfyingλ_0ξ^2≤ξ^Υ[w](x)ξ≤Λ_0ξ^2for all w∈ H_0^1(Ω), ξ∈^d, and x∈Ω. We have ^d-1(J_χ)=0 if and only if μ(Ω;χ,Υ,v)=0,∀ v∈ C_c^1(Ω).Before we prove this theorem, we would like to stressthat the sufficient part of this theorem is non-trivial. Even for a special case where Υ[w]=A̅ holds for all w∈ H^1_0(Ω), the result of Theorem <ref> is not standard. By the definition of μ in (<ref>), the quantity in (<ref>) with Υ[w]=A̅ reads asμ(Ω;χ,A̅,v)=∫_J_χ(χ^+-χ^-)ν_χ^A̅Dv^d-1=μ̅(Dv),where μ̅ is a Radon measure on Ω such that for all φ̅∈ C(Ω;^d)μ̅(φ̅)=∫_J_χ(χ^+-χ^-)ν_χ^A̅φ̅^d-1.At first glance, it looks like a version of the fundamental lemma of the calculus of variation (sometimes also known as the Du Bois-Reymond lemma), which says that if a Radon measure μ̂ satisfying μ̂(φ̂)=0 for all φ̂∈ C_c^∞(Ω), then μ̂=0 on Ω. 
However, the setting in Theorem <ref> is quite different. In fact Dv is in the form of a gradient. Therefore, what we have is not the fact that μ̅(φ̅)=0 for all φ̅∈ C_c^∞(Ω;^d), but only the condition that μ̅(Dv)=0 for all v∈ C_c^1(Ω). In general, such condition will not imply μ̅=0. In our proof of Theorem <ref>, we essentially make use of the geometric structure of the jump part of BV function.Besides, Υ takes a quite general form depending on v and not necessarily being constant A̅. This general form is necessary to develop our theory into the quasilinear case (See Section <ref>).The necessary part of the statement is easy. In fact, by Corollary <ref>, ^d-1(J_χ)=0 implies χ∈ W^1,1(Ω), and hence μ(Ω;χ,Υ,v)=0 holds. In the rest of the proof, we show by the method of contradiction that the condition (<ref>) is also sufficient. Suppose that μ(Ω;χ,Υ,v)=0 for all v∈ C_c^1(Ω) and ^d-1(J_χ)>0. There exist an ^d-1 measurable set Σ, a C^1-hypersurfaces S, and a function V such that Σ⊆ J_χ∩ S andμ(Σ;χ,Υ,V)>0.Let J_χ^±={x∈ J_χ χ^+(x)-χ^-(x)≷ 0}. Clearly, J_χ^± are ^d-1 measurable sets. Noticing ^d-1(J_χ)=^d-1(J_χ^+)+^d-1(J_χ^-)>0, without loss of generality, we suppose that ^d-1(J_χ^+)>0. By the structure theorem of J_χ, namely Theorem <ref>, there is a countable sequence of C^1-hypersurfaces {S_i}_i=1^∞ satisfying ^d-1(J_χ\(⋃_i=1^∞S_i))=0. Thus we can pick an S from {S_i}_i=1^∞ satisfying ^d-1(J_χ^+∩ S)>0. Note that S is a C^1-hypersurface. Thus for each z∈ S, there exist a C^1 function h, a permutation of coordinates mapping τ: ^d→^d, x↦τ(x), and an open ball B_r_z(z'_τ)⊆^d-1 with r_z>0 such that S can locally be represented by the graph of h: (x_τ)_d=h(x'_τ), x'_τ∈ B_r_z(z'_τ). Here the shorthand notation z'_τ means ((z_τ)_1,⋯,(z_τ)_d-1). We also write x_τ=τ(x) for simplicity. Consider the transformation ϕ: B_r_z(z'_τ)×→ B_r_z(z'_τ)×, x_τ↦ y=ϕ(x) defined by{[ y_i=ϕ_i(x_τ)=(x_τ)_i, i∈{1, …, d-1},;y_d=ϕ_d(x_τ)=(x_τ)_d-h(x_τ'), ].and its inverse transformation ϕ=ψ^-1{[ (x_τ)_i= ψ_i(y)=y_i, i∈{1, …, d-1},;(x_τ)_d= ψ_d(y)=y_d+h(y'). ].In short, we writey=ϕ(x_τ)=(ϕ_1(x_τ),…,ϕ_d(x_τ)),x_τ =ψ(y)=(ψ_1(x_τ),…,ψ_d(x_τ)).By this construction, we have Dϕ= Dψ=1.Let B_z=B_r_z(z'_τ)× [-r_z,r_z]. For each y∈ B_z, defineΨ(y) =τ^-1∘ψ(y)=τ^-1(y_1,…,y_d-1,y_d+h(y_1,…,y_d-1)).Thus Ψ is a C^1 diffeomorphism between B_z. Denote U_z=Ψ(B_z). If we define for x∈ U_zΦ(x)=ϕ∘τ(x)=((x_τ)_1,…,(x_τ)_d-1,(x_τ)_d-h((x_τ)_1,…,(x_τ)_d-1)),then Φ∘Ψ=I on B_z and Ψ∘Φ=I on U_z with DΦ= DΨ=1. See Figure <ref> for the relations between these transformations. Let V(x)=Φ_d(x) for all x∈ U_z. Clearly, V(x)=0 on U_z∩ S. By choosing r_z small enough, we can require that 0<DV(x)<C on U_z, where C only depends on d and J_χ. Let us choose an appropriate U_z such that ^d-1(Ψ(B_r_z/2(z'_τ)× [-r_z/2,r_z/2])∩ S)>0. Thanks to Lindelöf's lemma, such U_z exists because the manifold S is second countable and a countable union of such Ψ(B_r_z/2(z'_τ)× [-r_z/2,r_z/2]) must cover S, and hence also cover J_χ^+∩ S. Let Σ=J_χ^+∩Ψ(B_r_z/2(z'_τ)×{0})⊆ J_χ∩ S and Σ'=J_χ^+ ∩ U_z∩ S.Note that DV= DVν_χ on Σ' and χ^+-χ^->0 on Σ. Thereforeμ(Σ;χ,Υ,V)=∫_Σ(χ^+-χ^-)ν_χ^Υ[V]ν_χDV^d-1>0,where we use the uniform ellipticity of Υ[V].There is a “(d-1)-dimensional cube” Q_k⊆ B_r_z/2(z_τ') with Q̅_k=Q_k×{0} satisfyingμ(Ψ(Q̅_k);χ,Υ,V)>0. For each δ>0, there exists countably many dyadic cubes {Q_k}_k=1^∞ which are almost disjoint and satisfy Q_k⊆ B_r_z/2(z'_τ), Σ⊆Ψ(⋃_k=1^∞Q̅_k), and ^d-1(Ψ(⋃_k=1^∞Q̅_k)\Σ)< δ. 
Thus we have∑_k=1^∞μ(Ψ(Q̅_k);χ,Υ,V)=μ(Ψ(⋃_k=1^∞Q̅_k);χ,Υ,V)=μ(Σ;χ,Υ,V)+μ(Ψ(⋃_k=1^∞Q̅_k)\Σ;χ,Υ,V)≥μ(Σ;χ,Υ,V)-2χ_L^∞(Ω)Λ_0DV_L^∞(U_z)^d-1(Ψ(⋃_k=1^∞Q̅_k)\Σ)>0,where the first equality is due to that {Q_k}_k=1^∞ are almost disjoint, in the third inequality we use Lemma <ref> and uniform ellipticity, and δ is taken small enough in the last line. Thus there is some Q_k satisfying μ(Ψ(Q̅_k);χ,Υ,V)>0. There is a function v∈ C_c^1(Ω) such thatμ(Ω;χ,Υ,v)>0.This claim contradicts μ(Ω;χ,Υ,v)=0, and hence the statement of the theorem is completed.Towards the claim, we consider the -neighborhood of Ψ(Q̅_k) as followsO^={y∈Ω y-x< for somex∈Ψ(Q̅_k)},where < 1/2(Ψ(Q̅_k), ∂ U_z). This is admissible since(Ψ(Q̅_k),∂ U_z)> (Ψ( B_r_z/2(z'_τ)× [-r_z/2,r_z/2]), ∂ U_z)≥ 0.The neighborhood O^/2 is defined similarly. By Lemma <ref> with(O^/2,∂ O^)>/4, there is a cutoff function ζ∈ C_c^∞(O^) such thatO^/2≺ζ≺ O^ and Dζ_L^∞(O^)≤C/,where C depends only on d. Let v = ζ V. Note that μ(O^;χ,Υ,v)=μ(Ω;χ,Υ,v) due to v∈ C_c^1(O^). It is sufficient to consider the integration μ(O^;χ,Υ,v). Let O_S^=O^∩ S. Take the decomposition as shown in Figure <ref>:μ(O^;χ,Υ,v) =μ(O_S^;χ,Υ,v)+μ(O^\ O_S^;χ,Υ,v).We will later show that the first integration has a lower bound, that is,μ(O_S^;χ,Υ,v)≥3/4μ(Ψ(Q̅_k);χ,Υ,V),while the second integration has a small contribution, that is,μ(O^\ O_S^;χ,Υ,v)≤1/2μ(Ψ(Q̅_k);χ,Υ,V).These estimates with the decomposition together lead to μ(O^;χ,Υ,v)≥1/4μ(Ψ(Q̅_k);χ,Υ,V)>0,and therefore the proof is completed.Notice that V(x) = 0 for all x∈Ψ(B_r_z(z'_τ)×{0}). ThusV(x)≤DV_L^∞(O^) <C,∀ x∈ O^,where C depends on d and J_χ. This with (<ref>) leads toDv(x)≤Dζ(x)V(x)+ζ(x)DV(x)<C,∀ x∈ O^, where C depends on d and J_χ. By monotonicity of the measure ^d-1, we havelim_→ 0^d-1(O_S^∩ J_χ)= ^d-1(Ψ(Q̅_k)∩ J_χ).Hence for small enough constant _1>0, we obtainμ(O_S^;χ,Υ,v)=μ(Ψ(Q̅_k);χ,Υ,v)+μ(O_S^\Ψ(Q̅_k);χ,Υ,v)=μ(Ψ(Q̅_k);χ,Υ,V)+ ∫_(O_S^\Ψ(Q̅_k))∩ J_χ(χ^+-χ^-)ν_χ^Υ[v]Dv^d-1≥μ(Ψ(Q̅_k);χ,Υ,V)-2χ_L^∞(Ω)Λ_0Dv_L^∞(O^)^d-1((O_S^\Ψ(Q̅_k))∩ J_χ)≥3/4μ(Ψ(Q̅_k);χ,Υ,V),where the second equality is due to v=V on Ψ(Q̅_k)∩ J_χ and the last step holds by choosing <_1 .Similarly, by monotonicity of the measure ^d-1, we havelim_→ 0^d-1((O^\ O_S^)∩ J_χ)=0.By choosing <_2 for small enough constant _2>0 and combining (<ref>), we obtain μ(O^\ O_S^;χ,Υ,v) ≤ 2χ_L^∞(Ω)Λ_0Dv_L^∞(O^)^d-1((O^\ O_S^)∩ J_χ)≤1/2μ(Ψ(Q̅_k);χ,Υ, V).As discussed above, this completes the whole proof. Now we extend this result to the case of systems.Let Υ^αβ: H_0^1(Ω) → L^∞(Ω; S^d × d) and χ^αβ∈ L^∞(Ω)∩ SBV(Ω) with ^d-1(J_χ^αβ)< +∞ for each α, β∈{1,…,d'} with d'≥ 1. Suppose that there exist λ_0,Λ_0>0 such that for all α,β∈{1,…,d'}, w∈ H^1_0(Ω), ξ∈^d, x∈Ωλ_0ξ^2≤ξ^Υ^αβ[w]ξ≤Λ_0ξ^2. Then ^d-1(J_χ^αβ)=0 for all α, β∈{1,…,d'} if and only if∑_β=1^d'μ(Ω;χ^αβ,Υ^αβ,v^β)=0,∀ v∈ C^1_c(Ω;^d'),∀α∈{1,2,…,d'}. The proof is a standard adaptation of proof of Theorem <ref>. We highlight some key modification, omit the details, and outline the proof by just stating three claims which are counterpart of those in Theorem <ref>. The necessary part is clear. As above, we show by the method of contradiction that the condition (<ref>) is also sufficient still by dividing this proof into three parts. Suppose that for some α_0, β_0∈{1,…,d'}, ∑_β=1^d'μ(Ω;χ^α_0β,Υ^α_0β, v^β)=0 for all v∈ C_c^1(Ω;^d') and ^d-1(J_χ^α_0β_0)>0. 
There are α_0,β_0 ∈{1,…,d'}, an ^d-1 measurable set Σ, a C^1-hypersurface S and a function V∈ C^1(Ω;^d') such that Σ⊆ J_χ^α_0β_0∩ S andμ(Σ;χ^α_0β_0,Υ^α_0β_0,V^β_0)>0Following the proof of Claim 1 in Theorem <ref> with J_χ replaced by J_χ^α_0β_0, we can construct V^β_0(x)=Φ_d(x) for all x∈ U_z. Thus by defining V=(0,…,V^β_0,…,0)^∈ C^1(U_z;^d'), it is easy to check thatμ(Σ;χ^α_0β_0,Υ^α_0β_0,V^β_0)>0.Comparing with the proof of Theorem <ref>, we emphasize that the new ingredient the definition of V which is a vector valued function with only one non-zero entry. By this construction of V, we further see that for all ^d-1 measurable set B,∑_β=1^d'μ(B;χ^α_0β,Υ^α_0β,V^β)=μ(B;χ^α_0β_0,Υ^α_0β_0,V^β_0). For the rest of the proof, we simply state the two claims and omit their proofs for which one can refer to the proof of Theorem <ref>. There are α_0, β_0∈{1,…,d'} and a “(d-1)-dimensional cube” Q_k⊆ B_r_z/2(z_τ') with Q̅_k=Q_k×{0} such that μ(Ψ(Q̅_k);χ^α_0β_0,Υ^α_0,β_0,V^β_0)>0. There are α_0, β_0∈{1,…,d'} and a function v∈ C_c^1(Ω;^d') such that∑_β=1^d'μ(Ω;χ^α_0β,Υ^α_0,β,v^β)=μ(Ω;χ^α_0β_0,Υ^α_0,β_0,v^β_0)>0.§.§ Singularity in linear elliptic equations Now we apply the removable singularity theorems (Theorem <ref> and Theorem <ref>) to the case of linear elliptic equations as well as systems.In Theorem <ref> (as well as its counterpart for system, namely Theorem <ref>), we construct a more smooth function v_δ in C_c^∞(Ω) instead of a function v just in C_c^1(Ω), as in the statement of Theorem <ref>. On the one hand, to guarantee the well-definedness of L̃v_δ in L^2 space, v_δ needs to be sufficiently regular, at least belonging to W^2,p(Ω). On the other hand, to quantify the deviation of the numerical solution, we have to achieve a particular solution v_δ to the modified equation (<ref>) with a carefully chosen f. Therefore, this improvement on the regularity of v_δ is inevitable. Suppose that Assumption <ref> holds with d'=1. Then there exists a v_δ∈ C_c^∞(Ω) such that μ(Ω;χ,A̅,v_δ)> 0. Let Υ[w]=A̅ for all w∈ H_0^1(Ω). By Theorem <ref> there is a v∈ C_c^1(Ω) such thatμ(Ω;χ,A̅,v)=∫_J_χ(χ^+-χ^-)ν_χ^A̅Dv^d-1>0.To show (<ref>), we approximate v by a smooth function v_δ∈ C_c^∞(Ω). Similar to the proof of Lemma <ref>, choose a mollifier ρ∈ C_c^∞(^d) with compact support B_1(0). For any δ>0, define ρ_δ(x)=δ^-dρ(δ^-1x), x∈^d and v_δ = v*ρ_δ with δ<1/4(Ψ(Q̅_k),∂ U_z)) (See (<ref>)). Thus We have Dv_δ∈ C_c^∞(Ω) satisfying Dv_δ = v*Dρ_δ.Since v∈ C_c^1(Ω), we obtain lim_δ→ 0v*Dρ_δ = Dv uniformly in Ω. Since ^d-1(J_χ)<+∞, we have for δ small enoughμ(Ω;χ,A̅,v_δ-v)≤ 2χ_maxΛ̅^d-1(J_χ)Dv_δ-Dv≤1/2μ(Ω;J_χ,A̅,v).Thus we complete the proof by μ(Ω;χ,A̅,v_δ)=μ(Ω;χ,A̅,v_δ-v)+μ(Ω;χ,A̅,v)≥1/2μ(Ω;χ,A̅,v)>0. Suppose that Assumption <ref> holds with d'≥ 1. Then there exists a v_δ∈ C_c^∞(Ω;^d'), such that ∑_β=1^d'μ(Ω;χ^α_0β,A̅^α_0β,v_δ^β)> 0.Apply Theorem <ref> with Υ^αβ[w]=A̅^αβ for all α,β∈{1,…,d'} and w∈ H_0^1(Ω). Therefore, the proof is similar to that of Theorem <ref>.§ DEVIATION AND IMPLICIT BIAS OF RM METHODS Based on the characterization of the singularities, we are ready to study the deviation and implicit bias of RM formulation for linear elliptic equations and systems. In Section <ref>, we state some preliminary results on the existence and a priori estimates of linear elliptic equations and systems which will be applied to equations (<ref>) and (<ref>). 
In Section <ref>, we show that the solution to the modified equation is deviated from the one to the original equation, which is a theoretical explanation to experiments mentioned in Section <ref>. Consequently, we ask what kind of data f gives rise to such deviation and to what extent does such f occupy in L^2 space. We give a characterization by utilizing the RM-transformation T in Section <ref>, which gives a complete answer to the above question. In addition to the deviation in Section <ref>, we also study relative deviation in Section <ref>.Finally, in Section <ref>, we prove the implicit bias of RM method towards the solution to the modified equation. In fact, even if we choose initialization θ(0) such that the output function, u_θ(0), is close enough to the true solution of equation (<ref>), u, after training via gradient flow with RM risk, the parameters θ will evolve and converge to some θ(∞) with u_θ(∞) which is very close to the solution to equation (<ref>) ũ. This shows the RM methods at the exact solution u is unstable under small perturbation, and also the RM methods implicitly biases towards the solution to the modified equation. §.§ Existence and a priori estimatesWe prove the existence of solutions to the original and modified equations together with their a priori estimates. These results are standard, and we include them here for completeness.   * Suppose that Assumption <ref> holds with d'≥1. For any f∈ L^2(Ω;^d'), system (<ref>) has a unique solution u∈ H^1_0(Ω;^d'). Moreover, there exists a constant C>0 such that u_H^1(Ω;^d')≤Cf_L^2(Ω;^d').* Suppose that Assumption <ref> holds with d'≥ 1. For any f∈ H^-1(Ω;^d'),system (<ref>) has a unique solution u∈ H^1_0(Ω;^d'). Moreover, there exists a constant C>0 such that 1/Cf_H^-1(Ω;^d')≤u_H^1(Ω;^d')≤Cf_H^-1(Ω;^d').* Suppose that Assumption <ref> and <ref> hold with d'≥ 1. For any f∈ X, system (<ref>) has a unique solution ũ∈ H^1_0(Ω;^d')∩ H^2(Ω;^d').Moreover, there exists a constant C>0 such that 1/Cf_L^2(Ω;^d')≤u_H^2(Ω;^d')≤Cf_L^2(Ω;^d'). The constants C's is independent of and f.  * See Theorem <ref> which is rephrased from standard textbook such as <cit.> for d'=1, and Theorem <ref> which is rephrased from standard textbook such as <cit.> for d'≥ 1.* The existence and the second inequality is also standard and can be found in Theorems <ref> and <ref>. To show the first inequality, we notice for all φ∈ H^1_0(Ω;^d') with d'≥ 1 and φ_H^1(Ω;^d')=1 ⟨ f, φ⟩_H^-1(Ω;^d'),H^1_0(Ω;^d')≤ CDφ_L^2(Ω;^d')Du_L^2(Ω;^d')≤ CDu_L^2(Ω;^d'),where C depends on Ω and A. Taking supremum, we obtain f_H^-1(Ω;^d')≤ Cũ_H^1(Ω;^d').* For d'=1, X=L^2(Ω), Theorem <ref> with p=2 guarantees the existence of the solution ũ to the equation-∑_i,j=1^d(A̅_ijD_ijũ+χ^-1D_i^aA_ijD_jũ)=χ^-1f.Its solution ũ satisfiesũ_H^2(Ω)≤ Cχ^-1f_L^2(Ω)≤χ_min^-1Cf_L^2(Ω),where C depends on Ω, χ, and A̅. Recall A=χA̅. Obviously, ũ is also the solution toL̃ũ=-∑_i,j=1^d(A_ijD_ijũ+D_i^aA_ijD_jũ) =f. Note that L̃ũ is the weighted sum of D_jũ and D_ijũ with L^∞ coefficients A_ij and D_i^aA_ij, respectively. We have the following inequalityf_L^2(Ω)= L̃ũ_L^2(Ω)≤ Cũ_H^2(Ω),where C is independent of f.For d'> 1, the set X in (<ref>) is well-defined, and hence the existence to the equation (<ref>) holds.And since for all ũ∈ X, L̃w is a linear combination of ũ up to second order and A^αβ∈ L^∞(Ω;S^d× d) and D^aA^αβ∈ L^∞(Ω;S^d× d) for all α, β. Thus for all ũ∈ X, there is C>0 suchf_L^2(Ω;^d')≤ Cũ_H^2(Ω;^d').and by Assumption <ref>, second inequality in (3) and the uniqueness hold. 
§.§ Deviation occurs In this subsection, we prove that, for some specific f, the distance between the solutions to the original and modified equations is non-zero. According to our modelling in Section <ref> and numerical simulation, the solutions u_θ and ũ are very close to each other. Therefore, in the worst case, the deviation of RM solution is not negligible for the elliptic equations. And this explains why PINN sometimes may fail as shown in Section <ref>. The goal is clear, and essential we need to find a particular data f such that the corresponding u is not equal to ũ. In fact, as we mentioned previously, the set of f which induces a gap between u and ũ is identified by the kernel (T-I). Hence the transformation T is informative and required to be investigated. We will, in particular, show the compactness and the fact that it is generically not equal to identity. This is given by the following proposition.Suppose that Assumption <ref> holds with d'≥ 1. Let u and ũ be solutions to the original equation/system (<ref>) and modified equation/system (<ref>) with data f∈ X, respectively. Let T be the operator defined in (<ref>). Then * L (H^1_0, ·_H^1)→( H^-1,·_H^-1) is a bounded linear operator,* T (X, ·_L^2)→( H^-1,·_H^-1) is a bounded linear operator,* (Tf-f)^α=(Lũ-L̃ũ)^α=-∑_β=1^d'(χ^αβ+-χ^αβ-)ν_χ^αβA̅^αβDũ^β^d-1 for all α∈{1,…,d'} is a Radon measure and Tf-f is in H^-1(Ω;^d')* T≠ I, that is, T is not the identity on L^2(Ω). Clearly, L and T are both linear. * This is proved by Theorem <ref> (2).* Estimate the H^-1 norm of T f by Theorem <ref> (1)T f_H^-1(Ω;^d')= L ũ_H^-1(Ω;^d')≤ Cũ_H^1(Ω;^d'),where C depends on Ω and A. The natural embedding H^1(Ω)↪ H^2(Ω) with estimates (<ref>) and (<ref>) leads to the boundedness of T, that is,T f_H^-1(Ω;^d')≤ C ũ_H^1(Ω;^d')≤ Cũ_H^2(Ω;^d')≤ Cf_L^2(Ω;^d'),where C's are different from one inequality to another and depending on Ω, χ, A̅.* By direct calculation with the structure theorem (Theorem <ref>), Tf-f satisfies for all α∈{1,…,d'}(Tf-f)^α=(Lũ-L̃ũ)^α=-∑_β=1^d'(χ^αβ+-χ^αβ-)ν_χ^αβA̅^αβDũ^β^d-1 in the sense of Radon measure. Moreover, Tf-f∈ H^-1(Ω;^d') as a consequence of the Theorem <ref> (1) and (2). In fact, we have with some C depending on Ω, χ, A̅:Tf-f_H^-1(Ω;^d')≤Tf_H^-1(Ω;^d')+f_H^-1(Ω;^d')≤ Cf_L^2(Ω;^d'). * By Theorem <ref> and <ref>, there are α_0∈{1,…,d'} and v_δ∈ C_c^∞(Ω;^d') such that ∑_β=1^d'μ(Ω;χ^α_0β,A̅^α_0β,v_δ^β)>0.By setting f=L̃v_δ∈ L^2(Ω;^d'), we have (Tf-f)^α_0=-∑_β=1^d'(χ^α_0β+-χ^α_0β-)ν_χ^α_0βA̅^α_0βDv_δ^β^d-1in the sense of Radon measure. Let Ω_n={x∈Ω (x,∂Ω)>1/n} and use Lemma <ref> to choose a cutoff function ζ_n such that Ω_n≺ζ_n≺Ω. Then for sufficiently large n⟨ (f-Tf)^α_0, ζ_n⟩_H^-1(Ω),H^1_0(Ω) = ∑_β=1^d'μ(Ω;χ^α_0β,ζ_n A̅^α_0β,v_δ^β)=∑_β=1^d'μ(Ω;χ^α_0β, A̅^α_0β,v_δ^β)-∑_β=1^d'μ(Ω;χ^α_0β,(1-ζ_n) A̅^α_0β,v_δ^β)≥∑_β=1^d'μ(Ω;χ,A̅,v_δ)-2d'χ_maxΛ̅Dv_δ_L^∞(Ω)^d-1((Ω\Ω_n)∩ J_χ)>0,which implies Tf≠ f. The last statement immediately implies that for specific f, the deviation of solution ũ from u occurs as follows. Suppose that Assumption <ref> holds with d'≥1. Then there exists f∈ L^2(Ω;^d') such that u-ũ_H^1(Ω;^d')>0, where u and ũ are solutions to the original equation/system (<ref>) and modified equation/system (<ref>), respectively.By Proposition <ref> (4), there is f∈ L^2(Ω;^d') such that Tf-f_H^-1(Ω;^d')>0, combining Theorem <ref> (2), there is C>0 such that u-ũ_H^1(Ω;^d')≥1/CTf-f_H^-1(Ω;^d')>0,and this completes the proof.Obviously, when d'=1, we only need Assumption <ref>. 
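To complement the deviation result above, the following short sketch (again an illustration of ours; the data f≡1 is a hypothetical choice, not the data constructed in the proof) computes u and ũ in the one-dimensional piecewise-constant setting by direct quadrature and exhibits a visible gap between them.

import numpy as np

# Assumed 1-D setting: Omega = (-1, 1), A = 1/2 on (-1, 0), A = 1 on (0, 1), data f = 1.
# The weak solution u of -(A u')' = f keeps the flux A u' continuous, while the solution
# u_tilde of the modified equation -A u_tilde'' = f keeps u_tilde' itself continuous,
# and the two differ whenever u_tilde'(0) != 0.
x = np.linspace(-1.0, 1.0, 20001)
A = np.where(x < 0.0, 0.5, 1.0)
f = np.ones_like(x)

def antideriv(g):                       # cumulative trapezoidal integral from x = -1
    return np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(x))))

F = antideriv(f)                        # original equation: A u' = c - F, c fixed by u(1) = 0
c = antideriv(F / A)[-1] / antideriv(1.0 / A)[-1]
u = antideriv((c - F) / A)

G = antideriv(f / A)                    # modified equation: u_tilde' = c_t - G, c_t fixed by u_tilde(1) = 0
c_t = antideriv(G)[-1] / (x[-1] - x[0])
u_t = antideriv(c_t - G)

print("max |u - u_tilde| on the grid:", np.max(np.abs(u - u_t)))
print("u_tilde'(0) (non-zero, hence Tf != f):", c_t - G[np.argmin(np.abs(x))])

Replacing f by the data of Example <ref> below (f=-1 on (-1,0) and f=-2 on (0,1)) makes the same computation return D_xũ(0)=0, and the gap disappears, consistent with the characterization of (T-I) established in the next subsection.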
§.§ Deviation occurs generically Based on Proposition <ref>, we further study the eigenvalue and eigenspace of T in this subsection.We recall that an eigenvalue could be a complex number in general. Hence, for the full consideration of the eigenvalue problem, we would like to enlarge the space of f from X to the complex-valued one, namely X̅:={L̃w w∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') }. Meanwhile, we extend the operator T to T̅: X̅→ H^-1(Ω;^d'). But for notational simplicity, we still write T for T̅. This will not cause any ambiguity, since we only need this extension for the discussion on eigenvalues. Similar remark applies to the system case in Theorem <ref>.No matter whether we consider the eigenvalue problem over the complex field or the real field, the only eigenvalue of T is 1, as shown in Theorem <ref>. Hence there is no harm to regard all functions to be real-valued, and it is consistent to our situation, that is, to study the real-valued PDE problems.Next, to characterize the implicit bias via RM-transformation, we only need to consider the real-valued kernel (T-I) which consists of all RM-invariant data f. Thus the X\(T-I) consists of data f satisfying u≠ũ. In the following theorems (Theorem <ref>, we show that the latter is more generic, and hence RM method fail for almost all data f in equations considered in this paper. Suppose that Assumption <ref> and <ref> hold with d'≥ 1 and ⋃_α,β=1^d'J_χ^αβ is not dense in Ω. Then we have * σ(T)={1};* (T-I)={f∈ X ∃ wsuch that L̃ w=f, ∀α ∑_β=1^d'μ(·;χ^αβ,A̅^αβ,w)=0};* L^2(Ω;^d')\(T-I) is dense in L^2(Ω;^d');* X is a closed subspace of L^2(Ω;^d') and X\Ker(T-I) is relatively open in L^2(Ω;^d'). As above, we also note that ∑_β=1^d'μ(·;χ^αβ,A̅^αβ,w)=0, ∀α∈{1,…,d'} for all ^d-1 measurable set B is equivalent to ∑_β=1^d'A̅^αβDw^β· D^jχ^αβ=0, ∀α∈{1,…,d'} as Radon measures.  * For any f∈X̅, let ũ be the solution to (<ref>) with data f. For z∈\{1}, we have for all α∈{1,…,d'}Tf^α-zf^α=(1-z)f^α+∑_β=1^d'A̅^αβDũ^α· D^jχ^αβin the sense of Radon measure.Notice that the measures (1-z)f^α and ∑_β=1^d'A̅^αβDũ^α· D^jχ^αβ are mutually singular to each other. HenceTf-zf=0is equivalent to(1-z)f^α=0and ∑_β=1^d'A̅^αβDũ^α· D^jχ^αβ=0,∀α∈{1,…,d'}.Therefore for all α, f^α=0. By Assumption <ref>, we have ũ =0.For z=1, we obtain for all α∈{1,…,d'}Tf^α-f^α=∑_β=1^d'A̅^αβDũ^β· D^jχ^αβ.Since ⋃_α,β=1^d'J_χ^αβ is not dense in Ω, there is a open ball B_r(x) such that B_r(x)∩ (⋃_α,β=1^d'J_χ^αβ)=∅. By choosing B_r/2(x)≺ũ≺ B_r(x) and f=L̃ũ, we obtainTf-f=0.Hence σ(T)={1}.* This is obvious.* By Theorem <ref>, there are α_0 and v_δ∈ C_c^∞(Ω;^d') such that∑_β=1^d'μ(Ω;χ^α_0β,A̅^α_0β,v_δ^β)≠ 0.By setting g=L̃v_δ, it is easy to see that Tg≠ g. For each f∈(T-I), let f_=f+g/g_L^2(Ω;^d'). We have f_∉(T-I) for all >0 and that lim_→ 0f_=f. Hence X\(T-I) is dense in X. Therefore L^2(Ω;^d')\(T-I) is dense in L^2(Ω;^d').* We show that X is closed under L^2(Ω;^d') norm. By Theorem <ref>, there is C>0L̃w_L^2(Ω;^d')≤ Cw_H^2(Ω;^d').By choosing a Cauchy sequence {f_k}_k=1^∞⊆ X with f_k=L̃ũ_k, there is f∈ L^2(Ω;^d') and ũ∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') such thatlim_k→∞f-f_k_L^2(Ω;^d') =0, lim_k→∞ũ-ũ_k_H^2(Ω;^d') =0.Now we show that L̃ũ=f, ^d-a.e. and that ũ_H^2(Ω;^d')≤ C f_L^2(Ω;^d').Note that L̃ũ-f_L^2(Ω;^d') ≤L̃ũ-L̃ũ_k_L^2(Ω;^d')+L̃ũ_k-f_L^2(Ω;^d')≤ Cũ-ũ_k_H^2(Ω;^d')+f_k-f_L^2(Ω;^d').Taking k→∞, we obtain L̃ũ-f_L^2(Ω;^d')=0. 
Moreover,ũ_H^2(Ω;^d') ≤ũ_k-ũ_H^2(Ω;^d')+ũ_k_H^2(Ω;^d')≤ Cf_k_L^2(Ω;^d')+ ũ_k-ũ_H^2(Ω;^d')≤ Cf_L^2(Ω;^d')+Cf-f_k_L^2(Ω;^d')+ ũ_k-ũ_H^2(Ω;^d').Taking k→∞ again, we have ũ_H^2(Ω;^d')≤ Cf_L^2(Ω;^d'). Hence C is a closed set.Now by letting {f_k}_k=1^∞⊆ X∩(T-I) be a Cauchy sequence under L^2 norm,there is an f such thatlim_k→∞f_k=f∈ X.In the rest, we show f∈(T-I).Also let u and u_k be the solution to (<ref>) with data f and f_k, respectively. Since f_k∈(T-I) for all k, then u_k=ũ_k, ^d-a.e..By (<ref>), we have:u-u_k_L^2(Ω;^d') ≤ Cf-f_k_L^2(Ω;^d'),ũ-ũ_k_L^2(Ω;^d') ≤ Cf-f_k_L^2(Ω;^d'),which indicates that u=ũ, ^d-a.e.. Thus Tf=Lũ=Lu=f,which means Tf=f and hence f∈(T-I). When d'=1, we have X=L^2(Ω) and when d'≥ 1, X is the largest space that makes equation (<ref>) solvable. Theorem <ref> (3) and (4) together claim that most of f in X make deviation occur.To illustrate Theorem <ref>, we show in the following example that for d'=1 and f∈(T-I), we have u=ũ and the numerical solution u_θ matches them quite accurately. Of course, such f should be very rare according to Theorem <ref>.[back to 1-d] We again consider (<ref>) with coefficients and right hand side as follows:A(x)={ 12,x∈ (-1,0), 1,x∈ [0,1), .f(x)={-1,x∈ (-1,0),-2,x∈ [0,1). .For this problem, we have u=ũ=x^2-1 and hence D_xũ = 2x which satisfies D_xũ(0) =0. Thus by Theorem <ref>, Tf=f, which means the equalities hold: u=ũ. Here the numerical simulation is under the same setting as that of Section <ref>.The numerical simulation indicates u=ũ≈ u_θ, which verifies Theorem <ref>, though only for very rare f. The behavior of RM method solving linear PDEs satisfying Assumption <ref> can be summarized as follows: if μ(·;χ,A̅,ũ)=0, then u_θ≈ũ_θ≈ũ = u,if μ(·;χ,A̅,ũ)≠ 0, then u_θ≈ũ_θ≈ũ≠ u.§.§ Deviation occurs severely As we studied in Section <ref>, for f∉(T-I), the deviation occurs, that is u-ũ_H^1(Ω)>0. From this, it is still unknown whether the relative deviation sup_f∈ L^2(Ω)ũ-u_H^1(Ω)/ũ_H^1(Ω) is bounded or not.In this section, we step further and prove that the supremum could be infinity, i.e., sup_f∈ L^2(Ω)ũ-u_H^1(Ω)/ũ_H^1(Ω)=+∞ (See Theorem <ref>). Furthermore, we will obtain a stronger result (See Proposition <ref>) showing that even for data f sufficiently close to the RM-invariant subspace (T-I), the relative deviation can still achieve have a finite value. Suppose that Assumption <ref> holds with d'≥ 1. For the countably many (d-1)-dimensional C^1 manifolds {S_i^αβ}_i=1^∞ such that ^d-1(⋃_α,β=1^d'(J_χ^αβ-⋃_i=1^∞S^αβ_i))=0 (see in Theorem <ref>), suppose that there is a constant r_0>0 such that for (i,α_1,β_1) ≠ (j,α_2,β_2), (S^α_1β_1_i,S^α_2β_2_j)=inf_x∈ S^α_1β_1_i,y∈ S^α_2β_2_jx-y≥ r_0. Thensup_f∈ L^2(Ω;^d')ũ-u_H^1(Ω;^d')/ũ_H^1(Ω;^d')=+∞. where u and ũ are solutions to the original system (<ref>) and modified system (<ref>) with data f∈ L^2(Ω;^d'), respectively.Let's start with the case d'=1: ByTheorem <ref>, we haveũ-u_H^1(Ω)/ũ_H^1(Ω)≥ C Tf-f_H^-1(Ω)/ũ_H^1(Ω)=C(χ^+-χ^-)ν_χA̅Dũ^d-1_H^-1(Ω)/ũ_H^1(Ω),where for the last equality, we refer readers to Proposition <ref>. Combining Theorems <ref> and <ref>, there is a v_δ=ρ_δ*v∈ C_c^∞(Ω) with v=ζ V∈ C_c^1(Ω),where ρ_δ, ζ and V are defined according to the proofs of Theorems <ref> and <ref>. 
Since v_H^1(Ω)>0 and μ(Ω;χ,A̅,v)>0, we have(χ^+-χ^-)ν_χA̅Dv^d-1_H^-1(Ω)>0.Proof of the theorem when (χ^+-χ^-)ν_χA̅Dv^d-1_H^-1(Ω)=+∞.For each k∈^+, there is a φ_k∈ C_c^∞(Ω) such thatμ(Ω; χ, φ_kA̅, v)≥ kφ_k_H^1(Ω).We can choose sufficiently small δ such that μ(Ω; χ, φ_kA̅,v_δ-v)≤φ_k_H^1(Ω),because v_δ=ρ_δ*v→ v uniformly and φ_k_L^∞(Ω)<+∞. Henceμ(Ω; χ, φ_kA̅, v_δ)≥ (k-1)φ_k_H^1(Ω).Therefore(χ^+-χ^-)ν_χA̅Dv_δ^d-1_H^-1(Ω)≥ k-1.Moreover lim_δ→ 0v_δ_H^1(Ω)=v_H^1(Ω), which indicates thatlim_δ→ 0ũ-u_H^1(Ω)/ũ_H^1(Ω)=+∞,where u, ũ are solutions to equation (<ref>) and (<ref>) respectively with the data f=L̃v_δ. Proof of the theorem when (χ^+-χ^-)ν_χA̅Dv^d-1_H^-1(Ω)<+∞. Recall thatV(x)=(x_τ)_d-h(x'_τ),where x'_τ=((x_τ)_1,…,(x_τ)_d-1), h is a C^1 function, and for all x∈ S∩ U_z, V(x)=0, and the unit normal vector at x∈ S∩ U_z is ν_χ=DV/DV by proof of Theorem <ref>. For each r∈ (0,1], there is a functionsuch thatD=1/rDv,for all x∈ S∩ U_z. By our construction in the proof of Theorems <ref> and <ref>, v=ζ V is supported in O^ and v_δ is supported in O^2 with=δ < 1/4min((Ψ(Q̅_k), ∂ U_z), r_0).Since V(x)=0, x∈ S∩ U_z and DV_L^∞(O^)≤ C according to (<ref>), then V(x)≤ 2C, x∈ O^2.By choosing ≤_1 with a sufficiently small _1>0, we can define for 0<r≤ 1_r={(y',y_d) ∃ω∈ such that Ψ(y',ω)∈ O^2, y_d≤ 2Cr}=× [-2Cr,2Cr],where ={y' ∃ω∈ such that Ψ(y',ω)∈ O^2} and O^2⊆Ψ(_1) ⊆ U_z. Thus we have v_δ_H^1(Ω)=v_δ_H^1(Ψ(_1)).Consider the integration over _1∫_Ψ(_1)v_δ(x)x =∫_Ψ(_1)[ρ_δ*(ζ V)](x)x=∫_∫_-2C^2C[ρ_δ*(ζ V)]∘τ^-1(y',y_d+h(y'))y_dy'.Denote (y)=[ρ_δ*(ζ V)]∘τ^-1∘ψ(y) with y=(y',y_d) and consider the rescaled function (y) defined on _r and (x) defined on x∈ U_z respectively as(y)=(y',y_d)=(y',1/ry_d)=[ρ_δ*v]∘τ^-1(y', 1/ry_d+h(y'))and(x) =v∘τ^-1(x_τ',1/r(x_τ)_d+r-1/rh(x_τ')).It is easy to verify that ∈ H_0^1(Ω). For simplicity, we denote(x) =ζ∘τ^-1(x_τ',1/r(x_τ)_d+r-1/rh(x_τ')), (x) =V∘τ^-1(x_τ',1/r(x_τ)_d+r-1/rh(x_τ')). By the definition of V as in (<ref>), we have for x∈ U_z(x)=V∘τ^-1(x'_τ,1/r(x_τ)_d+r-1/rh(x'_τ))=1/r(x_τ)_d+r-1/rh(x'_τ)-h(x'_τ) =1/r((x_τ)_d-h(x'_τ))=1/rV(x).Hence (x)=V(x)=0 and D(x)=1/rDV(x) for all x∈ S∩ U_z. For x∈ S∩ U_z, (x_τ)_d=h(x'_τ) implies that for each x∈ S∩ U_z,(x)=ζ∘τ^-1(x'_τ,1/r(x_τ)_d+r-1/rh(x'_τ))=ζ∘τ^-1(x_τ)=ζ(x).Thus by the product rule, we have for x∈ S∩ U_zD= D+ D= D=1/rζ DV=1/rDv(x).For r∈ (0,1], _H^1(Ω)≤C/√(r) for some constant C>0 independent of r.For all x∈ O^2, we haveD=1/r DV≤1/rsup_x∈ U_zDV(x)≤C/r. With a little bit abuse of notation, weuse τ(i) to denote the i-th entry of τ((1,…,d)^). For i∈{1,…,d-1}D_τ(i) =D_τ(i)[ζ](τ^-1(x'_τ,1/r(x_τ)_d+r-1/rh(x'_τ)))   +r-1/rD_τ(d)[ζ](τ^-1(x'_τ,1/r(x_τ)_d+r-1/rh(x'_τ)))D_i[h](x_τ'),D_τ(d) =1/rD_τ(d)[ζ](τ^-1(x'_τ,1/r(x_τ)_d+r-1/rh(x'_τ))).Since h is C^1 function in U_z and τ^-1(x'_τ,1/r(x_τ)_d+r-1/rh(x'_τ))∈ O^2⊆ U_z, combining (<ref>), we have for all x∈_r,D≤d/rC/.Thus D=1/rV D≤1/r2Crd/rC/≤C/r.Combining (<ref>) and (<ref>), we obtain D(x)≤2C/r for all x∈_r. Therefore_H^1(Ω)^2 =_L^2(Ω)^2+D_L^2(Ω)^2 ≤ r∫_H^1(Ψ(_1))v^2x+C/r≤C/r,where C is independent of r.For each r∈ (0,1], there is δ>0 such that the convolution _δ=ρ_δ*∈ H_0^1(Ω)∩ H^2(Ω) satisfying(χ^+-χ^-)ν_χA̅D_δ^d-1_H^-1(Ω)>C/√(r)for some constant C>0 independent of r.Note that(χ^+-χ^-)ν_χA̅D^d-1_H^-1(Ω)=1/r(χ^+-χ^-)ν_χA̅Dv^d-1_H^-1(Ω)>0.Hence there is φ∈ C_c^∞(Ω) such thatμ(Ω;χ,φA̅,) ≥3/4(χ^+-χ^-)ν_χA̅D^d-1_H^-1(Ω)φ_H^1(Ω).Let _δ=ρ_δ*∈ H^1_0(Ω)∩ H^2(Ω), by the property of mollifier lim_δ→ 0_δ-_H^1(Ω)=0. 
We thus can choose δ≤_2 with sufficiently small _2 such that_H^1(Ω)/_δ_H^1(Ω)≥1/2 andμ(Ω;χ,φA̅,_δ) ≥1/2(χ^+-χ^-)ν_χA̅D^d-1_H^-1(Ω)φ_H^1(Ω),which implies(χ^+-χ^-)ν_χA̅D_δ^d-1_H^-1(Ω)≥1/2(χ^+-χ^-)ν_χA̅D^d-1_H^-1(Ω).Hence there is δ=<min ((Q_k,∂ U_z)/4,r_0,_1,_2) such that(χ^+-χ^-)ν_χA̅D_δ^d-1_H^-1(Ω)/_δ_H^1(Ω) =(χ^+-χ^-)ν_χA̅D_δ^d-1_H^-1(Ω)/(χ^+-χ^-)ν_χA̅D^d-1_H^-1(Ω)×(χ^+-χ^-)ν_χA̅D^d-1_H^-1(Ω)/_H^1(Ω)×_H^1(Ω)/_δ_H^1(Ω)≥1/4r(χ^+-χ^-)ν_χA̅Dv^d-1_H^-1(Ω)/_H^1(Ω)≥C/√(r)(χ^+-χ^-)ν_χA̅Dv^d-1_H^-1(Ω).Recalling (<ref>) and taking r→ 0^+, we obtainlim_r→ 0^+(χ^+-χ^-)ν_χA̅D_δ^d-1_H^-1(Ω)/_δ_H^1(Ω) =+∞.Let ũ=_δ and f=L̃_δ for sufficiently small r. Also recall the definition of u. Thus (<ref>) and (<ref>) leads to the desired unboundedness of ũ-u_H^1(Ω)/ũ_H^1(Ω).For the case of d'> 1, since Assumption <ref> holds, by Theorem <ref> (2), there is constant C>0 such that u_H^1(Ω;^d')>1/Cf^α_H^-1(Ω;^d').Thus we haveũ-u_H^1(Ω;^d')/ũ_H^1(Ω;^d') ≥(Tf)^α_0-f^α_0_H^-1(Ω;^d')/Cũ_H^1(Ω;^d')=∑_β=1^d'(χ^α_0β+-χ^α_0β-)ν_χ^α_0βA̅Dũ^β^d-1_H^-1(Ω;^d')/Cũ_H^1(Ω;^d'). Recall the Claim 1 of proof of Theorem <ref>. Let V=(0,…,V^β_0(x),…,0)^∈ C^1(U_z;^d'), v=ζ V∈ C^1(U_z;^d') where ζ∈ C_c^∞(Ω;^d') and that v_δ=ρ_δ*v. Let ũ=v_δ and u be the solutions to equation (<ref>) and (<ref>) with f=L̃ũ, we obtainũ-u_H^1(Ω;^d')/ũ_H^1(Ω;^d') ≥∑_β=1^d'(χ^α_0β+-χ^α_0β-)ν_χ^α_0βA̅Dũ^β^d-1_H^-1(Ω;^d')/Cũ_H^1(Ω;^d')=(χ^α_0β_0+-χ^α_0β_0-)ν_χ^α_0β_0A̅Dũ^β_0^d-1_H^-1(Ω;^d')/Cũ^β_0_H^1(Ω;^d').The rest of that follows the previous one, and hence we omits it.Suppose that Assumption <ref> holds with d'≥ 1 and that J_χ is not dense in Ω. For each f∈ L^2(Ω;^d'), we define f_∥ and f_ to be the projections of f onto the closed spaces (T-I) and ((T-I))^, respectively. Then there exist constants _0>0 and C>0 such that for any 0<≤_0, we havesup_f_∥_L^2(Ω;^d')=1,f__L^2(Ω;^d')=ũ-u_H^1(Ω;^d')/ũ_H^1(Ω;^d')≥ C,where u and ũ are solutions to the original equation (<ref>) and modified equation (<ref>), respectively, with data f=f_∥+f_.We stress that the constant C>0 is independent of . The proof for case of equation and system are almost the same and here we provide a proof for case of equation. Let u_1 and ũ_1 are solutions to the original equation (<ref>) and modified equation (<ref>), respectively, corresponding to the data f_∥/f_∥_L^2(Ω). Let u_2 and ũ_2 are solutions to the original equation (<ref>) and modified equation (<ref>), respectively, corresponding to the data f_/f__L^2(Ω). Here f_∥ and f_ will be determined later.By linearity, we have ũ=ũ_1+ũ_2, u=u_1+ u_2.This gives rise toũ-u_H^1(Ω)/u_H^1(Ω) =ũ_2-u_2_H^1(Ω)/u_H^1(Ω)≥ũ_2-u_2_H^1(Ω)/u_2_H^1(Ω)+u_1_H^1(Ω).We then claim that we can choose f_∥∈(T-I) to make u_1_H^1(Ω) arbitrarily small. Notice thatu_1_H^1(Ω)=u_1_H^1(Ω)/u_1_H^2(Ω)u_1_H^2(Ω)≤ Cf_∥_L^2u_1_H^1(Ω)/u_1_H^2(Ω),where the last inequality results from Theorem <ref>. Moreover, since J_χ is not dense in Ω, we can choose a cube Q_r(y) centered at y with side length r such that Q_r(y)∩ J_χ = ∅ with r to be determined later. We define for x=(x_1,x_2,…,x_d)∈ Q_r(y) u_g(x)=∏_i=1^d[cos( π(x_i-y_i)/r)+1].Thenu_g_H^1(Ω)^2≤ (8r)^d+dπ^2 (8r)^d-11/r.Since u_g_H^2(Ω)^2≥Δ u_g_L^2(Ω)^2=d(3π r)^d-1π^41/r^3,we haveu_g_H^1(Ω)/u_g_H^2(Ω)≤(2π^2d8^d-1r^d-2/π^4d(3π)^d-1r^d-4)^1/2<r.Let g=L̃u_g. It is clear that g∈(T-I). By setting f_∥=g/g_L^2, we haveu_1_H^1(Ω)=u_1_H^1(Ω)/u_1_H^2(Ω)u_1_H^2(Ω)=u_g_H^1(Ω)/u_g_H^2(Ω)u_1_H^2(Ω)≤ C r.Choose a sufficiently small r, say r≤u_2_H^1(Ω)/4C, and plugin it into (<ref>). 
We obtainũ-u_H^1(Ω)/u_H^1(Ω)>4/5ũ_2-u_2_H^1(Ω)/u_2_H^1(Ω).Notice thatu_H^1(Ω)/ũ_H^1(Ω)=u_1+ u_2_H^1(Ω)/u_1+ũ_2_H_1(Ω)≥u_2_H^1(Ω)-u_1_H^1(Ω)/ũ_2_H_2(Ω)+u_1_H_1(Ω)≥u_2_H^1(Ω)-Cr/ũ_2_H_1(Ω)+Cr >1/2.This with (<ref>) gives rise toũ-u_H^1(Ω)/ũ_H^1(Ω)>2/5ũ_2-u_2_H^1(Ω)/u_2_H^1(Ω). The last right hand side is a constant, and hence the proof is completed. §.§ Implicit bias towards the solution to the modified equationFor a given function w, the population risk in the residual minimization method for solving the modified equation/system reads asR̃(w)=∫_Ω(L̃w-f)^2x + γ∫_∂Ω(Bw-g)^2x.In the following theorem and proposition, we show the implicit bias of RM methods via an energetic approach. Roughly speaking, for a function close to u, it has large (modified) risk R̃; while for any function has small (modified) risk R̃, it should be close to ũ in the sense of H^1 norm. Suppose that Assumption <ref> and <ref> hold with d'≥ 1. Let u and ũ be solutions to the original equation/system (<ref>) and modified equation/system (<ref>) with data f∈ X\(T-I), respectively. There exist constants _0>0 and C_0>0 such that for all 0<<_0 and for all v∈ B_(u), we have R̃(v)≥ C_0>0. Here B_(u)={w∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') w-u_H^1(Ω;^d')<, L̃w∈ X}.We remark that Theorem <ref> implies that there is >0 such that B_(u)∩B̃_(ũ)=∅, where B̃_(ũ)={ṽ∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') R̃(ṽ)<}.Since f∈ X\(T-I), by Theorem <ref>, there is a constant C_1 such that u-ũ_H^1(Ω;^d')≥ C_1.By linearity of the modified operator L̃, we have for all v∈ H^1_0(Ω;^d')∩ H^2(Ω;^d')(L̃v-L̃ũ)^α=(L̃v)^α-f^α.Since L̃v∈ X, by the definition of X and Assumption <ref>ũ-v_H^2(Ω;^d')≤ Cf-L̃v_L^2(Ω;^d').Let _0=C_1/2. The for all <_0 and v∈ B_(u), we havev-ũ_H^2(Ω;^d')≥v-ũ_H^1(Ω;^d')≥ũ-u_H^1(Ω;^d')-u-v_H^1(Ω;^d')>C_1/2.This together with (<ref>) implies for all v∈ B_(u)√(R̃(v))≥L̃v-f_L^2(Ω;^d')≥1/Cũ-v_H^2(Ω;^d')>C_1/2C.The proof is completed by setting C_0=C^2_1/4C^2.Suppose that Assumption <ref> and <ref> hold with d'≥ 1. Let ũ be the solution to the modified equation (<ref>) with data f∈ X.* Then for all w∈ H^1_0(Ω;^d')∩ H^2(Ω;^d'), there is constant C>0 such that w-ũ_H^1(Ω;^d')≤ C√(R̃(w)). * Suppose that the function w∈ H_0^1(Ω;^d')∩ H^2(Ω;^d'), the random variable X is sampled from the uniform distribution over Ω, the random variable =Ω(Lw()-f)^2 with covariance ofsatisfies []< +∞. Then for any δ>0, with probability 1-δ over the choice of independent uniformly distributed data S:={x_i}_i=1^n in Ω, we have R̃(w)-R̃_S(w)≤1/√(n)√([]/δ), In particular, if R̃(w)< for some small , then w-ũ_H^1(Ω;^d')≤ C√(). (1) is a direct consequence from Theorem <ref>. And (2) is a direct consequence from Monte Carlo integration. We remark that in Proposition <ref>, for w∈ H_0^1(Ω;^d')∩ H^2(Ω;^d') with []< +∞, if R̃(w)≤, for n sufficiently large, we have w-ũ_H^1(Ω;^d')≤ C√(). Moreover, the condition []< +∞ is equivalent to the inequality (D_x^2w- D_x^2ũ)^2_L^2(Ω;^d')<+∞. Theorem <ref> and Proposition <ref> together lead to the implicit bias of RM method in solving PDE: it will bias the numerical solution against the solution to original equation (<ref>) and towards the solution to modified equation (<ref>).§ DISCUSSION: EXTENSION TO QUASILINEAR ELLIPTIC EQUATIONS In this section, we show that some of our results still work for the case of quasilinear elliptic equation. 
In Section <ref>, we introduce the equation as the Euler–Lagrange equation of a variational problem which arise typically in the materials science, together with its modified equation, and also provide some assumptions to ensure the existence. Besides, some auxiliary lemmas and discussions can be found in this subsection for later analysis. The techniques we developed in Section <ref> is also applicable to the case of quasilinear equation (See Section <ref>) and the deviation still occur in the case of quasilinear equation (See Section <ref>). §.§ Quasilinear elliptic equations with BV coefficientsAssume that Ω⊆^d is a bounded domain with C^1,1 boundary ∂Ω, and L: ^d ××Ω→ is a function taking the formL(ξ,z,x) = χ(x)W(ξ,x) - f(x)z. Consider the minimization problem of the energy functional I[u]=∫_ΩL(D u,u,x)x, that is,min_u∈ H^1_0(Ω)∫_Ω(χ W(D u,x) - fu)x.This type of problem arises from, for example, the coexistence of multiple phases in solids. In such case, u is the displacement field of a solid, f is the body force due to some external field, and χ(x) is a piece-wise constant function indicating different phases. In each phase, the stored energy density W(D u, x) can be derived from the molecular potentials by using the Cauchy–Born rule. See for example <cit.> and the references therein. We also remark that the displacement u is usually a vector field, however, for simplicity, we assume it is a scalar field in this work. The results can be extended to the vector field setting.The equation for the Euler–Lagrange equation of (<ref>) reads as{ Q[u] =f in Ω, u=0 on ∂Ω, .where Q is the quasilinear operator defined asQ[u]=-·(χ D_ξW(D u, x)).We emphasize that Q[u] -Q[v] ≠ Q[u-v] in general. Notice that (<ref>) recovers the linear elliptic equation (<ref>) when we setW(ξ,x)= 1/2ξ^A̅ξ=1/2∑_i,j=1^dA̅_ijξ_iξ_j. In the spirit of modified equation derived in Section <ref>, we shall consider the equation for the modified equation in non-divergence form as follows. {Q̃[u] =f in Ω, u= 0 on ∂Ω, .whereQ̃[u]=-(∑_i,j=1^dD_ξ_iD_ξ_jW(Du,x)D_iju+∑_i=1^dD_ξ_iD_x_iW(Du,x))χ-∑_i=1^dD_i^aχ D_ξ_iW(Du,x). To work on these quasilinear equation, we require the following technical assumptions. [BV coeffcients] Assume χ∈ SBV^∞(Ω), 0<^d-1(J_χ)<+∞, f∈ L^p(Ω) with p>d, and W∈ C^3(^d×Ω). Also assume there are constants χ_min,χ_max>0 such that for all x∈Ωχ_min≤χ(x)≤χ_max. [coercivity, convexity, boundedness] Let Q be the operator defined in (<ref>). Assume the following. * (coercivity) There exist constants c_1,c_2>0 such that for all ξ∈^d, x∈ΩW(ξ,x)≥ c_1ξ^2-c_2and 2χ_minc_1≥sup_u∈ H^1_0(Ω)u_L^2(Ω)/Du_L^2(Ω) where the right hand side is the inverse of the best Poincare constant. * (uniform convexity in ξ) There exist constants λ,Λ>0 such that for all ξ, ξ' ∈^d, x ∈Ωλξ'^2≤(ξ')^ D_ξ^2 W(ξ,x)ξ' ≤Λξ'^2. * (boundedness) There exists c_3>0 such that for allξ∈^d, x ∈Ω.W(ξ, x) ≤ c_3(ξ^2+1), D_ξ W(ξ, x) ≤ c_3(ξ+1).[boundedness, Lipschitz continuity, growth condition] Let Q̃ be the operator defined in (<ref>). Also let p>d. Assume the following. * (boundedness) There exists a non-negative function b_1∈ L^2p(Ω) satisfying D_x⊙ D_ξ W(ξ,x)≤ b_1(x)(1+ξ)for all ξ∈^d, x∈Ω;* (Lipschitz continuity) D_x⊙ D_ξ W(ξ,x) and D_ξ W(ξ,x) are c_4-Lipschitz continuous in ξ, that is, D_x⊙ D_ξ W(ξ,x)- D_x⊙ D_ξ W(ξ',x) +D_ξ W(ξ,x)-D_ξ W(ξ',x)≤ c_4ξ-ξ'for all ξ,ξ'∈^d and ^d-a.e. x∈Ω;* (growth condition) There exists a constant c_5>0 satisfyingD_x D_ξ^2 W(ξ,x)+ D_ξ^3 W(ξ,x)≤ c_5ξfor all ξ∈^d, x∈Ω. Suppose that Assumption <ref> and <ref> hold. 
Then for any f∈ L^2(Ω), there is a solution u∈ H_0^1(Ω) to the original equation (<ref>). Suppose that Assumption <ref>, <ref> and <ref> hold with p>d. For any f∈ L^p(Ω), there is a solution u∈ W_0^1,p(Ω) ∩ W^2,p(Ω) to the modified equation (<ref>).The proofs of these two theorems are left to Appendix <ref>. We emphasize that these two existence theorems are essentially proved in literature (for which we refer the readers to Appendix <ref>) under very general settings. Thus we check that the required assumptions are satisfied in our proofs of Theorems <ref> and <ref> in Appendix <ref>.Note that in the quasilinear case the solution to (<ref>) and (<ref>) may not be unique, and hence we have to use the notion of solutions sets. These solution sets as well as their Hausdorff distance are defined as follows.Suppose that Assumptions <ref>, <ref> and <ref> hold with p>max{2,d}. We define the solution sets to the original equation (<ref>) and modified equation (<ref>) with data f∈ L^p(Ω). More precisely, we define= {u u∈ H^1_0(Ω)solves equation (<ref>) with data f},= {ũ ũ∈ W^1,p_0(Ω)∩ W^2,p(Ω)solves equation (<ref>) with data f}._H(,) = sup_ũ∈inf_u∈u-ũ_H^1(Ω), As we will see in Proposition <ref>, for a given f∈ L^p(Ω),is closed in H^1(Ω) norm. Thus we conclude that (1) if _H(,)=0, then for each ũ∈, by the closeness ofthere is a u∈ such that u=ũ ^d-a.e. x∈Ω, which means the ũ will not deviate from u; (2) if _H(,)>0, which means \≠∅, then there is ũ deviated from u.Therefore, _H(,) characterize how muchdeviates from . Moreover, if , are both singletons, the above discussion reduces to the deviation results studied in Section <ref>. §.§ Singularity in quasilinear elliptic equationIn this section, we apply the removable singularity (Theorem <ref>) to the case of quasilinear equation, and obtain Theorem <ref> which is parallel to Theorems <ref> and <ref>. And to obtain the estimates in the quasilinear case, we need the following assumption.Assume that there exists a constant c_6>0 such that for all ξ∈^d, x∈ΩD_ξW(ξ,x)≤ c_6ξ.Also assume that the original equation with data f=0, { -·(χ D_ξW(Du,x)) =0 in Ω, u= 0 on ∂Ω, .has a unique solution, namely u_0= 0 on Ω. Suppose the Assumption <ref> and <ref> hold. Then u_0=0 is also a solution to the modified equation (<ref>) with data f=0. Consider the divergence in (<ref>) and note that χ D_ξ_iW(Du_0,x)∈ SBV(Ω) for all i∈{1,…,d}. By the structure theorem, namely Theorem <ref>, we have:-(∑_i,j=1^dD_ξ_iD_ξ_jW(Du_0,x)D_iju_0+∑_i=1^dD_ξ_iD_x_iW(Du_0,x))χ-∑_i=1^dD_iχ D_ξ_iW(Du_0,x)=0,where D_iχ=D_i^aχ+D_i^jχ. Since D_ξW(ξ,x)≤ c_6ξ, this leads to D_ξ_iW(Du_0,x)=0. Thus-(∑_i,j=1^dD_ξ_iD_ξ_jW(Du_0,x)D_iju_0+∑_i=1^dD_ξ_iD_x_iW(Du_0,x))χ-∑_i=1^dD^a_iχ D_ξ_iW(Du_0,x)=0,and the proof is completed. Suppose that Assumption <ref>, <ref>, <ref> and <ref> hold. Then there exists v_δ∈ C_c^∞(Ω) such that ∫_J_χ(χ^+-χ^-)ν_χ^ D_ξW(Dv_δ,x) ^d-1> 0.Since Assumption <ref> holds, we have D_ξW(0,x) =0 for all x∈Ω. By using the Newton–Leibniz theoremD_ξW(Du,x)=D_ξW(0,x)+∫_0^1/tD_ξW(tDu,x)t=∫_0^1A_u^tDut,where in the second equality we use the fact that D_ξW(0,x)=0, and A_u^t is a matrix-valued function with entries defined as [A_u^t]_ij = [D_ξ_iD_ξ_jW(tDu,x)]_ij, t∈[0,1]. 
Thus ∫_J_χ(χ^+-χ^-)ν_χ^ D_ξW(Du,x) ^d-1=μ(Ω; χ, ∫_0^1A_u^tt, Du).Notice that, by Assumption <ref> (2), we have for all ξ∈^d, t∈ [0,1]λξ^2 ≤ξ^A_u^tξ≤Λξ^2.By letting Υ[u]=∫_0^1A_u^tt and Theorem <ref>, there is v∈ C_c^1(Ω) such thatμ(Ω; χ, ∫_0^1A_v^tt, Dv)>0,and this is equivalent to∫_J_χ(χ^+-χ^-)ν_χ^ D_ξW(Dv,x) ^d-1>0.Let v_δ =v *ρ_δ with δ<1/4(Ψ(Q̅_k),∂ U_z)), we claim thatlim_δ→ 0∫_J_χ(χ^+-χ^-)ν_χ^(D_ξW(Dv,x)-D_ξW(Dv_δ,x)) ^d-1=0.To prove (<ref>), we use the Newton–Leibniz theorem again:D_ξ_iW(D v, x)=D_ξ_iW(D v_δ, x)+∫_0^1d/d t D_ξ_iW(D v_δ+t(D v-D v_δ, x) d t.=D_ξ_iW(D v_δ, x)+∫_0^1∑_j=1^d D_ξ_iD_ξ_jW(D v_δ+t(D v-D v_δ), x)(D_j v -D_j v_δ)t.HenceD_ξW(Dv,x)-D_ξW(Dv_δ,x)=∫_0^1A_v,v_δ^t D(v-v_δ)t,where A_v,v_δ^t= {D_ξ_iD_ξ_jW(D v_δ+t(D v-D v_δ), x)}_ij. Recall that Dv_δ→ Dv uniformly as δ→ 0. Thus∫_J_χ(χ^+-χ^-)ν_χ^(D_ξW(Dv,x)-D_ξW(Dv_δ,x)) ^d-1 = ∫_J_χ(χ^+-χ^-)ν_χ^(∫_0^1A_v,v_δ^t D(v-v_δ)t) ^d-1 ≤2χ_L^∞(Ω)^d-1(J_χ)ΛDv-Dv_δ_L^∞(Ω),which means lim_δ→ 0∫_J_χ(χ^+-χ^-)ν_χ^(D_ξW(Dv,x)-D_ξW(Dv_δ,x)) ^d-1=0.Combining (<ref>) and (<ref>), we have∫_J_χ(χ^+-χ^-)ν_χ^ D_ξW(Dv_δ,x) ^d-1> 0,for δ small enough. Hence the proof is completed. §.§ Deviation occurs for quasilinear elliptic equationsWe show that the deviation is not exclusive for the linear problems. The deviation occurs for the quasilinear elliptic equations. We start with some properties of the solution set .Suppose that Assumption <ref>, <ref> and <ref> hold with p>max{2,d}. Given any f∈ L^p(Ω), we haveis closed under H^1 norm. Let {u_k}⊆⊆ H^1_0(Ω) be a convergent sequence which belong to the solution set to equation (<ref>) with data f. According to the completeness of H_0^1(Ω), there exists a u∈ H_0^1(Ω) such thatlim_k→∞u-u_k_H^1(Ω)=0. Let g = Q[u]∈ H^-1(Ω). To proveis closed under H^1 norm, we only require g=f.Using the Newton–Leibniz theoremD_ξ_iW(Du,x)-D_ξ_iW(Du_k,x)=∫_0^1A_u,u_k^t D(u-u_k)t.Thus for each φ∈ H^1_0(Ω) ⟨ g-f,φ⟩_H^-1(Ω),H^1_0(Ω) =∫_Ωχ(x)Dφ[D_ξW(Du,x)-D_ξW(Du_k,x)]x≤ C φ_H^1_0(Ω)u-u_k_H^1_0(Ω),with right hand term tends to 0 as k→∞. Hence u is a solution to equation (<ref>) with right hand term as f. Suppose that Assumption <ref>, <ref>, <ref> and <ref> hold with p>d. Then there exists f∈ L^p(Ω) such that _H(,)>0 whereandare solution sets to the original equation (<ref>) and modified equation (<ref>), respectively.By Thoerem <ref>, there is a v_δ∈ C_c^∞(Ω) such that ∫_J_χ(χ^+-χ^-)ν_χ^ D_ξW(Dv_δ,x)^d-1> 0,which means (Q̃[v_δ]-Q[v_δ])(Ω)=∫_J_χ(χ^+-χ^-)ν_χ^ D_ξW(Dv_δ,x)^d-1> 0. Here we regard Q̃[v_δ]-Q[v_δ] as a Radon measure.Let f= Q[v_δ] and g=Q̃[v_δ]. We claim that v_δ is not any solution toQ [u] =g ,u = 0.Otherwise, v_δ is a solution to the above equation and we have f=Q[v_δ]=g, which leads to a contradiction. Hence v_δ∉(g). By Proposition <ref>, (g) is closed under H^1 norm. Thus (v_δ,(g))>0, and hence _H((g),(g))>0. Now we conclude this work with a theorem showing the relative deviation can be very large, for the quasilinear elliptic equations, just as the linear case.Suppose that Assumption <ref>, <ref>, <ref> and <ref> hold with p>d. Also suppose that there are a constant r_0>0 and a countably many (d-1)-dimensional C^1 manifolds {S_i}_i=1^∞ such that ^d-1(J_χ\(⋃_i=1^∞S_i))=0 and that for i≠ j, (S_i,S_j)=inf_x∈ S_i,y∈ S_jx-y≥ r_0>0. Thensup_f∈ L^p(Ω)sup_ũ∈inf_u∈ũ-u_H^1(Ω)/ũ_H^1(Ω)=+∞, whereandare the solution sets to the original equation (<ref>) and modified equation (<ref>) with data f∈ L^p(Ω), respectively. Recall thatQ[ũ]-Q[u]=(χ^+-χ^-)ν_χD_ξW(Dũ,x)^d-1.which is non-zero. 
As above, by using the Newton–Leibniz theorem again, we haveD_ξW(Dũ,x)=D_ξW(Dũ,x)-D_ξW(0,x)=∫_0^1A_ũ^tDũt,where [A_ũ^t]_ij = [D_ξ_iD_ξ_jW(tDũ,x)]_ij, t∈[0,1].By using Proposition <ref>, combining the above two equations, we obtainũ-u_H^1(Ω)/ũ_H^1(Ω)>C(χ^+-χ^-)ν_χ∫_0^1A_ũ^tDũt^d-1_H^-1(Ω)/ũ_H^1(Ω).The rest of the proof is omitted and readers' can refer to the proof of Theorem <ref>. § NOTATIONFor the readers' convenience, we collect the frequently-used notations and list them in Table <ref>.Also for notation simplicity, unless specifically claimed, we abbreviate g(x) as g for univariate functions. In the table, equation is the abbreviation of boundary value problem and RM is the abbreviation of residual minimization.m3cm<|m9cm<|m2cm< List of notations. [1.5pt] NotationDefinition/MeaningRefer to (Y,·_Y) Banach space equipped with norm ·_Y S^d× dset of d× d symmetric real-valued matrices a⊙ bentry-wise multiplication, e.g., if a,b∈^d, then c=a⊙ b means c_i=a_ib_i, i={1,…,d} D_ξ⊙ D_x (F) entry-wise differentiation, e.g., for F(ξ,x) with ξ,x∈^d, D_ξ⊙ D_x (F)= (D_ξ_1D_x_1F,…,D_ξ_dD_x_dF) fright-hand-side of the PDE, (interior) data F ∘ Gcomposition of functions: (F∘ G)(x)=F(G(x)) D_i[F](x), D_iF i-th partial derivative of F evaluated at x; if no confusion, simply write D_i F=D_i[F](x)DFDF=(D_1F,…,D_d F)^ J_χ,χ^±,v_χ set of approximate jump points and related functionsDef. <ref> D^jχ,D^aχ,D^cχ absolutely continuous/jump/Cantor part of D χ Thm. <ref> d,d' input/output dimension 4*Sec. <ref> 1-2 χ_min,χ_max minimum/maximum of χ^αβ 4* 1-2 λ̅,Λ̅lower/upper bound of A̅^αβ 4* 1-2 λ,Λlower/upper bound of A 4* L, L̃ original/modified operator 3*Sec. <ref> 1-2 u, ũsolution to original/modified equation 3* 1-2 u_θ,ũ_θRM solution to original/modified equation3* μ(B;χ,Υ,φ) μ(B;χ,Υ,φ)=∫_B∩ J_χ(χ^+-χ^-)ν_χ^Υ[φ]Dφ^d-1 Sec. <ref> XX={L̃w w∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') } 5*Sec. <ref> 1-2 TfRM-transformed data Tf=Lũ 5* 1-2 TRM-transformation f↦ Lũ 5* 1-2 σ(T)spectrum of T 5* 1-2 (T-I) (T-I)=_X(T-I)={f∈ X:Tf=f} 5* θ^SVparameter obtained by supervised learning with target function u 3*Sec. <ref> 1-2 u_θ^SVneural network function with parameter θ=θ^SV 3* 1-2 u_θ^SV→RM,ũ_θ^SV→RMRM solution to original/modified equation with initial parameter θ_0=θ^SV 3* x'x'=(x_1,…,x_d-1) for x=(x_1,…,x_d)∈^d 5*Sec. <ref> 1-2 B_r(x)ball centered at x with radius r 5* 1-2 J_χ^± J_χ^±={x∈ J_χ:χ^+(x)-χ^-(x)≷ 0} 5* 1-2 U ≺ζ≺ U'ζ≥ 0 in ^d with ζ=1 in U and ζ=0 in ^d\ U' 5* 1-2 (U,U') Euclidean distance between two sets U and U' 3* X̅X̅={L̃w w∈ H^1_0(Ω;^d')∩ H^2(Ω;^d') } Sec. <ref> , set of solutions to original/modified equation of quasilinear equation 3*Sec. <ref> 1-2 _H(,) Hausdorff distance fromto3* 1-2 A_v,u^t[A_u,v^t]_ij=W_ξ_iξ_j(D u+t(D v-D u), x) and if u=0, for simplicity, write A_v,u^t as A_v^t 3* [1.5pt] § FUNCTIONS OF BOUNDED VARIATIONHere we list several definitions and theorems used in the main text.Let Ω be an open subset of ^d and χ∈ L^1(Ω). We say that χ is a function of bounded variation on Ω if the distributional derivative of χ is representable by a finite Radon measure in Ω, that is, if∫_Ωχ D_iφx=-∫_Ωφ d D_iχ∀φ∈ C_c^∞(Ω),i=1, …, dfor some ^d-valued measure D χ=(D_1χ… . D_dχ)^ in Ω. The vector space of all functions of bounded variation on Ω is denoted by BV(Ω).Let χ∈ L_loc^1(Ω). We say that χ has an approximate limit at x ∈Ω if there exists z ∈ such thatlim _ρ→ 01/B_ρ(x)∫_B_ρ(x)χ(y)-zy=0.The set S_χ of points where this property does not hold is called the approximate discontinuity set. 
For any x ∈Ω\ S_χ, z is uniquely determined by (<ref>), denoted by χ^ap(x), and called the approximate limit of χ at x. Let χ∈ L_loc^1(Ω) and x ∈Ω. We say that x is an approximate jump point of χ if there exist χ^±(x) ∈ and a unit vector ν_χ(x) ∈ S^d-1 such that χ^+(x) ≠χ^-(x) andlim _ρ→ 01/B_ρ^±(x,ν)∫_B_ρ^±(x, v)χ(y)-χ^±(x)y=0,where B^±_ρ(x,ν)={y∈ B_ρ(x) ±⟨ x-y,ν⟩>0}. The triplet (χ^+(x), χ^-(x), ν_χ(x)) is uniquely determined by (<ref>) up to a permutation of (χ^+(x), χ^-(x)) and a change of sign of ν_χ(x). The set of approximate jump points is denoted by J_χ.By Radon–Nikodym theorem, we have the representation Dχ= D^aχ +D^sχ, where D^aχ is the absolutely continuous part with respect to ^d and D^sχ is the singular part with respect to ^d.For any χ∈ BV(Ω), the jump part of the derivative D^jχ and the Cantor part of the derivative D^cχ are defined asD^jχ=D^sχ⌊_J_χ,D^cχ=D^sχ⌊_Ω\ S_χ,respectively.Then we have Dχ=D^aχ+D^jχ+D^cχ,where D^aχ=∇χ^d, ∇χ is the Radon–Nykodim density of D^aχ with respect to ^d and D^jχ= (χ^+-χ^-) ν_χ^^d-1⌊_J_χ. Let Ω be a bounded open set and χ∈ BV(Ω), then there exist countably many C^1-hypersurfaces((d-1)-dimensional C^1-manifolds){S_k}_k=1^∞ such that^d-1(J_χ\(⋃_k=1^∞S_k))=0We say that χ∈ BV(Ω) is a special function of bounded variation and we write χ∈ SBV(Ω), if the Cantor part of its derivative D^cχ is zero. Thus, for χ∈ SBV(Ω), we haveD χ=D^aχ+D^jχ=∇χ^d+(χ^+-χ^-) ν_χ^d-1⌊_J_χ.We further say that χ∈ SBV^p(Ω), p∈ [1,∞] if and only if χ∈ SBV(Ω) and ∇χ∈ L^p(Ω).For any χ∈ SBV(Ω), we have χ∈ W^1,1(Ω) ⟺^d-1(J_χ)=0.§ EXISTENCE THEOREMS FOR LINEAR ELLIPTIC EQUATIONS AND SYSTEMS IN LITERATUREIn this appendix, we rephrase some existence theorems for linear elliptic equation and system from the literature. Theorems <ref> and <ref> focus on linear divergence and nondivergence equation, respectively, while Theorem <ref> works for linear divergence system. Let Ω be a bounded C^1,1 domain in ^d, and let the operator LLu = -div· (ADu)=-∑_i,j=1^dD_i(a_ijD_j u)being uniformly elliptic in Ω with coefficients a_ij∈ L^∞(Ω), namely there is λ>0 satisfying ξ^Aξ≥λξ'^2 for all ξ∈^d.Then the equation: L u=f in Ω, u=0 on ∂Ω with f∈ L^2(Ω) (or f∈ H^-1(Ω)) is uniquely solvable. Furthermore, there is a constant C independent of u such thatu_H^1(Ω)≤ Cf_L^2(Ω) or(≤ Cf_H^-1(Ω)). Let Ω be a bounded C^1,1 domain in ^d, and the operator L̃L̃u=∑_i,j=1^da_ijD_iju+∑_i=1^db_iD_iu+cu,being uniformly elliptic in Ω with coefficients a_ij∈ C(Ω̅), b_i ,c ∈ L^∞(Ω), c≤ 0, with i, j∈{1,…,d}. If f ∈ L^p(Ω) for some p∈ (1,∞), then the equation: L̃ u=f in Ω, u=0 on ∂Ω has a unique solution u ∈ W_0^1,p(Ω)∩ W^2, p(Ω). Furthermore, there is a constant C independent of u such that u_W^2,p(Ω)≤ CL̃u_L^p(Ω). Let Ω be a bounded C^1,1 domain in ^d. Also define the operator L as follows(L u)^α = -∑_β=1^d'·(A^αβ(x)Du^β)=-∑_β=1^d'∑_i,j=1^dD_i(A_ij^αβD_ju^β),where α∈{1,2,…,d'}. Suppose that measurable functions A^αβ∈ S^d× d are symmetric in α,β, namely A^αβ=A^βα, and that the Hadamard–Legendre condition holds, namely there is λ>0 satisfying ∑_α,β=1^d'(ξ^α)^ A^αβ(x)ξ^β≥λξ^2 for all ξ^α∈^d, where ξ^2=∑_α=1^d'ξ^α^2.Then the equation: L u=f in Ω, u=0 on ∂Ω with f∈ L^2(Ω;^d') (or f∈ H^-1(Ω;^d')) is uniquely solvable. Moreover there is a constant C independent of u such thatu_H^1(Ω;^d')≤ Cf_L^2(Ω;^d')(or ≤ Cf_H^-1(Ω;^d')).§ EXISTENCE THEOREMS FOR QUASILINEAR ELLIPTIC EQUATIONS IN LITERATUREIn this appendix, we collect some well-known results on the existence of solutions to the quasilinear elliptic equations from the literature. 
In particular, we summarize these results for elliptic problems in both divergence form and non-divergence form. §.§ Divergence form Suppose now Ω⊆^d is a bounded, open set with C^1,1 boundary ∂Ω, and we are given a LagrangianL: ^d××Ω̅→.We call L=L(ξ,z,x) the Lagrangian and define the functional: I[w]=∫_Ω L(D w(x), w(x), x) x.Consider the Euler–Lagrange equation:{ -·(D_ξ L(D u, u, x))+D_zL(D u, u, x)=0 in Ω, u=0 on ∂Ω, . To guarantee the existence and uniqueness of solution, we assume the operator L taking the following properties:Assume that the following hold. * (coercivity for L)There exist constants c'_1>0 and c'_2>0 such that for all ξ∈^d, z∈, x∈ΩL(ξ,z,x)≥ c'_1ξ^p-c'_2. * (convexity in ξ) For all ξ,ξ'∈^d, we have (ξ')^ D_ξ^2L(ξ, x) ξ' ≥ 0.* (growth condition of L) There exists c'_3>0 such that for all ξ∈^d,z ∈,x ∈ΩL(ξ, z, x) ≤ c'_3(ξ^p+z^p+1), D_ξ L(ξ, z, x) ≤ c'_3(ξ^p-1+z^p-1+1) , D_z L(ξ, z, x) ≤ c'_3(ξ^p-1+z^p-1+1).   * The coercivity assumption will lead to the coervicity condition on I[·]I[w] ≥ c'_1D w_L^p(Ω)^p-c'_2. * For the convexity assumption, we say L is uniformly convex in the variable ξ, if L=L(ξ, x) does not depend on z and there is λ>0 such that for all ξ, ξ'∈^d and x∈Ω(ξ')^ D_ξ^2L_ξ_iξ_j(ξ, x) ξ' ≥λξ'^2 . Suppose that Assumption <ref> holds. Then there is a u∈ W^1,p_0, p≥ 2, which minimizes I[·], and is the weak solution to (<ref>). §.§ Non-divergence formLet g:^q×Ω→, Ω⊆^d. Then g is a Carathéodory function if * x↦ g(ξ,x) is ^d measurable on Ω for any ξ∈^q,* ξ↦ g(ξ,x) is continuous on ^q for ^d-a.e. x∈Ω.Consider the general quasilinear equation{ Q[u] = A_ij(Du, u, x) D_ij u+b(Du, u, x)=0 in Ω, u=0 on ∂Ω, .Suppose A_ij:^d××Ω→ are C^1 functions, b: ^d××Ω→ is a Carathéodory function and   * (quadratic gradient growth of b) There exists p>d, b_1∈ L^p(Ω) and a non-decreasing function v:[0, ∞) →(0, ∞) such thatb(ξ, z, x)≤ v(z)(b_1(x)+ξ^2)for ^d-a.e. x ∈Ω, ∀(ξ, z) ∈^d ×.* (monotonicity of b with respect to u) There exists a non-negative function b_2∈ L^d(Ω) such thatsign z · b(ξ, z, x) ≤λ(z) b_2(x)(1+ξ)for ^d-a.e. x ∈Ω and for all (ξ,z) ∈^d×.* (uniform ellipticity of Q) There exists a non-increasing function λ : [0,+∞)→ (0,+∞) such that for ^d-a.e. x ∈Ω, for all z ∈ and for all ξ, ξ' ∈^dλ(z)ξ'^2≤ (ξ')^ A(ξ, z, x) ξ' ≤1/λ(z)ξ'^2. * (growth conditions of A_ij) There exist p>d, b_3 ∈ L^p(Ω) and a non-decreasing function η:[0,+∞)→ (0,+∞) such that for all (ξ, z, x) ∈ ^d××ΩD_z A_ij(ξ, z, x)+D_x_k A_ij(ξ, z, x) ≤η(z+ξ) b_3(x)D_ξ_k A_ij(ξ, z, x) ≤η(z+ξ)D_ξ_k A_ij(ξ, z, x)-D_ξ_j A_ik(ξ, z, x) ≤η(z)(1+ξ^2)^-1 / 2, ∑_k=1^d(D_z A_ij(ξ, z, x) ξ_kξ_k-D_z A_kj(ξ, z, x) ξ_kξ_i +D_x_k A_ij(ξ, z, x) ξ_k-D_x_k A_kj(ξ, z, x) ξ_i)≤η(z)(1+ξ^2)^1 / 2(ξ+b_3(x)). * (local uniform continuity of b with respect to (ξ, z)) There exists p>d such that b(ξ, z, ·) ∈ L^p(Ω) for all (ξ, z) ∈^d ×, and for all M, >0 there exists δ>0 such that∫_Ωb(ξ, z, x)-b(ξ', z', x)^px <for ^d-a.e. x ∈Ω and all (ξ, z),(ξ', z') ∈^d × with z-z'+ξ-ξ'<δ and z,z',ξ,ξ'≤ M. Suppose that Assumption <ref> holds. Then there exists a solution u ∈ W^1, p_0(Ω)∩ W^2, p(Ω) of the problem (<ref>). § A SHORT PROOF OF THE FAILURE OF PINN IN 1-D To fix ideas and to understand the failure example of PINN given in Section <ref>, we provide in this appendix a succinct explanation to the failure phenomenon with one-dimensional setting Ω=(-1,1). 
More general results and the complete analysis for the case of higher dimensional problem and even for quasilinear elliptic equations are given in the main text.[uniform ellipticity condition and piece-wise continuous coefficients] Assume there are constants 0<λ<Λ satisfying λ≤ A(x)≤Λ for all x∈Ω andA(x) = a̅(x) + ∑_k=1^n_jump a_k H(x-j_k), where a̅ is an absolutely continuous function on Ω, weights a_k∈, and discontinuities {j_k}_k=1^n_jump⊆Ω for n_jump∈^+. Here H is the Heaviside step function. As the general case in the main text, we introduce the modified problem:{ -(a̅ D^2_xu+(D_xa̅) D_x u) =f in Ω, u(± 1)=0. .and denote its solution as ũ∈ H^1_0(Ω)∩ H^2(Ω). The existence and regularity are guaranteed by Theorem <ref> in the general setting. The following proposition shows that the original operator L maps ũ to the RM-transformed data f -∑_k=1^n_jumpa_kD_xũ(j_k)δ_j_k. Thus it is clear that the latter equals f if and only if D_xũ vanishes at all jump points j_k. Suppose that Assumption <ref> holds and f∈ L^2. Let ũ be the solution to problem (<ref>). Then ũ is the weak solution to { Lũ = f -∑_k=1^n_jumpa_k D_xũ(j_k)δ_j_kin Ω,ũ(± 1)=0, .where δ_j_k is the Dirac measure satisfying ⟨δ_j_k,φ⟩=δ_j_k(φ)=φ(j_k) for any φ∈ H^1_0(Ω). Without loss of generality, we assume that j_k<j_k' if 1≤ k< k'≤ n_jump, and set j_0=-1, j_n_jump+1=1. The solution ũ∈ H^2(Ω) implies that D_xũ∈ C(Ω) and AD_xũ∈ H^1((j_k,j_k+1)) for k=0,…,n_jump. Using Lebesgue integral theorem, we have∫_j_k^j_k+1D_x(AD_xũ)φx+∫_j_k^j_k+1AD_xũD_xφx = AD_xũφ|_j_k^+^j_k+1^-, k=0,…,n_jump,where AD_xũφ|_j_k^+^j_k+1^-=lim_→ 0[A(j_k+1-)D_xũ(j_k+1-) φ(j_k+1-)-A(j_k+)D_xũ(j_k+) φ(j_k+)].Since there is no jump discontinuity of A(x) for x∈Ω\{j_k}_k=1^n_jump, we have -D_x(AD_xũ) = f on (j_k,j_k+1) and∫_j_k^j_k+1AD_xũD_xφx = ∫_j_k^j_k+1fφx + AD_xũφ|_j_k^j_k+1,∀φ∈ H^1_0(Ω).Thus the proof is completed by the following calculation∫_-1^1(AD_xũ)D_xφx =∑_k=0^n_jump∫_j_k^j_k+1AD_xũD_xφx = ∑_k=0^n_jump∫_j_k^j_k+1fφx + AD_xũφ|_j_k^+^j_k+1^-= ∫_-1^1fφx -∑_k=1^n_jumpa_kD_xũ(j_k)⟨δ_j_k , φ⟩.Next we describe the gap between u and ũ in terms of the L^2 norm. As in the main text, we will show this is non-zero. Suppose that Assumption <ref> holds and f∈ L^2(Ω). Let u and ũ be the solution to the problem (<ref>) and (<ref>), respectively. If there exists k∈{1,…,n_jump} such that D_xũ(j_k) ≠ 0, then we have that: u -ũ_L^2(Ω)>0. The existence and regularity are guaranteed by Theorem <ref> in a more general setting.By Theorem <ref>, we haveL(u - ũ ) = f-Lũ = ∑_k=1^n_jumpa_k D_xũ(j_k)δ_j_k.Without loss of generality, we can choose some function φ∈ C_c^∞(Ω) such that φ(j_k) = 1 and φ(a_k') = 0 for k'∈{1,…,n_jump}\{k} and φ_H^1(Ω)>0.Since there are 0<λ<Λ such that λ≤ A(x) ≤Λ for all x∈Ω, we haveΛD_x u -D_xũ_L^2(Ω)φ_H^1(Ω)≥ ∫_-1^1(AD_x(u-ũ))D_xφx= ∫_-1^1L(u-ũ)φx = a_kD_xũ(j_k)>0.This implies u -ũ_L^2(Ω)>0.§ PROOF OF THEOREMS <REF> AND <REF> We use the calculus of variation and the proof is mainly based on Evans <cit.> pp. 451–454.(1). First, we prove there is a minimizer u∈ H^1_0(Ω) of the energy functional I[u] defined as I[u] = inf_w∈ H^1_0(Ω)I[w]. Set m = inf_w∈ H^1_0(Ω)I[w]. If m=+∞, we are done. We henceforth consider that m is finite. Choosing a sequence {u_k}_k=1^∞ such that: I[u_k]→ m. 
By the coercivity condition, we have I[u_k]= ∫_Ωχ(x)W(Du_k,x)-fwx≥χ_min(c_1Du_k^2_L^2(Ω)-c_2Ω)-1/2u_k_L^2(Ω)^2-1/2f_L^2(Ω)^2≥(χ_minc_1-1/2sup_w∈ H^1_0(Ω)w_L^2(Ω)/Dw_L^2(Ω))Du_k_L^2(Ω)^2-χ_minc_2Ω-1/2f_L^2(Ω)^2.Recall that χ_minc_1-1/2sup_w∈ H^1_0(Ω)w_L^2(Ω)/Dw_L^2(Ω)>0 by Assumption <ref>. Since m is finite, we conclude that sup_kDu_k_L^2(Ω)<+∞.By Poincare inequality, we have u_k_L^2(Ω)≤ CDu_k_L^2(Ω)<+∞, and hence {u_k}_k=1^∞ is bounded in H^1(Ω). Thus there is a subsequence {u_k_j}_j=1^∞ such that u_k_j⇀ u weakly in H^1(Ω).Notice that u_k∈ H^1_0(Ω) and H^1_0(Ω) a closed linear subspace of H^1(Ω), hence, by Mazur's Theorem, is weakly closed. Hence, u∈ H^1_0(Ω), which means u=0 on ∂Ω in the sense of trace. The existence of the minimizer is established.(2). By the way, we also have the uniqueness of minimizer due to the uniform convexity in Assumption <ref>. We omit the details since it is exactly the same as the one in Evans <cit.> pp. 451–454.(3). Finally, we claim that the minimizer is indeed a weak solution to the equation (<ref>).For any φ∈ H^1_0(Ω), define J(t)= I[u+t φ], (t∈).The difference quotient reads asJ(t)-J(0)/t = 1/t∫_Ωχ(x)(W(Du+t Dφ,x)-W(Du,x))-ft φx=∫_ΩW^t(x)x-∫_Ωfφx,where W^t(x)=1/tχ(x)(W(Du+t Dφ,x)-W(Du,x)).Thus W^t(x) →∑_i=1^dχ(x)D_ξ_iW(Du,x)φ_x_i for ^d-a.e. x∈Ω as t→ 0. Furthermore:W^t(x)=1/t∫_0^t/ t' χ(x)W(Du+t'Dφ,x)t'=1/t∫_0^t∑_i=1^dχ(x)D_ξ_iW(Du+t'Dφ,x)φ_x_it'.By the growth condition on D_ξ_iW and Cauchy–Schwartz inequality, we then have for all u, φ∈ H^1(Ω)W^t(x)≤ Cχ_L^∞(Ω)(Du^2+Dφ^2+1),where C depends on c_3. Thus by dominated convergence theorem, we have/ tJ(0) = ∫_Ω∑_i=1^dχ D_ξ_iW(Du,x)v_x_i-fvx.By our previous discussion on existence of minimizer and the existence of / tJ(0), we hence have / tJ(0)=0. Therefore, u is a weak solution to the equation (<ref>).Notice that this proof still holds even if f∈ H^-1(Ω), since we have⟨ f,φ⟩_H^-1(Ω),H^1_0(Ω)≤f_H^-1(Ω)φ_H^1(Ω)for any φ∈ H^1_0(Ω). Then the statement as well as the proof works for the energy functional defined byI[w] = ∫_Ωχ(x)W(Dw,x)x - ⟨ f,w⟩_H^-1(Ω),H^1_0(Ω),for w∈ H^1_0(Ω). Suppose that Assumption <ref> holds. For any f, f'∈ L^2(Ω), let u,u' ∈ H^1_0(Ω) be the corresponding solutions to the equation (<ref>). Then we haveu-u'_H^1(Ω)≤ Cf-f'_L^2(Ω),where C depends on Ω,χ and λ.This means the solution u is continuous with respect to the interior data f.Using the Newton–Leibniz theorem, we have for i∈{1,…,d}D_ξ_iW(D u, x)=D_ξ_iW(D u', x)+∫_0^1d/d t D_ξ_iW(D u'+t(D u-D u', x) d t.=D_ξ_iW(D u', x)+∫_0^1∑_j=1^d D_ξ_iD_ξ_jW(D u'+t(D u-D u'), x)(D_j u-D_j u')t.Thus-·[χ(x)D_ξW(Du,x)-χ(x)D_ξW(D u',x)]= f-f',⇒-·[χ∫_0^1A_u,u'^t D(u-u')t]= f-f',where A_u,u'^t is a matrix-valued function with entries (A_u,u'^t)_ij= D_ξ_iD_ξ_jW(D u'+t(D u-D u'), x). Next we use the property in Assumption <ref>, that is the uniform ellipticity of L. By multiplying u-u' and integrating on both sides, we haveu-u'_H^1(Ω)≤ Cf-f'_L^2(Ω),where C depends on Ω,χ and λ. More generally, for the H^-1(Ω) data f, we provide a similar result. (dependence on data in H^-1(Ω)) Suppose that Assumption <ref> holds. For any f, f'∈ H^-1(Ω), let u,u' ∈ H^1_0(Ω) be the corresponding solutions to the equation (<ref>). Then we have1/Cf-f'_H^-1(Ω)≤u-u'_H^1(Ω)≤ Cf-f'_H^-1(Ω),where C depends on Ω,χ, λ and Λ.First, similar to the proof of Proposition <ref>, we have ⟨ f-g, u-u' ⟩_H^-1(Ω),H^1_0(Ω) = ∫_Ωχ(x)D(u-u') ∫_0^1A_u,u'^t D(u-u')tx≥ Cu-u'^2_H^1(Ω),where the last inequality results from χ_min(x)>0 and the uniform ellipticity of A_u,u'^t for all t∈ [0,1]. 
Here the constant C depends on Ω, χ and λ. Next we obtain the inequality in the other direction. We have for any φ∈ H^1_0(Ω) with φ_H^1_0(Ω)=1⟨ f-f', φ⟩_H^-1(Ω),H^1_0(Ω) = ∫_Ωχ(x)Dφ∫_0^1A_u,u'^t D(u-u')tx≤ Cu-u'_H^1(Ω)φ_H^1(Ω),where the last inequality results from the Cauchy–Schwarz inequality and uniform ellipticity. In particular, the constant C depends on Ω, χ and Λ. According to Theorem <ref>, it is sufficient to show that Assumptions <ref>, <ref> and <ref> together imply Assumption <ref>. Equation (<ref>) is equivalent to the following problem{ -(∑_i,j=1^dD_ξ_iD_ξ_jW(Du,x)D_iju+∑_i=1^dD_ξ_iD_x_iW(Du,x)+∑_i=1^dχ^-1 D_i^aχ D_ξ_iW(Du,x)) =χ^-1f in Ω, u= 0 on ∂Ω.(1). For all ξ∈^d, x∈Ω, letb(ξ,x)= ∑_i=1^dD_ξ_iD_x_iW(ξ,x)+∑_i=1^dχ^-1D_i^aχ D_ξ_iW(ξ,x)+f/χ.By Assumption <ref> (3), we have that∑_i=1^dχ^-1D_i^aχ D_ξ_iW(ξ,x)≤1/2χ^-1D^aχ^2+1/2D_ξW(ξ,x)^2≤ C(χ^-1D^aχ^2+ξ^2+1),where C depends on c_3. By Assumption <ref> (1) and the Cauchy–Schwarz inequality, we have∑_i=1^dD_ξ_iD_x_iW(ξ,x)≤1/2b_1(x)+1/2(ξ+1)^2≤b_1(x)^2+ξ^2+1.Since f∈ L^p(Ω), b_1∈ L^2p(Ω), and D^aχ∈ L^∞(Ω), we haveb(ξ,x)≤ C(q(x)+ξ^2),where q(x)=b_1^2(x)+χ_min^-1D^aχ^2+χ_min^-1f+1∈ L^p(Ω) and C depends on c_3. This verifies Assumption <ref> (1).(2). By Assumption <ref> (2), (3) and Assumption <ref> (1), namely the control on D_ξ_iW(ξ,x) and D_ξ_iD_x_iW(ξ,x) and the uniform convexity, Assumptions <ref> (2) and (3) are satisfied.(3). By Assumption <ref> (3) and the fact that D_ξ_iD_ξ_jW(ξ,x) are independent of the variable z, the first and second inequalities of Assumption <ref> (4) hold naturally. Recall that W∈C^3(^d×Ω). Thus for all i,j,k∈{1,…,d}D_ξ_kD_ξ_iD_ξ_jW-D_ξ_jD_ξ_iD_ξ_kW=0,which implies the third inequality in Assumption <ref> (4). Now consider the last inequality in Assumption <ref> (4) and letM=∑_k=1^d(D_x_k D_ξ_iD_ξ_jW(ξ, x) ξ_k-D_x_k D_ξ_kD_ξ_jW(ξ, x) ξ_i).Thus we have the estimateM ≤2dc_5ξ^2≤ 2dc_5ξ(1+ξ^2)^1/2(ξ+b_3(x)),where we let b_3(x)=d∈ L^∞(Ω). Thus Assumption <ref> (4) is satisfied with the constant function η=2dc_5.(4). Note that∫_Ωb(ξ,x)-b(ξ',x)^px≤ I_1+I_2,whereI_1 =2^p-1∑_i=1^d∫_ΩD_ξ_iD_x_iW(ξ,x)-D_ξ_iD_x_iW(ξ',x)^px, I_2 =2^p-1∑_i=1^d∫_Ωχ^-1D_i^aχ^pD_ξ_iW(ξ,x)-D_ξ_iW(ξ',x)^px.By Assumption <ref> (2), namely the Lipschitz continuity of D_x_iW and D_ξ_iD_x_iW, we haveI_1 ≤ 2^p-1dΩc_4^pξ-ξ'^pandI_2≤ 2^p-1∑_i=1^d(∫_Ωχ^-1D_i^aχ^2px)^1/2(∫_ΩD_ξ_iW(ξ,x)-D_ξ_iW(ξ',x)^2px)^1/2≤ 2^p-1dΩc_4^pCξ-ξ'^p,where C depends on χ. By taking ξ-ξ' small enough, we obtain∫_Ωb(ξ,x)-b(ξ',x)^px≤ε,and hence Assumption <ref> (5) is satisfied. § ACKNOWLEDGEMENT This work is sponsored by the National Key R&D Program of China Grant No. 2022YFA1008200 (T. L.), the National Natural Science Foundation of China Grant No. 12101401 (T. L.), Shanghai Municipal Science and Technology Key Project No. 22JC1401500 (T. L.), Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0102, and the HPC of School of Mathematical Sciences and the Student Innovation Center, and the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University. The authors thank Yingzhou Li and Zhi-Qin John Xu for helpful discussions.
http://arxiv.org/abs/2310.18201v2
{ "authors": [ "Tao Luo", "Qixuan Zhou" ], "categories": [ "math.AP", "cs.NA", "math.NA", "35D30, 35D35, 35R05, 35R06, 65N15" ], "primary_category": "math.AP", "published": "20231027152025", "title": "On Residual Minimization for PDEs: Failure of PINN, Modified Equation, and Implicit Bias" }
Corresponding author: [email protected] Current address: Department of Engineering of Information, University of Padova, ItalyCorresponding author: [email protected] key distribution (QKD) enables private communications with information-theoretic security. To guarantee the practical security of QKD, it is essential that QKD systems are implemented in accordance to theoretical requirements and robust against side-channel attacks. Here we study a prominent attack on QKD transmitters known as the laser seeding attack (LSA). It consists in injecting photons into the laser of the transmitter in an attempt to modify the outgoing light in some way that is beneficial to the eavesdropper. In this work we measure the response of a QKD transmitter to the LSA as a function of the optical power injected, allowing us to quantify the level of optical attenuation required to mitigate the attack. Further, we employ a laser rate equation model to numerically simulate the effects of the LSA on a gain-switched laser. With this model we are able to reproduce previous experimental results, as well as generate new insight into the LSA by examining the effects of the LSA when the QKD transmitter is operated with different laser current driving parameters.Quantified Effects of the Laser Seeding Attack in Quantum Key Distribution A. J. Shields January 14, 2024 ==========================================================================§ INTRODUCTIONQuantum key distribution (QKD) is a mature quantum technology that can be used to establish a secret key between two communicating parties, conventionally referred to as “Alice” and “Bob” <cit.>. QKD enables communications with information theoretic security and has therefore attracted great interest in the face of the threat posed by quantum computers to public-key cryptography <cit.>. The security of QKD rests on mathematical proofs that make certain assumptions about the physical systems that implement the protocol <cit.>.It is therefore crucial that these physical systems conform to the assumptions made by the theory. During the last two decades of QKD research there has been a two-sided effort to bridge this gap between theory and practice: on the one hand QKD security proofs and protocols have advanced, making fewer and more realistic assumptions <cit.>; on the other, physical QKD systems have moved closer to theoretical requirements by implementing countermeasures to known security vulnerabilities <cit.>.QKD implementation security vulnerabilities can be broadly classified as targeting either the transmitter or the receiver. Historically, the most serious proposed attacks against QKD systems have targeted the receiver. However, the development of measurement-device-independent QKD and its variants has provided a solution to all known and possible vulnerabilities in the receiver <cit.>. The focus of QKD implementation security has therefore shifted toward the transmitter <cit.>.In this work we study a prominent attack on QKD transmitters known as the laser seeding attack (LSA) <cit.>.This attack consists of an eavesdropper, named “Eve”, injecting light into the laser of a QKD transmitter to try to change the emitted light in some way that is beneficial to her (Fig. <ref>). For example, it was demonstrated in Ref. 
<cit.>how the LSA can violate the assumption of a phase-randomized source: by seeding the QKD transmitter laser with light of a known phase, the LSA can give an eavesdropper full knowledge of the phase of the outgoing light, which drastically reduces the performance of a QKD system. In Ref. <cit.> the LSA was implemented with use of only two different levels of injected power. Additionally, the LSA was implemented with use of an isolated laser diode. The isolation of an isolated laser diode is difficult to characterize, and so it is unclear exactly how much light was reaching Alice's laser cavity. Altogether, this means that we do not know accurately how the effects of the LSA depend the level of injected power (that reaches the laser cavity). This is a crucial consideration for Alice when she is implementing countermeasures to the attack: if Eve must inject a large amount of light for a successful attack, then the attack will be easier to detect. However, if Eve can implement a successful LSA with only a small amount of injected power, it will be more difficult to detect or prevent. In the first part of this work (Section <ref>), we implement the LSA experimentally and measure its effects on the phase randomization of Alice's laser as a function of the injected optical power. We accurately measure the level of injected power that reaches Alice's laser cavity by using an unisolated laser, and measuring the incoming light with a power meter. We find that Eve can influence the phase of Alice's laser using much lower levels of injected power than may have been considered previously <cit.>. Additionally, by accurately measuring how the effects of the LSA vary with increasing injected power, we are able to quantify the level of optical isolation required to limit the effects of the LSA on Alice's laser. This is of obvious practical relevance to the secure implementation of QKD systems. In Ref. <cit.> it is also suggested that the LSA could have other damaging effects apart from derandomizing the phase. Indeed, it is demonstrated in Ref. <cit.>that the LSA can also increase the power output of a QKD transmitter, and the effect of this increase on the secure key rate is quantified. However, there are still other effects of the LSA, such as a shift in the wavelength of Alice's laser <cit.>, a reduced turn-on delay, or a change in the shape of the emitted pulse, and Eve can try to use any of these to her advantage. In the second part of this work (Sections <ref> and <ref>) we use a laser rate equation model that allows us to simulate the output of a laser subjected to the LSA by modeling it as a form of optical injection locking (OIL). We argue that the laser rate equations are a useful tool for the study of the LSA and can be used as a general model of the LSA to investigate all of its various effects. To demonstrate this usefulness, we use the model to reproduce, in simulation, previous published experimental results in the literature <cit.>. Further, we use the model to generate new insight into the effects of the LSA on QKD transmitters by examining how its effects on the power output of a gain-switched laser vary under different laser current driving conditions.§ EFFECTS ON PHASE RANDOMIZATION QKD has been proven to be secure with and without phase randomized signals <cit.>. However, the use of nonrandom phase leads to a significantly worse performance. Hence, current implementations of QKD use phase randomized pulses of light. 
The phase randomization can be achieved with the use of a phase modulator connected to a cryptographically secure random number generator, but this adds cost and complexity to a QKD system. A widely used, simple, and effective alternative to adding a phase modulator is to use a gain-switched laser diode to generate the pulses of light <cit.>. Gain-switching consists in driving a laser diode alternately above and below its lasing threshold. When the laser is below threshold, the cavity empties, such that when the laser is next driven above threshold, the pulse that is generated acquires a random phase from spontaneously emitted photons in the cavity <cit.>. Gain-switched lasers can produce short, naturally phase randomized pulses of light at gigahertz clock rates by adjustment of only the current signal driving the laser, and are therefore widely used in modern QKD implementations <cit.>. We therefore assume that the target of the LSA is a gain-switched laser diode, as widely used in current real-world QKD systems <cit.>. The current parameters of a gain-switched laser diode must be set carefully to ensure that the laser cavity fully empties between each pulse. If photons from a previous pulse are still present in the cavity when the laser is driven above threshold, then the new pulse will inherit its phase from these photons. This leads to correlations between the phases of the emitted pulses, which is detrimental to the security of QKD. The LSA works in a similar way: by injecting external photons into the laser cavity, the generated pulses inherit their phase from the external photons, rather than from spontaneous emission, preventing the phase from being randomized between pulses. Additionally, the attacker can deterministically control the phase of the injected photons, and therefore of the emitted pulses, which is a further security concern. §.§ Experiment A simple implementation of the LSA consists in Eve injecting constant, continuous-wave laser light into Alice's laser. As explained, this prevents the phase randomization in Alice's laser, locking its phase to a constant value determined by the coherent injected light. Alice may try to detect this attack by monitoring the phase at the output of her laser to check for signs of nonrandomness. However, with a successful LSA, Eve can deterministically control the phase of Alice's laser. Therefore, we propose and experimentally demonstrate a more sophisticated version of the LSA, which we refer to as the "phase-randomized LSA". In this version of the LSA, Eve injects light with a seemingly random phase, but which is nonetheless completely known to her, into Alice's transmitter. For example, she can modulate the phase of her light with a phase modulator connected to a pseudorandom number generator under her control, or alternatively she can gain-switch her laser and measure the phase before it is sent into Alice's transmitter. Either way, when Alice's laser locks to the injected light, it will adopt the seemingly random phase of that light, which Eve has full knowledge of. If Alice tries to monitor the phase of her light, she will not detect any signs of nonrandomness, even when the attack is successful. The phase-randomized LSA is more difficult to detect, and therefore is the one we implement and analyze in this work. Because of the phase-randomized version of the LSA, Alice is forced to rely on optical isolation, or to detect the incoming light from Eve using a "watchdog" detector, to prevent the attack.
It should be noted that neither optical isolation nor watchdog detectors are foolproof solutions: optical isolation can be damaged, and detectors can be blinded, for example <cit.>. Our experimental setup is shown in Fig. <ref>, where both Alice and Eve monitor the phase of their lasers by using asymmetric Mach-Zehnder interferometers.In the following we demonstrate how the naive countermeasure of monitoring Alice's phase fails to detect Eve's attack, even when the attack is highly successful. Further, we use this experimental setup to quantify the degree to which Eve is successful in locking Alice's laser in phase as a function of the level of injected optical power. This allows us to, in turn, quantify the level of optical isolation needed to mitigate the attack, in Section <ref>.Note that some previous experimental demonstrations of the LSA used of isolated laser diodes, requiring much higher injected power to compensate for the isolation losses. More importantly, it is difficult to accurately characterize the isolation of an isolated laser, and therefore it is unclear exactly how much power actually reaches the laser cavity. In our work, we use an unisolated laser (for Alice) and a power meter to accurately measure the injected power that reaches the laser cavity. Therefore, in this paper we use “injected power” to mean “injected power that reaches Alice's laser cavity”.We use two asymmetric interferometers to monitor the phase of both Eve's and Alice's laser. The intensity at the output of an asymmetric Mach-Zehnder interferometer is given byI_out = I_in/2[1 + cos(Δϕ + ϕ_0)]where I_in is the input intensity, Δϕ is the phase difference between the interfering pulses and ϕ_0 is the relative phase between both arms of the interferometer due to the difference in their length. We can therefore monitor the intensity at the interferometer output as a measure of the phase difference between successive pulses at the laser output. This allows us to measure the correlation between the phase of both lasers; if the output intensity of Eve's interferometer is highly correlated with that of Alice's, then this indicates a successful attack, giving Eve information about the phase of Alice's laser.In our experiment, we use nonstabilized interferometers, and therefore ϕ_0 drifts over time, destroying the correlation between the intensity of both lasers, even if they are locked in phase. To get around this issue, we make many measurements of the correlation, and keep only the maximum and minimum correlation values obtained, which correspond to the times when, by chance, the interferometers are in phase and are out of phase by π radians, respectively. Note that a negative correlation coefficient represents an anticorrelation, which is equally beneficial to Eve as a positive correlation. All that matters to Eve is the absolute value of the correlations. In the following, however, we will continue using maximum and minimum correlation values since this most accurately represents our experimental approach. Appendix <ref> provides further details on this method of measuring the phase correlation even when non-phase-stabilized interferometers are used. To set up the experiment, we gain-switch both lasers (distributed feedback telecom laser diodes, unisolated for Alice and isolated for Eve) at 1 GHz using a 3.35 GHz pulse-pattern generator. 
Our lasers are high-bandwidth (approximately 10 GHz), with a linewidth of approximately 1 MHz, and a tunable wavelength range of approximately 30 nm, centered at around 1550 nm. Eve's isolated laser has optical isolation of approximately 35 dB. Both lasers are coupled to polarization-maintaining single-mode fiber and are temperature controlled with use of an inbuilt thermoelectric cooler controller. We choose 1 GHz as it is representative of the modulation frequencies used in real-world QKD systems, and allows us to reliably generate fully phase randomized pulses. We set the current parameters (duty cycle, bias, and modulation current) appropriately to ensure the phase of each generated pulse is random, and independent of that of prior pulses, as previously assessed in Ref. <cit.>. We verify this for both lasers by measuring the autocorrelation of the intensity at the output of each interferometer. Both lasers are disconnected for this measurement, so none of Eve's laser light reaches Alice. We then connect both lasers using the circulator and use a variable optical attenuator to adjust the level of injected power. We use polarization-maintaining components and fiber throughout, and a manual fiber polarization controller, to match the polarization of both lasers and maximize the effectiveness of the LSA. We measure the average injected power reaching Alice's laser using a power meter. We precisely match the frequencies of both lasers by temperature tuning, using inbuilt thermoelectric cooler controllers and an optical spectrum analyzer. Finally, we measure the intensity waveforms at the output of both interferometers for 25 μs, corresponding to 25000 pulses, using two high-speed photodiodes connected to two channels of a high-bandwidth (13 GHz) oscilloscope. In the following we are particularly interested in the amount of injected power at which Eve starts to induce measurable correlations between her laser and Alice's laser. However, since we are measuring the maximum and minimum values of correlation, rather than the mean, the measured correlations will never be zero, due to statistical fluctuations, even when no light is injected into Alice's laser. Therefore, at zero injected power there will be a minimum and maximum baseline correlation, and we are interested in the amount of injected power at which the correlation rises above this baseline. The measured maximum and minimum baseline correlations at zero injected power will increase as a function of the length of the measurement: the longer a measurement, the higher the likelihood of an outlier correlation measurement pushing the maximum higher or the minimum lower. For this reason, it is important that the length of measurements throughout the experiment remains constant (in our case 25 μs). To measure the baseline, we simply disconnect Eve's laser from the circulator before making the correlation measurements (described below). The measured baseline correlation is shown by the dashed black lines in Fig. <ref>. Any increase above the maximum correlation baseline must be due to Eve's injected light, and likewise for the minimum correlation.
To measure the maximum and minimum values of correlation, using a non-phase-stabilized interferometer, we simply repeatedly measure the correlation at different times.We make 50 measurements of the correlations at each level of injected power, adjusting the injected power by changing the attenuation of the variable optical attenuator, leaving several seconds between measurements to allow the interferometer phase ϕ_0 to drift. Our results are plotted in Fig. <ref>, showing the maximum and minimum values of correlation at each level of injected power. We can clearly see that at high levels of injected power, the LSA is very successful at locking Alice's laser in phase, with correlations between Alice's laser and Eve's laser reaching absolute values above 0.8. In practice, the maximum and minimum correlation values will never reach 1 and -1, since other sources of noise, such as intensity noise, or chirp, will also influence the interferometer output, acting against the locking effects of the LSA and reducing the correlation. Therefore, for injected powers above 100 nW, Eve has near perfect knowledge of the phase of Alice's emitted pulses. From her perspective, the pulses are almost fully nonrandom, which undermines the security of the communications due to the drastically lower secure key rates of non-phase-randomized QKD. Note that despite the large correlation between both lasers, the output of Alice's laser does not display any signs of nonrandomness. We plot the distribution of intensity at the output of Alice's laser in Fig. <ref> (top), showing the signature arcsine distribution indicative of full phase randomization. Additionally, we plot the autocorrelation of the intensity in Fig. <ref> (bottom), again showing no signs of nonrandomness. This demonstrates that if Eve uses phase randomized light to perform the LSA, Alice cannot detect the attack by monitoring the output of her laser. Instead, to prevent the attack, Alice is forced to either block the incoming light using optical isolation or detect it with a monitor “watchdog” detector.As the injected power decreases, so do the correlations. The inset in Fig. <ref> shows an enlargement of the lower-power measurement results, showing that, in our experiment, Eve can induce measurable correlations with as little as 1 nW of injected power. Starting at around 1 nW, the measured correlations rise above the zero injected power baseline, showing that Eve's attack increases the correlation between both lasers. We stress that this 1 nW threshold is specific to our experiment. Other experiments, and indeed practical QKD systems, will undoubtedly have different thresholds, depending on the lasers being used, experimental conditions such as temperature, and other factors. However, our observations and analysis are widely applicable to all QKD transmitters under the LSA. One nanowatt is a much lower level of injected power than used in previous studies analyzing the LSA, and suggests that the LSA requires a lower level of injected power to be successful than previously thought <cit.>. However, note that 1 nW is the threshold at which the effects of the LSA just start to become measurable, and are therefore still very small. Indeed, our results are still consistent with those of prior studies: at 100 nW of injected power, similar to the injected power used in the work reported in Ref. <cit.>, we observe very large effects from the LSA consistent with the results reported there. 
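To make the analysis above concrete, the following minimal sketch (Python/NumPy) computes the Pearson correlation between Alice's and Eve's interferometer traces and keeps only the extreme values over repeated captures. It is an illustration rather than our acquisition code: the function names, the 1 ns sample spacing, and the fixed lag correction (see Appendix <ref>) are our own placeholders.

import numpy as np

def correlation_at_lag(alice, eve, lag_ns, dt_ns=1.0):
    """Pearson correlation between two intensity traces, with `eve` delayed by lag_ns."""
    shift = int(round(lag_ns / dt_ns))
    if shift == 0:
        return np.corrcoef(alice, eve)[0, 1]
    return np.corrcoef(alice[shift:], eve[:-shift])[0, 1]

def correlation_vs_lag(alice, eve, max_lag_ns=200, dt_ns=1.0):
    """Scan the lag in 1 ns steps (used to find the fixed channel delay, cf. Appendix <ref>)."""
    lags = np.arange(0, max_lag_ns + dt_ns, dt_ns)
    return lags, np.array([correlation_at_lag(alice, eve, lag, dt_ns) for lag in lags])

def max_min_correlation(captures, lag_ns, dt_ns=1.0):
    """captures: list of (alice_trace, eve_trace) pairs, one per repeated acquisition.

    Keeping only the extremes defeats the slow drift of the unstabilized
    interferometer phases phi_0, which would otherwise average the correlation to zero.
    """
    values = [correlation_at_lag(a, e, lag_ns, dt_ns) for a, e in captures]
    return max(values), min(values)

Repeating the max_min_correlation step for each level of injected power yields one maximum/minimum pair per power setting, which is the quantity plotted as a function of injected power.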
At this point, it is worth recalling that for the standard decoy protocol, as pointed out in Ref. <cit.>, even a minimal deviation from phase randomization would place the protocol outside the framework of assumptions necessary to guarantee its unconditional security. This conservative standpoint, therefore, prevents us from assuming that the absence of measurable correlations below 1 nW implies perfect phase randomization. In this context, recently introduced security frameworks may prove beneficial in obtaining a secret key rate even with imperfect phase randomization <cit.>. In particular, the situation explored here, where Eve can arbitrarily set each pulse phase value, aligns with the case of zero-length correlation between pulses and a nonuniform phase distribution, as discussed in Ref. <cit.>.Nonetheless, it is plausible that a situation in which correlations are undetectable corresponds to a weaker coupling between Alice's laser and Eve's laser. As a result, this can still be considered a favorable initial condition to mitigate the LSA. To this end is worth investigating the level of isolation necessary to prevent levels higher than 1 nW reaching Alice’s laser cavity. Using this minimum injected power threshold, in the next section, we take an approach similar to that in Ref. <cit.> to quantify the optical isolation required to prevent a successful LSA. §.§ CountermeasuresTo prevent the LSA and guarantee the security of the QKD communications, the transmitter must implement countermeasures. This could consist of a “watchdog" detector placed at the entrance of the QKD transmitter, monitoring any incoming light <cit.>. If this detector registers incoming light above some threshold value, then the communications can be aborted. However, this solution comes with several drawbacks. Most obvious is the increased cost and complexity of the QKD system. Another disadvantage is that there are several known security vulnerabilities of detectors used in QKD systems <cit.>. For example it has been shown that certain types of detector can be “blinded" by the shining of bright light on them, making them unresponsive <cit.>. In general, active countermeasures, as opposed to passive ones, are less desirable because of their increased complexity, which can introduce further avenues to attack, such as the aforementioned blinding. Instead, passive countermeasures are preferable. In particular, Alice can use optical isolation to block any incoming light <cit.>. An optical isolator is a device that is transparent to light traveling in one direction but opaque to light traveling in the other direction. Clearly, an ideal optical isolator would be a suitable countermeasure. However, optical isolators are not ideal, and in particular they are not able to completely block light, but rather they just attenuate the light by a large amount. Therefore, Alice cannot completely prevent light from entering her transmitter, but can only control the attenuation of the incoming light. A key quantity is therefore the minimum amount of light that needs to reach Alice's laser for Eve's attack to be successful. Clearly, if any amount of injected light, however small, will lead to a successful attack, then no amount of optical isolation can prevent it. In the previous section, we showed how we experimentally determined that, for our experimental apparatus and conditions, Eve can induce measurable correlations between the phase of her laser and that of Alice's laser by injecting as little as 1 nW of (average) optical power. 
In the following, we use 1 nW as an example minimum power threshold for a successful LSA, but we stress that other experiments and QKD systems may well have different thresholds, which should be verified independently. A second key quantity is the maximum amount of light that Eve can inject into Alice's transmitter. Clearly, if Eve can inject an unlimited amount of light, then no amount of optical isolation can guarantee the 1 nW limit. One upper limit to the amount of light that Eve can inject is given by the laser-induced damage threshold (LIDT). The LIDT is defined as the maximum power that can be transmitted through an optical fiber without damaging it. Lucamarini et al. <cit.> quote a value for the LIDT of standard single-mode silica optical fiber of 55 kW, beyond which the fiber softens and begins to melt. Reducing 55 kW of optical power to 1 nW would require on the order of 140 dB of optical isolation. Alternatively, we can upper bound the amount of light Eve can inject by using an optical fuse, which is a device intended to break and stop the transmission of any light if the power goes above some threshold. An optical fuse with a threshold of a few watts, reducing the required optical isolation to approximately 90 dB, was introduced in Ref. <cit.>. There also exist so-called optical power limiters, which do not break above a certain threshold like an optical fuse, but rather prevent the transmitted optical power from increasing beyond that threshold. A recent proposal introduced a simple power limiting device with a widely tunable power threshold, down to values of around 1 μW, which would reduce the optical isolation requirement down to approximately 30 dB <cit.>.
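As a quick sanity check on these isolation figures, the required attenuation is simply 10 log_10(P_max/P_th), with P_th the 1 nW threshold discussed above. The short snippet below reproduces the numbers quoted in this section; the variable names and the exact fuse threshold (taken here as 2 W) are our own choices.

import math

def required_isolation_dB(p_max_watts, p_threshold_watts=1e-9):
    """Attenuation (dB) needed to bring a worst-case injected power down to the LSA threshold."""
    return 10 * math.log10(p_max_watts / p_threshold_watts)

print(required_isolation_dB(55e3))  # LIDT of silica fiber      -> 137.4 dB ("on the order of 140 dB")
print(required_isolation_dB(2.0))   # optical fuse, a few watts -> 93.0 dB  ("approximately 90 dB")
print(required_isolation_dB(1e-6))  # power limiter at ~1 uW    -> 30.0 dB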
§ NUMERICAL MODEL OF THE LSA Although the effect of the LSA on the phase randomization of a gain-switched laser is perhaps the most damaging, the LSA has many other effects on the laser output, and Eve can try to use any of these to her advantage. In this section we model the effects of the LSA in simulation, by considering it as an instance of optical injection locking (OIL) <cit.>. We present the OIL laser rate equations, which can serve as a general model of the LSA and can be used to study all of its effects, including the phase randomization, but also other effects such as an increase in the power output, a reduction in the turn-on delay, or a reduction in the chirp and jitter. The OIL rate equations consist of two coupled sets of differential equations, one for modeling the master laser and one for modeling the slave laser <cit.>. Eve's master laser is described by the standard rate equations for a free-running semiconductor laser, which describe the rate of change of the carrier density (N), the photon density (S), and the phase (ϕ) <cit.>:

dN(t)/dt = I(t)/(qV) - N(t)/τ_n - g (N(t)-N_0)/(1+ϵ S(t)) S(t) + F_N(t)

dS(t)/dt = Γ g (N(t)-N_0)/(1+ϵ S(t)) S(t) - S(t)/τ_p + Γβ N(t)/τ_n + F_S(t)

dϕ(t)/dt = (α/2)[Γ g(N(t)-N_0) - 1/τ_p] + F_ϕ(t)

where I(t) is the applied current, q is the electron charge, and V is the active layer volume. τ_n and τ_p are the carrier and photon lifetimes, respectively, which quantify the average time a carrier or photon survives in the laser cavity. Γ is the mode confinement factor, which accounts for the fact that only a fraction Γ of the photons are confined to the active layer, g is the differential gain coefficient, which arises from making the approximation that the gain is linear as a function of carrier density, ϵ is the gain compression factor, which accounts for the nonlinear reduction in gain at high power outputs <cit.>, N_0 is the carrier density at transparency, β is the fraction of spontaneous emission coupled into the lasing mode, α is the linewidth enhancement factor, which quantifies the increase in linewidth due to the coupling between refractive index and carrier density in semiconductor lasers <cit.>, and F_N, F_S, F_ϕ are Langevin noise terms, which capture the effects of spontaneous emission. These terms are defined in Appendix <ref>. The power output of the laser is related to the photon density by

P(t) = V η h ν/(2Γτ_p) S(t)

where η is the differential quantum efficiency, h is Planck's constant, and ν is the laser frequency. Alice's slave laser cavity receives additional photons from Eve's master laser and, to account for these injected photons, the standard free-running rate equations need to be extended as follows <cit.>:

dN(t)/dt = dN_fr(t)/dt

dS(t)/dt = dS_fr(t)/dt + 2κ√(S_inj(t) S(t)) cos(Δϕ(t) - Δω t)

dϕ(t)/dt = dϕ_fr(t)/dt - κ√(S_inj(t)/S(t)) sin(Δϕ(t) - Δω t)

where the subscript fr denotes the standard rate equations for a free-running laser given by Eqns. <ref>-<ref>, Δϕ(t) = ϕ(t) - ϕ_inj(t) is the difference between the secondary laser phase and the phase ϕ_inj of the injected light, κ is a coupling coefficient that quantifies the rate at which injected photons enter the secondary laser cavity, S_inj is the injected photon density, and Δω is the difference in free-running optical angular frequency between the primary laser and the secondary laser. In the following simulations, we use rate equation parameters obtained by fitting to experimental measurements of a distributed feedback laser <cit.>, given in Table <ref>. Using the rate equations, we can simulate the power output and phase of Alice's gain-switched slave laser, both with and without the effects of the LSA. The blue curve in Fig. <ref> plots the power output and phase of a free-running gain-switched laser, and is obtained by solving the free-running laser rate equations without OIL terms. It displays signature features of gain-switching: a train of pulses, each pulse with one or more relaxation oscillation peaks, and each with a random phase, due to rapid phase randomization between pulses. The orange curve in Fig. <ref> plots the power output and phase of a gain-switched laser under the effects of the LSA. It is obtained by solving the rate equations with OIL terms and setting the injected power P_inj = 100 nW and the injected phase ϕ_inj = 0. These two parameters can be set to any values to simulate the effects of the LSA under different levels of injected optical power and different phase differences between the master laser and the slave laser. P_inj and ϕ_inj do not even need to be constant if, for example, Eve's master laser is itself gain-switched. The orange curve in Fig. <ref> displays several signature effects of the LSA on the output of a gain-switched laser. The effect on the phase randomization is clear: the LSA prevents the slave laser phase from being randomized, keeping it centered around a constant value set by the phase ϕ_inj of the master laser (in this case 0). This is in agreement with the experimental results demonstrating the LSA in Ref. <cit.>.
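As an illustration of how these equations can be integrated in practice, the sketch below implements the model with a simple forward-Euler step in Python/NumPy. It is a minimal sketch, not our production code: the parameter values are representative placeholders standing in for the Table <ref> fit, the drive current and time step are arbitrary, and the Langevin noise terms of Appendix <ref> are omitted (they can be added with the Euler-Maruyama step described there).

import numpy as np

# Placeholder parameters, representative of a 1550 nm DFB laser; the values used
# for the figures in this work are the fitted ones in Table I, not these.
q, h   = 1.602e-19, 6.626e-34
nu     = 1.935e14     # optical frequency (Hz)
V      = 1.5e-17      # active layer volume (m^3)
tau_n  = 1.0e-9       # carrier lifetime (s)
tau_p  = 2.0e-12      # photon lifetime (s)
g      = 1.0e-12      # differential gain (m^3 s^-1)
N0     = 1.0e24       # transparency carrier density (m^-3)
Gamma  = 0.2          # confinement factor
beta   = 1e-4         # spontaneous-emission coupling
eps    = 5e-23        # gain compression (m^3)
alpha  = 3.0          # linewidth enhancement factor
eta    = 0.1          # differential quantum efficiency
kappa  = 1e11         # injection coupling rate (s^-1)
d_omega = 0.0         # master-slave detuning (rad/s)

def photon_density(power):
    # Invert P = V*eta*h*nu/(2*Gamma*tau_p) * S to express an injected power as a photon density.
    return 2 * Gamma * tau_p * power / (V * eta * h * nu)

def simulate(current, t, P_inj=0.0, phi_inj=0.0):
    """Forward-Euler integration of the (noise-free) OIL rate equations."""
    dt = t[1] - t[0]
    N, S, phi = np.full_like(t, N0), np.full_like(t, 1e10), np.zeros_like(t)
    S_inj = photon_density(P_inj)
    for i in range(len(t) - 1):
        gain = g * (N[i] - N0) / (1 + eps * S[i])
        dN   = current(t[i]) / (q * V) - N[i] / tau_n - gain * S[i]
        dS   = (Gamma * gain - 1 / tau_p) * S[i] + Gamma * beta * N[i] / tau_n
        dphi = 0.5 * alpha * (Gamma * g * (N[i] - N0) - 1 / tau_p)
        if S_inj > 0:   # laser-seeding (optical injection) terms
            arg   = phi[i] - phi_inj - d_omega * t[i]
            dS   += 2 * kappa * np.sqrt(S_inj * S[i]) * np.cos(arg)
            dphi -= kappa * np.sqrt(S_inj / S[i]) * np.sin(arg)
        N[i + 1]   = N[i] + dN * dt
        S[i + 1]   = max(S[i] + dS * dt, 1.0)   # keep the photon density positive
        phi[i + 1] = phi[i] + dphi * dt
    power = V * eta * h * nu / (2 * Gamma * tau_p) * S
    return power, phi

# Example: 1 GHz gain-switched drive, free-running vs seeded with 100 nW.
t = np.arange(0.0, 2e-9, 1e-14)
drive = lambda ti: 30e-3 if (ti % 1e-9) < 0.4e-9 else 2e-3   # placeholder I_on / I_off
p_free, phi_free = simulate(drive, t)
p_seed, phi_seed = simulate(drive, t, P_inj=100e-9, phi_inj=0.0)
energy_free = p_free[t < 1e-9].sum() * 1e-14   # energy per pulse (J)
energy_seed = p_seed[t < 1e-9].sum() * 1e-14

The same loop, swept over P_inj, is all that is needed to generate pulse shapes and energy-per-pulse curves analogous to those discussed below.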
Likewise, the effect on the power output is apparent: the slave laser begins lasing sooner (shorter turn-on delay), and the relaxation oscillation peaks are shorter and wider. It was demonstrated in Ref. <cit.> that an increase in the photon number per pulse, or equivalently the energy per pulse, emitted by a QKD transmitter is detrimental to its performance and security. We can calculate the energy per pulse in simulation by integrating the simulated laser power output over one period. To investigate how the energy per pulse changes as a function of injected optical power, we repeat the simulations under various levels of injected power P_inj. Fig. <ref> (a) shows the resulting pulse shapes, indicating that at higher levels of injected power, the slave laser begins lasing sooner (reduced turn-on delay, indicated by the inset), while the first relaxation oscillation peak reduces in power and the second peak increases in power. By integrating these curves over one period, we can plot the energy per pulse as a function of injected power, and Fig. <ref> (b) shows an increase in energy with higher levels of injected power. These simulations again agree with previous experimental results <cit.>, and demonstrate the usefulness of the rate equation model in studying the LSA. We have shown that the rate equation model can simulate the main effects of the LSA studied experimentally to date. In the next section we demonstrate how the rate equation model can go further and can be used to generate new insight into the LSA. We study how the effects of the LSA on the total energy emitted per pulse change when using different laser current parameters, and we verify our numerical simulations with experimental measurements. We argue that a rate equation model can be a useful tool for exploring large ranges of parameters, which would be time-consuming to explore experimentally. § NEW INSIGHT INTO THE LSA USING A RATE EQUATION MODEL The pulses emitted by a QKD transmitter need to be short, phase randomized, and with a photon number close to zero <cit.>. The last requirement can be met with an arbitrary level of attenuation after the laser. The first two requirements can instead be obtained by suitable adjustment of the current signal that drives the gain-switched laser. Therefore, we can ask whether certain current driving parameters are more susceptible to the LSA than others, in terms of energy increase. We can easily investigate this question using the rate equation model by simply adjusting the current parameter I(t) and calculating the energy increase. In this way, we can explore hundreds of parameters in simulation, before undertaking time-consuming experimental work. We define the injected current as a 1 GHz pulse wave, with an on-time current I_on, an off-time current I_off, and a variable duty cycle. We choose to define the current in terms of on-time and off-time current, instead of the more typical bias and modulation current, because this makes it easier to define constraints such that, for example, the current is never negative. To model the finite bandwidth of the pulse generator used to drive the laser, we apply a 3.5 GHz low-pass filter to the injected current. Fig. <ref> gives a visual representation of the driving current. We can then vary I_on and I_off across a wide range of values subject to the following constraints: the off-time current of the laser should never be negative and should always remain below the lasing threshold, i.e.,
0<I_off<I_th, where I_th is the threshold current; and the on-time current should always be above threshold, i.e., I_on>I_th. To guarantee the requirement of narrow pulses, we set the duty cycle such that the on-time of the pulse wave is equal to the turn-on delay plus half the period of the relaxation oscillations <cit.>. Fig. <ref> (a) shows our simulation results in the form of a heatmap. For each combination of current parameters (I_on, I_off), we calculate the percentage increase in energy due to the LSA by solving the rate equations with and without injected light. To calculate the energy per pulse without injected light, we set P_inj = 0 to model a free-running laser, and integrate the solution to the rate equations over one period. To calculate the energy per pulse with injected light, we set P_inj = 100 nW, which is a level of injected power used in previous experimental studies of the LSA <cit.>. The heatmap color bar indicates the calculated percentage increase in energy per pulse. Using the same setup as in Fig. <ref>, we can experimentally verify the simulation results shown in Fig. <ref> (a). For each combination of current parameters covered by the heatmap, we measure the average output power of Alice's laser with no injected light and with 100 nW of continuous-wave injected light from Eve's laser. Our experimental results are shown in Fig. <ref> (b), and agree well with the simulations. In particular, both heatmaps display a notable line of light-colored squares rising diagonally from the bottom left corner, with darker squares elsewhere. The good agreement between experiment and simulation is evidence of the accuracy of the rate equation model in describing the LSA. On the basis of these simulations, we can see that the specific current parameters, i.e., the specific square on the heatmap, used to drive a gain-switched laser can have a large impact on the effect of the LSA on the power output of that laser. There are many factors to consider when one is choosing the current parameters to drive a gain-switched laser in a QKD transmitter. The shape of the emitted pulses, their width, chirp, and other factors are all important. The simulation and experimental results in Fig. <ref> can serve as another data point and consideration when one is choosing these parameters. For example, Alice may choose to avoid a bright square so as to minimize the energy increase caused by the LSA. On the other hand, a large increase in energy could be used to detect the presence of Eve implementing the LSA on Alice's laser. The heatmaps in Fig. <ref> can serve as one factor among others when one is choosing the best current parameters for a gain-switched laser in a QKD system. These results demonstrate the usefulness of a rate equation model for studying the LSA. Using the model, we were able to explore hundreds of parameters in simulation before conducting time-consuming experimental work. Although we focused on the energy increase, other effects of the LSA can similarly be studied, such as the reduction in the turn-on delay.
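A sketch of the parameter sweep behind a heatmap of this kind is given below; it reuses the simulate() function from the earlier sketch, and the threshold current, current ranges, fixed on-time, and grid size are placeholders (the actual sweep also applies the 3.5 GHz low-pass filter and the duty-cycle rule described above, which are omitted here for brevity).

import numpy as np

I_th = 8.4e-3                                   # placeholder threshold current (A)
I_on_values  = np.linspace(1.2 * I_th, 4.0 * I_th, 15)
I_off_values = np.linspace(0.0, 0.9 * I_th, 15)

def energy_per_pulse(I_on, I_off, on_time, P_inj):
    period, dt = 1e-9, 1e-14                    # 1 GHz clock
    t = np.arange(0.0, 2 * period, dt)
    drive = lambda ti: I_on if (ti % period) < on_time else I_off
    power, _ = simulate(drive, t, P_inj=P_inj)  # simulate() from the earlier sketch
    return power[t < period].sum() * dt

increase = np.zeros((len(I_off_values), len(I_on_values)))
for i, I_off in enumerate(I_off_values):
    for j, I_on in enumerate(I_on_values):
        on_time = 0.4e-9                        # stand-in for the turn-on-delay duty-cycle rule
        e_free   = energy_per_pulse(I_on, I_off, on_time, P_inj=0.0)
        e_seeded = energy_per_pulse(I_on, I_off, on_time, P_inj=100e-9)
        increase[i, j] = 100.0 * (e_seeded - e_free) / e_free   # one heatmap pixel (%)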
§ CONCLUSION In this work we presented an experimental and numerical study of the laser seeding attack in QKD. We implemented the LSA experimentally and measured the effect on the phase randomization as a function of injected optical power. This allowed us to quantify the level of optical isolation required to mitigate the attack. Since the effect on the phase is just one of multiple effects of the LSA, we introduced the laser rate equations as a useful tool to study and model the LSA, drawing on the literature on optical injection locking from the field of classical telecommunications. Using this model, we were able to reproduce previously published experimental results, and demonstrate how the model can be used to study other properties of the LSA. In particular, we measured the increase in power output of a gain-switched laser when subjected to the LSA with different laser current parameters. We found that certain parameters made the laser more vulnerable to the LSA, and we verified our findings experimentally. Altogether, our work contributes to the security of QKD transmitters, and provides a tool for further investigations of the LSA and other attacks that target the laser in a QKD transmitter. The work reported here was funded by the project EMPIR 19NRM06 METISQ, which received funding from the European Metrology Programme for Innovation and Research (EMPIR) cofinanced by the participating states and from the European Union's Horizon 2020 research and innovation program. V. L. acknowledges financial support from the EPSRC (EP/S513635/1) and Toshiba Europe Ltd. § MEASURING PHASE CORRELATION WITH UNSTABLE INTERFEROMETERS Since both interferometers are not phase stabilized, the relative phase between the two arms of each interferometer drifts between 0 and 2π. In terms of Eq. <ref>, ϕ_0 is not stabilized and instead drifts between 0 and 2π, at a slow rate on the order of radians per second. The corresponding drift timescale is much longer than our measurement time of 25 μs, and the drift therefore has a negligible effect on an individual measurement. However, for measurements taken at different times, ϕ_0 can take on very different values, requiring care when one is comparing results across measurements taken at different times. For example, if both interferometers have the same value of ϕ_0, and both lasers are locked in phase (Δϕ = 0), then the correlation between the waveforms will be 1. On the other hand, if there is a phase difference of π between both interferometers, then the correlation will be -1. And if the phase difference is π/2, then the measured correlations will be 0, even though both lasers are still locked in phase. Given that the value of ϕ_0 for both interferometers drifts over time, we should expect to see the correlations correspondingly drift between a maximum value and a minimum value, with a mean value of 0. We stress that this drift of the correlation is due to the unstable interferometers, and is not due to a change in the underlying correlation between the phase of Alice's laser and Eve's laser. Note that the oscilloscope does not measure the waveforms on both channels at exactly the same time. Furthermore, it takes Eve's light a few nanoseconds to reach Alice's laser, which further adds to the time mismatch between both measured waveforms. To account for this mismatch, we calculate the correlation between both waveforms as a function of time delay, or lag, by shifting one waveform with respect to the other by 1 ns intervals. An example measurement is plotted in Fig. <ref>, showing no correlations at most lags, and a very high correlation at a lag of 86 ns. This indicates that the combined effect of the time mismatch between the waveform measurements on both oscilloscope channels and the time delay of Eve's light reaching Alice's laser equals 86 ns.
In subsequent measurements, we therefore calculate the correlations between both waveforms displaced by 86 ns.§ LASER RATE EQUATION NOISE TERMSTo account for spontaneous emission noise, additional terms are added to the laser rate equations. These so-called Langevin noise terms take on different forms depending on the sources of noise being considered. To account for the effects of spontaneous emission, they are given by:F_S(t)=√(2 Γβ N(t) S(t)/τ_nΔ t)x_SF_ϕ(t)=√(Γβ N(t)/2 τ_n S(t) Δ t)x_ϕF_Z(t)=√(2 N(t)/V τ_nΔ tx_Z)F_N(t)=F_Z(t)-F_S(t)/Γ where F_Z(t) is a noise term, uncorrelated to F_S(t) and F_ϕ(t), used to define the carrier density noise term F_N(t). Δ t is the integration time step and x_S, x_ϕ, and x_Z are three independent standard normal random variables. Often the rate equations are used without noise terms, when the effects of noise are not of interest. In this case, the rate equations can be solved with use of standard numerical integration tools. However, when the noise terms are included, the rate equations become stochastic differential equations and must be solved using by stochastic numerical integration methods,the simplest of which is the Euler–Maruyama method <cit.>. ieeetr 10 bennett_quantum_1984 C. H. Bennett and G. Brassard, “Quantum cryptography: Public key distribution and coin tossing,” in International Conference on Computers, Systems & Signal Processing, pp. 175–179, Dec. 1984. gisin_quantum_2002 N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, “Quantum cryptography,” Rev. Mod. Phys., vol. 74, pp. 145–195, Mar. 2002. pirandola_advances_2020 S. Pirandola, S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, J. L. Pereira, M. Razavi, J. S. Shaari, J. S. Shaari, M. Tomamichel, M. Tomamichel, V. C. Usenko, G. Vallone, P. Villoresi, and P. Wallden, “Advances in quantum cryptography,” Adv. Opt. Photon., AOP, vol. 12, pp. 1012–1236, Dec. 2020. shor_polynomial-time_1997 P. W. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM J. Comput., vol. 26, pp. 1484–1509, Oct. 1997. shor_simple_2000 P. W. Shor and J. Preskill, “Simple Proof of Security of the BB84 Quantum Key Distribution Protocol,” Phys. Rev. Lett., vol. 85, pp. 441–444, July 2000. gottesman_security_2004-1 D. Gottesman, H.-K. Lo, N. Lutkenhaus, and J. Preskill, “Security of quantum key distribution with imperfect devices,” in International Symposium onInformation Theory, 2004. ISIT 2004. Proceedings., pp. 136–, June 2004. lo_measurement-device-independent_2012 H.-K. Lo, M. Curty, and B. Qi, “Measurement-Device-Independent Quantum Key Distribution,” Phys. Rev. Lett., vol. 108, p. 130503, Mar. 2012. pereira_quantum_2019 M. Pereira, M. Curty, and K. Tamaki, “Quantum key distribution with flawed and leaky sources,” npj Quantum Information, vol. 5, pp. 1–12, July 2019. xu_secure_2020 F. Xu, X. Ma, Q. Zhang, H.-K. Lo, and J.-W. Pan, “Secure quantum key distribution with realistic devices,” Rev. Mod. Phys., vol. 92, p. 025002, May 2020. lucamarini_practical_2015 M. Lucamarini, I. Choi, M. Ward, J. Dynes, Z. Yuan, and A. Shields, “Practical Security Bounds Against the Trojan-Horse Attack in Quantum Key Distribution,” Phys. Rev. X, vol. 5, p. 031030, Sept. 2015. yuan_resilience_2011 Z. L. Yuan, J. F. Dynes, and A. J. Shields, “Resilience of gated avalanche photodiodes against bright illumination attacks in quantum cryptography,” Appl. Phys. Lett., vol. 98, p. 231104, June 2011. 
qian_robust_2019 Y.-J. Qian, D.-Y. He, S. Wang, W. Chen, Z.-Q. Yin, G.-C. Guo, and Z.-F. Han, “Robust countermeasure against detector control attack in a practical quantum key distribution system,” Optica, OPTICA, vol. 6, pp. 1178–1184, Sept. 2019. braunstein_side-channel-free_2012 S. L. Braunstein and S. Pirandola, “Side-Channel-Free Quantum Key Distribution,” Phys. Rev. Lett., vol. 108, p. 130502, Mar. 2012. lucamarini_overcoming_2018 M. Lucamarini, Z. L. Yuan, J. F. Dynes, and A. J. Shields, “Overcoming the rate–distance limit of quantum key distribution without quantum repeaters,” Nature, vol. 557, pp. 400–403, May 2018. sun_effect_2015 S.-H. Sun, F. Xu, M.-S. Jiang, X.-C. Ma, H.-K. Lo, and L.-M. Liang, “Effect of source tampering in the security of quantum cryptography,” Phys. Rev. A, vol. 92, p. 022304, Aug. 2015. lee_free-space_2017 M. S. Lee, M. K. Woo, J. Jung, Y.-S. Kim, S.-W. Han, and S. Moon, “Free-space QKD system hacking by wavelength control using an external laser,” Opt. Express, OE, vol. 25, pp. 11124–11131, May 2017. pang_hacking_2020 X.-L. Pang, A.-L. Yang, C.-N. Zhang, J.-P. Dou, H. Li, J. Gao, and X.-M. Jin, “Hacking Quantum Key Distribution via Injection Locking,” Phys. Rev. Applied, vol. 13, p. 034008, Mar. 2020. huang_laser-seeding_2019 A. Huang, A. Navarrete, S.-H. Sun, P. Chaiwongkhot, M. Curty, and V. Makarov, “Laser-Seeding Attack in Quantum Key Distribution,” Phys. Rev. Applied, vol. 12, p. 064043, Dec. 2019. zhang_analysis_2022 X.-X. Zhang, M.-S. Jiang, Y. Wang, Y.-F. Lu, H.-W. Li, C. Zhou, Y. Zhou, and W.-S. Bao, “Analysis of an injection-locking-loophole attack from an external source for quantum key distribution,” Physical Review A, vol. 106, no. 6, p. 062412, 2022. lo_security_2007-1 H.-K. Lo and J. Preskill, “Security of quantum key distribution using weak coherent states with nonrandom phases,” Quantum Info. Comput., vol. 7, pp. 431–458, July 2007. paraiso_advanced_2021 T. K. Paraïso, R. I. Woodward, D. G. Marangon, V. Lovic, Z. Yuan, and A. J. Shields, “Advanced Laser Technology for Quantum Communications (Tutorial Review),” Adv Quantum Tech, p. 2100062, Aug. 2021. jofre_true_2011 M. Jofre, M. Curty, F. Steinlechner, G. Anzolin, J. P. Torres, M. W. Mitchell, and V. Pruneri, “True random numbers from amplified quantum vacuum,” Opt. Express, OE, vol. 19, pp. 20665–20672, Oct. 2011. yuan_10-mb/s_2018 Z. Yuan, A. Plews, R. Takahashi, K. Doi, W. Tam, A. W. Sharpe, A. R. Dixon, E. Lavelle, J. F. Dynes, A. Murakami, M. Kujiraoka, M. Lucamarini, Y. Tanizawa, H. Sato, and A. J. Shields, “10-Mb/s Quantum Key Distribution,” Journal of Lightwave Technology, vol. 36, pp. 3427–3433, Aug. 2018. boaron_secure_2018 A. Boaron, G. Boso, D. Rusca, C. Vulliez, C. Autebert, M. Caloz, M. Perrenoud, G. Gras, F. Bussières, M.-J. Li, D. Nolan, A. Martin, and H. Zbinden, “Secure Quantum Key Distribution over 421 km of Optical Fiber,” Phys. Rev. Lett., vol. 121, p. 190502, Nov. 2018. lydersen_hacking_2010 L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar, and V. Makarov, “Hacking commercial quantum cryptography systems by tailored bright illumination,” Nature Photon, vol. 4, pp. 686–689, Oct. 2010. huang_laser-damage_2020 A. Huang, R. Li, V. Egorov, S. Tchouragoulov, K. Kumar, and V. Makarov, “Laser-Damage Attack Against Optical Attenuators in Quantum Key Distribution,” Phys. Rev. Appl., vol. 13, p. 034017, Mar. 2020. lovic_characterizing_2021 V. Lovic, D. Marangon, M. Lucamarini, Z. Yuan, and A. 
Shields, “Characterizing Phase Noise in a Gain-Switched Laser Diode for Quantum Random-Number Generation,” Phys. Rev. Applied, vol. 16, p. 054012, Nov. 2021. curras-lorenzo_security_2023 G. Currás-Lorenzo, K. Tamaki, and M. Curty, “Security of quantum key distribution with imperfect phase randomisation,” Apr. 2023. arXiv:2210.08183 [quant-ph]. sixto_secret_2023 X. Sixto, G. Currás-Lorenzo, K. Tamaki, and M. Curty, “Secret key rate bounds for quantum key distribution with non-uniform phase randomization,” Apr. 2023. arXiv:2304.03562 [quant-ph]. nahar_imperfect_2023 S. Nahar, T. Upadhyaya, and N. Lütkenhaus, “Imperfect Phase-Randomisation and Generalised Decoy-State Quantum Key Distribution,” Apr. 2023. arXiv:2304.09401 [quant-ph]. ponosova_protecting_2022 A. Ponosova, D. Ruzhitskaya, P. Chaiwongkhot, V. Egorov, V. Makarov, and A. Huang, “Protecting fiber-optic quantum key distribution sources against light-injection attacks,” PRX Quantum, vol. 3, p. 040307, Oct. 2022. shin-ichi_optical_2004 T. Shin-ichi and S. Inoue, “Optical fuse made of silica glass optical fibers spliced through low-melting glass with carbon-coating,” in Proceedings of the XXth International Congress on Glass, Report No. O-14-010, (Kyoto, Japan), 2004. zhang_securing_2021 G. Zhang, I. W. Primaatmaja, J. Y. Haw, X. Gong, C. Wang, and C. C. W. Lim, “Securing Practical Quantum Communication Systems with Optical Power Limiters,” PRX Quantum, vol. 2, p. 030304, July 2021. cartledge_extraction_1997 J. C. Cartledge and R. C. Srinivasan, “Extraction of DFB laser rate equation parameters for system simulation purposes,” Journal of Lightwave Technology, vol. 15, pp. 852–860, May 1997. fatadin_numerical_2006 I. Fatadin, D. Ives, and M. Wicks, “Numerical simulation of intensity and phase noise from extracted parameters for CW DFB lasers,” IEEE Journal of Quantum Electronics, vol. 42, pp. 934–941, Sept. 2006. koch_effect_1986 T. L. Koch and R. A. Linke, “Effect of nonlinear gain reduction on semiconductor laser wavelength chirping,” Appl. Phys. Lett., vol. 48, pp. 613–615, Mar. 1986. henry_theory_1982 C. Henry, “Theory of the linewidth of semiconductor lasers,” IEEE Journal of Quantum Electronics, vol. 18, pp. 259–264, Feb. 1982. troger_novel_1999 J. Troger, P.-A. Nicati, L. Thevenaz, and P. Robert, “Novel measurement scheme for injection-locking experiments,” IEEE Journal of Quantum Electronics, vol. 35, pp. 32–38, Jan. 1999. lau_enhanced_2009 E. K. Lau, L. J. Wong, and M. C. Wu, “Enhanced Modulation Characteristics of Optical Injection-Locked Lasers: A Tutorial,” IEEE Journal of Selected Topics in Quantum Electronics, vol. 15, pp. 618–633, May 2009. liu_optical_2020 Z. Liu and R. Slavík, “Optical Injection Locking: From Principle to Applications,” Journal of Lightwave Technology, vol. 38, pp. 43–59, Jan. 2020. bjerkan_measurement_1996 L. Bjerkan, A. Royset, L. Hafskjaer, and D. Myhre, “Measurement of laser parameters for simulation of high-speed fiberoptic systems,” Journal of Lightwave Technology, vol. 14, pp. 839–850, May 1996. yuan_interference_2014 Z. Yuan, M. Lucamarini, J. Dynes, B. Fröhlich, M. Ward, and A. Shields, “Interference of Short Optical Pulses from Independent Gain-Switched Laser Diodes for Quantum Secure Communications,” Phys. Rev. Applied, vol. 2, p. 064006, Dec. 2014. shakhovoy_influence_2021 R. Shakhovoy, V. Sharoglazova, A. Udaltsov, A. Duplinskiy, V. Kurochkin, and Y. 
Kurochkin, “Influence of Chirp, Jitter, and Relaxation Oscillations on Probabilistic Properties of Laser Pulse Interference,” IEEE Journal of Quantum Electronics, vol. 57, pp. 1–7, Apr. 2021. higham_algorithmic_2001 D. J. Higham, “An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations,” SIAM Rev., vol. 43, pp. 525–546, Jan. 2001.
http://arxiv.org/abs/2310.17803v1
{ "authors": [ "Victor Lovic", "Davide G. Marangon", "Peter. R. Smith", "Robert I. Woodward", "Andrew J. Shields" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231026223403", "title": "Quantified Effects of the Laser Seeding Attack in Quantum Key Distribution" }
Department of Physics, University of CincinnatiDepartment of Astronomy, University of MichiganCentre for Extragalactic Astronomy, Durham University Department of Astronomy and Astrophysics/Kavli Institute for Cosmological Physics, University of Chicago Institute of Theoretical Astrophysics, University of OsloSteward ObservatoryUniversity of Arizona, Observational Cosmology Lab, Code 665, NASA Goddard Space Flight Center Department of Physics/Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of TechnologyThrough observational tests of strong lensing galaxy clusters, we can test simulation derived structure predictions that follow from Λ Cold Dark Matter (ΛCDM) cosmology. The shape and centroid deviations between the total matter distribution, stellar matter distributions, and hot intracluster gas distribution serve as an observational test of these theoretical structure predictions. We measure the position angles, ellipticities, and locations/centroids of the brightest cluster galaxy (BCG), intracluster light (ICL), the hot intracluster medium (ICM), and the core lensing mass for a sample of strong lensing galaxy clusters from the SDSS Giant Arcs Survey (SGAS). We utilize HST WFC3/IR imaging data to measure the shapes/centroids of the ICL and BCG distributions and use Chandra ACIS-I X-ray data to measure the shapes/centroids of ICM. Additionally, we measure the concentration parameter (c) and asymmetry parameter (A) to incorporate cluster dynamical state into our analysis. Using this multicomponent approach, we attempt to constrain the astrophysics of our strong lensing cluster sample and evaluate the different components in terms of their ability to trace out the DM halo of clusters in various dynamical states.Understanding Shape and Centroid Deviations in 39 Strong Lensing Galaxy Clusters in Various Dynamical States Raven [email protected] Matthew B. Bayliss1 Keren Sharon2 Guillaume Mahler3 Michael D. Gladders4 Håkon Dahle5 Michael K. Florian6 Jane R. Rigby7 Michael McDonald8 Lauren Elicker1 M. Riley Owens1January 14, 2024 =========================================================================================================================================================================================================================== § INTRODUCTIONGalaxy clusters form at the nodes of our Universe's cosmic web. As detailed in Λ Cold Dark Matter (ΛCDM) physics, these galaxy clusters form via the gradual accretion and incorporation of originally separate halo systems <cit.>. Through this process of hierarchical mergers, they are the most massive self-gravitating objects in the known universe <cit.>.In an idealized system where the cluster is unaffected by interfering astrophysical phenomena at all scales, the brightest cluster galaxy (BCG), intracluster light (ICL), and the hot intracluster medium (ICM) should align with the DM halo of a galaxy cluster defined by its core lensing mass <cit.>. Even when we incorporate the complex astrophysical landscape of galaxy clusters, we still expect the differences between these distributions to be small and infrequent. However, previous work has found that some of the mass components are not always aligned with the DM halo or with each other <cit.>. Utilizing the constraining power of strong lensing mass models, we can more accurately measure the centroid, shape, and orientation of the DM dominated gravitational potential to more accurately quantify the frequency of these deviations. 
In some cases, this is due to the disturbed nature of the cluster <cit.>, though the frequency for which deviations occur in relaxed vs. disturbed clusters is largely unexplored in the current literature.Given enough time to relax the BCG and ICL should revert to their shared orientation and centroid with their DM halo <cit.>. Still, Kim et al. 2017 <cit.> and Harvey et al. 2017 <cit.> found that relaxed clusters can have BCGs that show residual "wobbling." This is beyond ΛCDM predictions. Though the ICL has not been found to "wobble" in the same way, the effect of dynamical disruptions on the ICL’s ability to trace out the shape of the DM halo can be tested using DM halo models derived from strong lensing. The ICM gas can show extreme misalignments for disturbed clusters. Additionally, hydrodynamical gas oscillations in relaxed clusters can cause some deviations in the ICM to persist even when the stellar components have realigned with the core lensing mass <cit.>. § DATA AND MEASUREMENTSIn this work, we measure the shape and centroid of the BCG, ICL, and core lensing mass derived from strong lensing for 39 clusters from the Sloan Giant Arcs Survey (SGAS). Each sample cluster has a corresponding well defined lens model derived from multiband HST data and spectroscopic data <cit.>. A subset of 27 clusters have Chandra ACIS-I X-ray data that allow us to make measurements of the ICM in tandem with the other 3 components. §.§ BCG/ICL Measurement In order to isolate the BCG, we modeled the core using . To separate the ICL from all other objects, we usedto derive masks for all objects in the field. For both distributions, we use iterative elliptical isophote fitting applying the Pythonfunction.§.§ ICM Measurement We start by using the publicly availabletools to process our Chandra X-ray event data. The data was point-source subtracted, binned by 2”, and set to only include events in the broad energy band (0.5-7 keV with an effective energy of 2.3 keV). The resulting processed ICM distribution was modeled using themodeling functions, applying 2 elliptical Gaussian distributions to model the ICM while simultaneously fitting the background. Thefitting statistic <cit.> is used to derive the best fit parameters and 1 sigma deviation for each object.§.§ Dynamical State MeasurementWe measure both the concentration and asymmetry of each of our clusters in order to understand their dynamical states. Both these parameters utilize the reduced Chandra data described in <ref>.We define concentration by equation <ref>. c_[R500]=Flux(r<0.2*R500)/Flux(r<R500). Any object with a concentration measurement c > 0.25 is considered relaxed whereas any object with a concentration measurement c < 0.3 is disturbed <cit.>. Objects that fall between 0.25 < c < 0.3 require additional relaxation proxy measurements.In addition to concentration, we measure the asymmetry found by rotating the initial X-ray distribution I by 180^∘ and subtracting the rotated distribution R from the original distribution as illustrated by equation <ref> <cit.>. 
A_180=∑(|I-R|)/∑I Objects with an asymmetry parameter A_180 < 1.1 are considered relaxed whereas objects with A_180 > 1.1 are disturbed <cit.>.Combining these 2 measurements along with by-eye inspection we are able to accurately define the physical state of the galaxy clusters in our sample.§.§ Strong Lensing Models and Core Lensing Mass Measurement The strong lensing models for these galaxy clusters were derived in a series of papers by Sharon and colleagues <cit.> using the publicly available Lenstool software <cit.>. We use 100 lens models drawn from theMCMC with the derived lensing parameters to sample the posterior probability distribution. Though strong lensing allows us to study the distribution at small scales, the obvious core cluster that dominates the distribution at large scales is the component we are interested in since we would expect it to align with the BCG, ICL, and ICM components. The parameters from the most likely image plane model are taken to be the true values of the core lensing mass.§ RESULTS§.§ Position Angles Figure <ref> shows the various cluster components’ position angle differences with respect to the core lensing mass. Equation <ref> defines the difference in position angle. ΔPA=|PA_1-PA_2|(|PA_1-PA_2|≤90^∘)ΔPA=180-|PA_1-PA_2|(|PA_1-PA_2|>90^∘) 0^∘≤ΔPA≤90^∘ For disturbed and relaxed objects alike, we mostly measure small position angle differences which implies that cluster orientation is maintained from a few tens of kpc up to ∼1Mpc. This consistency over large spatial scales has been previously observed <cit.>.Still, we find multiple instances of large position angle differences (ΔPA>30^∘), more so for disturbed clusters than relaxed ones. All components are sensitive to dynamical disruptions. However, the relatively small number of large position angle differences between the core lensing mass and ICL suggests that the ICL may be the best proxy for the DM halo distribution, consistent with previous studies <cit.>. §.§ EllipticitiesFigure <ref> compares the ellipticity measurements for the various cluster components. We define ellipticity as the flattening parameter described by e=1-b/a, where a and b are the semi-major and semi-minor axes.Overall, there is no clear ellipticity trends based on degree of relaxation. From the top-left panel, we see that the DM and the ICL have roughly similar ellipticities with a large scatter. As seen in the bottom 2 panels, the ICM is rounder than the DM or ICL components due to hydrodynamical effects occurring in the ICM <cit.>. In the top-right panel, we see the BCG is more circular due to dynamical friction caused by high stellar density <cit.>. §.§ CentroidsFrom figure <ref>, we see the centroid difference comparisons of the various components of our galaxy clusters. We use the difference in projected radius as defined by ΔR=|r_1-r_2| in units of kpc to quantify the difference in centroid.As expected, we measure small deviations in centroid for the ICL and BCG when compared to the core lensing mass which illustrates the typical alignment of the stellar components with respect to the center of the DM potential. The ICL is very slightly more aligned with the core lensing mass centroid than the BCG since the BCG experiences a greater residual wobble. Even after excluding obvious major merger systems with two distinct/significantly radially displaced cores, we still see that the ICM Gas is displaced much more than the ICL or BCG when compared to the core lensing mass. 
This may happen because major merger activity in the cluster’s past can cause ICM “sloshing” <cit.>. The sloshing behavior can persist even after the stellar components have relaxed back to the DM centroid, since it is a consequence of hydrodynamical physics that affects the ICM only <cit.>. Surprisingly, we do not see a tendency for the BCG or ICL to have much larger radial displacements in disturbed clusters. This could be because the stellar components have relaxed back to the DM centroid but still experience displacements due to small-scale astrophysics rather than merger activity. For the ICM, there is a slight preference for larger displacements in disturbed clusters, owing to the ICM's sensitivity to mergers.
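For concreteness, the dynamical-state proxies and the position-angle difference defined above can be written down compactly; the sketch below assumes a point-source-subtracted, exposure-corrected X-ray image held as a 2-D NumPy array that is cropped so the cluster centre sits at the array centre, with R500 given in pixels (all names are illustrative and this is not the reduction pipeline actually used):

import numpy as np

def concentration(img, xc, yc, r500_pix):
    # c = flux(r < 0.2 * R500) / flux(r < R500), cf. the definition above.
    y, x = np.indices(img.shape)
    r = np.hypot(x - xc, y - yc)
    return img[r < 0.2 * r500_pix].sum() / img[r < r500_pix].sum()

def asymmetry(img):
    # A_180 = sum(|I - R|) / sum(I), with R the image rotated by 180 degrees.
    # np.rot90(img, 2) rotates about the array centre, so the image must be
    # centred on the cluster beforehand.
    rotated = np.rot90(img, 2)
    return np.abs(img - rotated).sum() / img.sum()

def delta_pa(pa1_deg, pa2_deg):
    # Acute position-angle difference, folded into the range [0, 90] degrees.
    d = abs(pa1_deg - pa2_deg) % 180.0
    return min(d, 180.0 - d)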
http://arxiv.org/abs/2310.18250v1
{ "authors": [ "Raven Gassis", "Matthew B. Bayliss", "Keren Sharon", "Guillaume Mahler", "Michael D. Gladders", "Håkon Dahle", "Michael K. Florian", "Jane R. Rigby", "Michael McDonald", "Lauren Elicker", "M. Riley Owens" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20231027163652", "title": "Understanding Shape and Centroid Deviations in 39 Strong Lensing Galaxy Clusters in Various Dynamical States" }
[email protected]@berkeley.edu Department of Civil and Environmental Engineering, University of California, Berkeley, CA, 94720, United States[correspondingauthor]Corresponding author: Ziqi WangPerturbed by natural hazards, community-level infrastructure networks operate like many-body systems, with behaviors emerging from coupling individual component dynamics with group correlations and interactions. It follows that we can borrow methods from statistical physics to study the response of infrastructure systems to natural disasters. This study aims to construct a joint probability distribution model to describe the post-hazard state of infrastructure networks and propose an efficient surrogate model of the joint distribution for large-scale systems. Specifically, we present maximum entropy modeling of the regional impact of natural hazards on civil infrastructures. Provided with the current state of knowledge, the principle of maximum entropy yields the “most unbiased” joint distribution model for the performances of infrastructures. In the general form, the model can handle multivariate performance states and higher-order correlations. In a particular yet typical scenario of binary performance state variables with knowledge of their mean and pairwise correlation, the joint distribution reduces to the Ising model in statistical physics. In this context, we propose using a dichotomized Gaussian model as an efficient surrogate for the maximum entropy model, facilitating the application to large systems. Using the proposed method, we investigate the seismic collective behavior of a large-scale road network (with 8,694 nodes and 26,964 links) in San Francisco, showcasing the non-trivial collective behaviors of infrastructure systems. collective behavior; infrastructure systems; maximum entropy modeling; natural hazards§ HIGHLIGHTS* Maximum entropy modeling for regional hazard responses of infrastructure systems.* A surrogate of the maximum entropy model for large-scale systems. * Collective seismic behaviors of a large-scale road network. § INTRODUCTIONCivil structures and infrastructures, when forming into networked systems such as transmission (e.g., water, gas, and power) and transportation networks, serve as the backbone of a community. In assessing their risks from natural hazards, stakeholders are not only interested in the performance of individual structures but also the community-level integrity and functionality. Such assessments require accurate and efficient modeling of infrastructure network responses to hazards <cit.>. However, the collective hazard behavior of infrastructures is intricate owing to a large number of components, complex network topology, component inter-dependencies, and incomplete knowledge. To reveal community-level regularities, the primary step is understanding the direct impacts of hazards on infrastructure systems. Due to the aleatory uncertainties in the occurrence and intensity of hazards, probabilistic approaches are dominant. In performance-based engineering, fragility functions are widely used to describe the structure's probability of exceeding limit states (such as performance/damage levels) as a function of hazard intensities. In performance-based earthquake engineering, Incremental Dynamic Analysis (IDA) <cit.>, Multiple Stripe Analysis (MSA) <cit.>, cloud analysis <cit.>, and extended fragility analysis <cit.>, are widely recognized approaches for fragility analysis. 
It was shown in <cit.> that IDA provides an upper bound of the actual structural fragility and the cloud method only provides suboptimal fragility curves due to its inherent assumptions in the objective function; the MSA was shown to be a subcase of the Bernoulli model introduced by Shinozuka et al. <cit.>, which is then part of the extended fragility analysis framework. In performance-based wind engineering <cit.>, fragility analysis emphasizes more on the limit states related to serviceability and comfort <cit.>. Fragility functions are typically defined at the structure-level, i.e., a specific fragility curve is assigned to each of the civil structures in a region of interest. In this context, the correlation between the performance states of different structures is contributed by the interdependencies of local intensity measures of hazards; it is not straightforward to consider the contribution from the similarity of structures. With the ever-increasing computational power, a simulation-based approach using probabilistic hazard and physics-based infrastructure models becomes viable to complement fragility analysis <cit.>. Physics-based simulations can capture complex interdependencies among structures, at the cost of computing power and interpretability. In this paper, we present maximum entropy modeling <cit.> as a complementary perspective to fragility and simulation-based approaches. The maximum entropy modeling has gained much success in biostatistics <cit.>, such as inferring direct interactions from gene expression data <cit.>, exploring spatial sources of disease outbreaks <cit.>, and uncovering patterns in brain activity <cit.>. The method has the potential to reveal interactions between different groups of network components. However, its applicability in regional hazard impact assessment for civil infrastructures has yet to beinvestigated. In this study, we will show the procedures of building maximum entropy-based models for the responses of infrastructure systems to hazards. Furthermore, we emphasize the collective behavior in hazard responses of infrastructure components. This concept is rarely discussed in civil engineering, but it is often stressed in neuroscience <cit.>, social sciences <cit.>, and material science <cit.>. Weak local, microscopic correlations can trigger nonlocal, macroscopic behaviors <cit.>. In the context of civil engineering, a meaningful collective behavior is the system-level transition from functioning to failure. Using the maximum entropy modeling equipped with the lens of statistical physics <cit.>, we can investigate the phase transitions <cit.> and critical phenomena to better understand the hazard resilience of infrastructures. Incidentally, it is worth mentioning that collective behaviors could trigger cascading behaviors <cit.>, but the former focuses on spontaneous responses while the latter emphasizes sequential propagation processes.A substantial drawback of the maximum entropy model is the poor scalability toward larger systems. Learning a maximum entropy model is an important topic of Boltzmann machine learning <cit.>. For high-dimensional models, a computational bottleneck in the training process is the sample estimates for the mean and cross-moment values of a high-dimensional random vector; this computation needs to be performed at each iterative learning step. 
Since independent sampling from the maximum entropy model is typically infeasible, the sample estimates are often obtained from a Markov Chain Monte Carlo (MCMC) algorithm <cit.>, which is computationally demanding. Approximate approaches such as contrastive divergence learning <cit.> and pseudo maximum likelihood estimations <cit.> have been proposed to accelerate the training of maximum entropy models, at the cost of accuracy <cit.> and stability <cit.>. Moreover, an MCMC algorithm will be required again in sampling from the trained maximum entropy model. In this paper, we adopt an alternative route of using near-maximum entropy models. Specifically, we investigate the use of a dichotomized Gaussian model <cit.>, a near-maximum entropy model <cit.> allowing for independent sampling, as an efficient surrogate of the maximum entropy model for large-scale systems.This paper is organized as follows. Section <ref> introduces the formulation of maximum entropy models for regional hazard responses of infrastructure systems. Section <ref> presents the training process for the maximum entropy model. Section <ref> introduces a surrogate model for the maximum entropy model. Section <ref> demonstrates the proposed method in analyzing the collective behavior of a large-scale road network under an earthquake scenario. Section <ref> discusses possible future research topics and Section <ref> provides concluding remarks.§ MAXIMUM ENTROPY MODELINGThe principle of maximum entropy states that, among all probability distributions that satisfy the current state of knowledge about a system, the distribution that maximizes the information entropy is the most unbiased, and thus the “best", representation of the system <cit.>. Entropy, in this context, is a measure of uncertainty. A distribution with maximum entropy is the least informative in the sense that it makes the least amount of assumptions beyond the known constraints. The maximum entropy model has significant potential in modeling the collective hazard responses of networked civil systems. In this section, we will propose a general model that can accommodate to various hazards. §.§ A brief introduction to the principle of maximum entropyConsider a random variable X∈{s_1, s_2, …,s_n} representing the discrete performance state of a structure. In performance-based engineering, s_i can represent performance levels such as “immediate occupancy", “life safety", “collapse prevention", and “collapse". Let p(x) denote the unknown distribution of X, i.e., p(x)≡ℙ(X=x). We seek the distribution p(x) under the constraints, i.e., current state of knowledge, of generalized moments expressed by⟨ f_i(X) ⟩ = ∑_x p(x) f_i(x) ,i=1,2,…,m ,where f_i are functions of X and the expectations ⟨ f_i(X) ⟩ are known, e.g., collected from data, estimated from computational models, etc. It is worth mentioning that the form of Eq. (<ref>) is fairly general to represent various quantities collected in practice. For example, on top of the conventional moments where f_i(x)=x^k, k∈ℕ^+, Eq. (<ref>) can also represent probability constraints if f_i(x)=1(x-s_k), where 1 is a binary indicator function for x-s_k=0. The problem of determining p(x) given Eq. (<ref>) can be ill-defined if the constraints are insufficient for a unique solution. Naturally, information theory <cit.> enters the picture as a theoretical foundation to introduce additional assumptions. 
Specifically, information theory defines the information entropy as a quantity/functional accounting for the “amount of uncertainty” encoded in a probability distribution:H(X) = -∑_x p(x) log p(x) .Under the constraints of Eq. (<ref>), the most “unbiased” probability distribution must maximize the entropy. It follows that the Lagrangian multipliers λ and μ_i can be introduced to maximizethe following equationL(p) =-∑_x p(x) log p(x) + λ(∑_x p(x)-1) + ∑_iμ_i (∑_x p(x) f_i(x) - ⟨ f_i(X) ⟩) .The solution takes the Boltzmann distribution form:p(x)=1/Zexp(∑_iμ_i f_i(x)) ,where μ_i and Z can be determined from the constraints together with the normalization condition; the normalizing constant Z is also called partition function in statistical physics.The equations introduced above can be extended to the multivariate case such that x is replaced by vector x representing the joint state of multiple structures. Hereafter, we provide a simple example for illustration purposes. Assume there are three individual structures in a network. Their joint state is denoted as x=(x_1,x_2,x_3)∈{0,1}^3, where 0 denotes “safe" and 1 “failure". Our current state of knowledge is their individual failure probabilities, i.e., ⟨ X_i⟩, i=1,2,3. It follows that the maximum entropy joint distribution is p(x)=1/Zexp(∑_i=1^3μ_i x_i) ,where μ_i are “free" parameters of the model that need to be inferred from the constraints. Notice that Z is not a free parameter because it is coupled with μ_i by the normalization condition. Since there is no second-order information on X, the maximum entropy solution yields independence between the three random variables. The maximum entropy distributionis a parsimonious model with the minimum number of free parameters that is consistent with the current state of knowledge. The number of free parameters is equal to the number of independent constraints. §.§ Modeling regional hazard response under cross-moment constraintsFor a community of d structures, let X=(X_1,X_2,…,X_d)∈{s_1,s_2,…,s_n}^d denote their (random) performance states. We assume that the constraints are cross-moments, then the f_i( x) in Eq. (<ref>) has the form:f_i( x)=x_1^k^i_1·x_2^k^i_2·...· x_d^k^i_d , i=1,2,...,m ,where k^i_1,k^i_2,...,k^i_d∈ℕ. The previous example withEq. (<ref>) corresponds to cross-moment constraints of order 1, i.e., ∑_j k^i_j=1. If second or higher-order information is collected, the maximum entropy distribution takes the formp(x)=1/Zexp(∑_i ℋ_i x_i+∑_i,j𝒥_i j x_i x_j+∑_i,j,k𝒦_i j k x_i x_j x_k+⋯) ,where to simplify the notations we let the summations run over {1,2,...,d}; this has to be accompanied by setting the parameter to zero if the corresponding cross-moment is not collected. For example, suppose X_1X_2 is not collected, we set 𝒥_1,2≡0. The total number of free parameters in Eq. (<ref>) should equal the number of constraints, m. In civil engineering practice, collecting information regarding the first and second-order cross-moments are most common. In this context, if we further focus on a binary failure/safe performance state, Eq. (<ref>) drops the third and higher-order terms and reduces to the Ising model in statistical physics. On the other hand, if the number of performance levels approaches infinity, we can model X as continuous. In this case, the maximum entropy model under mean and covariance constraints is the multivariate Gaussian. §.§ The Ising modelLet X=(X_1,X_2,…,X_d)∈{0,1}^d, and further assume that the first and second-order cross-moments are collected. 
The maximum entropy distribution admits a compact form:p(x;J)=1/Z(J)exp(-H( x;J))=1/Z(J)exp(xJx) ,where J is a d× d matrix, and it is seen that the Hamiltonian H is quadratic. This compact form is derived using the identity x_i^2=x_i for x_i∈{0,1}, indicating the first-order terms can be absorbed into the diagonal entries of J. The conventional Ising model assumes a binary state of {-1,1} and thus requires an explicit decomposition of first and second-order terms (see Eq.(<ref>)). However, this difference is trivial because the number of free parameters stays the same, i.e., equals the number of independent constraints, regardless of the superficial equation form. Suppose the full covariance matrix[Here, if the full covariance matrix of X is known, it would be redundant to specify the mean vector.] of X is collected, J has (d^2+d)/2 parameters/entries to be determined. This task is challenging for a large d. § PARAMETER IDENTIFICATIONIn this section, we focus on identifying the Ising model parameters, while the algorithms discussed here can be applied to general maximum entropy models. §.§ Preliminary formulationsFor applications of the Ising model to regional hazard responses, we may meet two scenarios: (1) random samples of the performance states of structures, i.e., samples of X, are collected, and (2) the first and second-order cross-moments of X are specified. The first scenario becomes relevant if a regional-scale physics-based simulator is available, so that it can generate random structural responses under a stochastic hazard model. The second scenario fits the situation where one has empirical models for failure probabilities and pairwise correlations, e.g., fragility functions and spatial correlation models for hazard responses. It is worth mentioning that there exists alternative scenarios, such as a “one-shot" sample of regional hazard response is collected from post-hazard survey. This “third" scenario needs to be augmented by additional models/assumptions/data to make the inference possible, such that it will eventually be transformed into scenario (1) or (2). In this work, we address the parameter identification for both scenarios. We start by assuming that N independent and identically distributed observations 𝒟 = {x^(i)}_i=1^N on X are collected, where x^(i)∈{0, 1}^d; this assumption can be relaxed in a later stage. We need to find the parameter matrix J that maximizes the likelihood functionJ^* =max_ Jℒ(J)=max_ J∏_ x∈𝒟p(x;J) ,where ℒ is the likelihood function and p(x;J) is from Eq. (<ref>). If wehave some prior knowledge on J, i.e., specifying a prior distribution p(J), we can extend the point estimation of J into a posterior distribution p(J|𝒟) ∝ℒ(J)p( J). In this work, we focus on the more basic question of point estimation by maximum likelihood, leaving the Bayesian parameter estimation to future studies. The log-likelihood function isℓ( J)= lnℒ(J) = ∑_ x∈𝒟(-H(x;J) - ln Z(J) )= ∑_x∈𝒟 -H(x;J) - Nln(∑_x∈Ω_ Xexp(-H(x;J)) ) ,where Ω_ X denotes the sample space of X, containing 2^d elements. The negative log-likelihood can be viewed as an energy function with state variables J, and we seek the ground state such that the energy is minimized. Clearly, the computational bottleneck of the log-likelihood function for large d is the evaluation of the second summation term, which is infeasible to be computed exactly. 
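For small d, by contrast, the partition function — and hence the exact log-likelihood — can still be evaluated by brute-force enumeration, which is useful for validating approximate learning schemes; a minimal illustrative sketch (not the implementation used here) is:

import itertools
import numpy as np

def exact_log_likelihood(J, data):
    # Exact log-likelihood of the {0,1}^d model p(x) = exp(x^T J x) / Z.
    # Only feasible for small d: the partition function sums over 2^d states.
    d = J.shape[0]
    states = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)
    energies = np.einsum('bi,ij,bj->b', states, J, states)   # x^T J x for every state
    log_Z = np.log(np.exp(energies).sum())
    data = np.asarray(data, dtype=float)
    return float(np.einsum('bi,ij,bj->b', data, J, data).sum() - data.shape[0] * log_Z)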
The gradient of ℓ(J) can be written as∂ℓ(J)/∂J= -∑_ x∈𝒟∂H(x;J)/∂J + N ∑_x∈Ω_ Xexp(-H(x;J))/Z(J)∂H(x;J)/∂J = -N(1/N∑_x∈𝒟∂H(x;J)/∂J - ∑_x∈Ω_ Xexp(-H(x;J))/Z(J)∂H(x;J)/∂J)= -N⟨∂H(x;J)/∂J⟩_data +N⟨∂H(x;J)/∂J⟩_model .The last line of Eq. (<ref>) is interpretable: the first “expectation" [More precisely, it is a sample mean approximation for the expectation with respect to the unknown distribution that generates the data.] is taken with respect to data, while the second expectation is with respect to the parametric model for a specified J. Since the maximum likelihood estimation for J seeks Eq. (<ref>) being zero, it follows that the “data" and “model" terms should be equal. This is intuitively expected: if the expectation estimated from data matches that of the distribution model, the data is indeed generated by the model and the corresponding J is “correct". Recall that H(x;J)=-xJx, the gradient ∂ℓ (J) / ∂J has the explicit form: ∂ℓ(J)/∂J= N⟨xx⟩_data -N ⟨xx⟩_model = ∑_x∈𝒟xx - N/Z(J)∑_x∈Ω_ Xxx·exp(-xJx) .Since the maximum likelihood solution seeks ∂ℓ(J)/∂J=0, the equation for J is 1/Z(J)∑_x∈Ω_ Xxx·exp(-xJx)=1/N∑_x∈𝒟xx .This equation suggests that the aforementioned two scenarios regarding the form of collected data can be treated in a unified manner. Specifically, if samples of X are collected, the summation in the right-hand side of Eq. (<ref>) can be evaluated using the data. If the mean and cross-correlation of X are specified (from empirical fragility and correlation models), the right-hand side of Eq. (<ref>) can be regarded as an expectation that is directly given.§.§ Gradient descent solutionsIdentifying parameters of the Ising model is an important topic of Boltzmann machine learning, where the Ising model is also known as the fully visible Boltzmann machine (VBM) <cit.>. We can use the gradient descent algorithm to find the approximate solution of Eq. (<ref>):J^(τ + 1) = J^(τ) - η∘(⟨xx⟩_data - ⟨xx⟩_p(x;J^(τ))) ,where η is the learning rate/step size, which can be adaptively tuned depending on the specific algorithmic implementation; ∘ means the entry-wise product. This updating equation can be extended to the general maximum entropy model described by Eq. (<ref>), such that: ℋ_i^(τ + 1)= ℋ_i^(τ) - η_ℋ(⟨ x_i⟩_data - ⟨ x_i⟩_p(x;ℋ^(τ),𝒥^(τ),𝒦^(τ),…)), 𝒥_ij^(τ + 1)= 𝒥_ij^(τ) - η_𝒥(⟨ x_ix_j⟩_data - ⟨ x_ix_j⟩_p(x;ℋ^(τ),𝒥^(τ),𝒦^(τ),…)) ,𝒦_ijk^(τ + 1)= 𝒦_ijk^(τ) - η_𝒦(⟨ x_ix_jx_k⟩_data - ⟨ x_ix_jx_k⟩_p(x;ℋ^(τ),𝒥^(τ),𝒦^(τ),…)) , where η_ℋ, η_𝒥, and η_𝒦 are learning rates. Following the pattern, one can write the updating equations for the fourth and higher-order terms. In Eq. (<ref>), as expected, the computational bottleneck is the evaluation of the expectation ⟨xx⟩_p(x;J^(τ)). For high-dimensional problems with large d, it is infeasible to exhaust the 2^d combinations. One solution is to use Monte Carlo simulation, i.e., using Markov Chain Monte Carlo techniques <cit.> to generate random samples from p(x;J^(τ)) and approximating ⟨xx⟩_p(x;J^(τ)) by samples. However, this process can be time consuming because the MCMC sampling may require a long chain to reach equilibrium. An alternative is to use the Contrastive Divergence (CD) learning <cit.>. The idea is to run MCMC for a few steps (typically only one step) and use those premature samples to approximate the expectation ⟨xx⟩_p(x;J^(τ)). 
The theoretical argument is to replace the gradient of the original Kullback-Leibler (KL) divergence between data and model[Notice that maximizing the likelihood of observing the data is equivalent to minimizing the KL divergence between the data and the model, because the marginal likelihood p(𝒟) is a constant independent of p( x; J).] by that of a Contrastive Divergence, expressed by <cit.>CD_n = KL(p_0 p_∞) - KL(p_n p_∞) ,where p_0 denotes the distribution of the data, p_n denotes the distribution associated with running the Markov chain for n steps, and p_∞ denotes the equilibrium distribution, i.e., the model. It is seen that p_∞ is cancelled out in the definition of CD, thus the gradient of Eq. (<ref>) does not involve evaluating p_∞. It follows that the updating equation of J in CD learning isJ^(τ + 1) = J^(τ) - η∘(⟨xx⟩_data - ⟨xx⟩_p_n) .Theoretically speaking, CD learning is a biased algorithm; however, empirical results suggest the bias is typically small <cit.>. It is worth mentioning that there are alternative approaches to CD learning, such as score matching <cit.>, maximum pseudolikelihood learning <cit.>, minimum velocity learning <cit.>, and minimum probability flow learning <cit.>. In practice, we found that CD learning works better for cases with weak pairwise correlation, where the estimation of the marginal distribution is more accurate. Deep learning methods can be potentially useful to accelerate the training of maximum entropy models. A recent study <cit.> showed that the deep learning method can be leveraged to efficiently sample from maximum entropy models even when the modes are well-separated. §.§ Mean-field approximationsAlthough Boltzmann learning with gradient descent algorithms can lead to accurate approximations of the maximum entropy model parameters, it is usually slow. There are many approximate solutions based on the mean-field theory, such as naive mean-field theory, independent-pair approximation, Sessak-Monasson approximation, and inversion of TAP equation <cit.>. These approximate methods can be used to generate warm starting points for optimization algorithms, but their accuracy depends largely on the network size and correlation structure <cit.>. §.§ An illustrative example In this section, we use a simple example to illustrate the maximum entropy modeling.We consider a network with 40 nodes subjected to the impact of hazards (Figure <ref>); each node has a binary state of {0(work),1(fail)}. The current knowledge/constraint is assumed to be the mean value and correlation matrix of the state vector, which is created by the method in Sec. <ref>. Therefore, the maximum entropy model is the Ising model expressed by Eq. (<ref>). Fig. <ref> shows the failure probabilities of nodes and their correlation coefficients.The gradient descent algorithm expressed by Eq. (<ref>) is used to estimate J. The starting point J^(0) is initialized randomly, and ⟨xx⟩_p(x;J^(τ)) is estimated by Gibbs sampling using 100,000 samples with a burn-in period of 20,000 samples. The gradient descent algorithm is iterated for 2,000 steps, and the learning rate is set to η=0.2. Fig. <ref> shows the identified J, and Fig. <ref> shows the errors in the covariance matrix reconstructed from the trained model. The trained maximum entropy model can be leveraged to investigate the collective behavior and global functionality of the network. A concrete example will be presented in a later section. 
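Putting the pieces of this section together, a condensed sketch of the moment-matching loop just described — a Gibbs sampler providing the model expectation of x x^T at each step, with the coupling matrix nudged so that the model moments move toward the target (data) moments; the parameter values mirror the settings above, all names are illustrative, and the pure-Python loops are slow for large systems — reads:

import numpy as np

rng = np.random.default_rng(0)

def gibbs_sample(J, n_samples, n_burn=20_000):
    # Gibbs sampler for p(x) = exp(x^T J x) / Z on {0,1}^d, with J symmetric.
    d = J.shape[0]
    x = rng.integers(0, 2, size=d).astype(float)
    out = np.empty((n_samples, d))
    for t in range(n_burn + n_samples):
        for i in range(d):
            # Change in x^T J x when x_i is switched from 0 to 1, given the rest.
            field = J[i, i] + 2.0 * (J[i] @ x - J[i, i] * x[i])
            x[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-field)))
        if t >= n_burn:
            out[t - n_burn] = x
    return out

def fit_ising(target_xx, d, n_iter=2_000, lr=0.2, n_samples=100_000):
    # Drive the model second moments <x x^T> toward the target moments.
    J = 0.01 * rng.standard_normal((d, d))
    J = 0.5 * (J + J.T)                    # keep the coupling matrix symmetric
    for _ in range(n_iter):
        s = gibbs_sample(J, n_samples)
        model_xx = s.T @ s / n_samples
        J += lr * (target_xx - model_xx)   # shrink the data-model moment gap
    return J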
Finally, the parameters of the Ising model have clear statistical physics interpretations, suggesting future research directions of bringing theories of phase transitions and critical phenomena to the regional analysis of civil engineering systems.§ DICHOTOMIZED GAUSSIAN DISTRIBUTION AS A SURROGATE FOR THE ISING MODELFor large-scale infrastructure networks with numerous components, the high-dimensional Boltzmann learning becomes computationally infeasible, and it is meaningful to seek efficient approximations for maximum entropy models. The Dichotomized Gaussian distribution <cit.> is an attractive candidate for a surrogate of the Ising model. This model has been successfully applied to investigate the pairwise correlations and collective behaviors in neural populations <cit.>.§.§ The dichotomized Gaussian model The main idea of the dichotomized Gaussian approximation is to treat the binary state vector as a filtered continuous Gaussian vector. The filtering is deterministic and fixed; therefore, the model will be uniquely determined if the mean and covariance matrix of the latent Gaussian vector are specified. Specifically, assume that the binary random state variables X∈{0,1}^d are generated from latent Gaussian variables Z∈^̊d, such thatX = 1_≥0(Z), Z∼𝒩(γ,Λ) ,where 1_≥0 is a vector indicator function that maps each component Z_i≥0 to 1 and Z_i<0 to 0. Assigning unit variances for the latent Gaussian distribution <cit.>, i.e., Λ_ii=1, i=1,2,...,d, the mean μ and covariance matrix Σ of X can be expressed asμ_i = Φ(γ_i) , Σ_ii= Φ(γ_i)(1 - Φ(γ_i)) = Φ(γ_i)Φ(-γ_i) Σ_ij=Ψ(γ_i,γ_j;Λ_ij) ,for i≠ j ,where Ψ(x,y;λ) = Φ(x,y;λ) - Φ(x)Φ(y); Φ(·) is the cumulative distribution function (CDF) for the standard univariate Gaussian distribution, and Φ(·,·;λ) denotes the joint CDF of the bivariate Gaussian with unit variances and a pairwise correlation λ. The parameters μ and Σ are observable, while γ and Λ are latent. Therefore, we need to use Eq. (<ref>) to find γ and Λ given μ and Σ. To determine γ, we simply use the inverse function γ_i = Φ^-1(μ_i). To determine Λ_ij, we need to numerically solve the one-dimensional equation Ψ(γ_i,γ_j;Λ_ij) = Σ_ij. This is straightforward because the function Ψ(γ_i,γ_j;Λ_ij) is monotonic in Λ_ij.Provided with (γ,Γ), the joint probability mass function of the dichotomized Gaussian distribution isq(x)=1/(2 π)^d / 2|Λ|^1 / 2∫_a(x_1)^b(x_1)⋯∫_a(x_d)^b(x_d)exp(-1/2(z-γ)Λ^-1(z-γ)) dz ,where a_i=0, b_i=∞ if x_i=1; a_i=-∞, b_i=0 if x_i=0. This PMF expression is listed here for completeness. In fact, the PMF expression is seldom used in most applications, because for large systems, a generic performance metric ⟨ g(X) ⟩ is typicallyapproximated by Monte Carlo simulation. The random samples of q( x) can be easily generated via Eq. (<ref>), without resorting to MCMC algorithms. To summarize, the computational advantage of the dichotomized Gaussian distribution over the Ising model lies both in parameter estimation and random sampling; the cost is being a near maximum entropy model. §.§ Numerical verification in covariance matrixWe reproduce the example in Sec. <ref> using the dichotomized Gaussian model and compute the covariance matrix for a comparison. As shown in Fig. <ref>, the identification error in the covariance matrix is negligible; this result is expected because the mapping between the covariance matrices of the latent Gaussian and observable binary vectors are relatively simple. 
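A compact sketch of this moment-matching step and of the subsequent independent sampling — using SciPy's bivariate normal CDF for Ψ and a simple root bracket for each pairwise latent correlation; it assumes the requested covariances are attainable for the given marginals, and all names are illustrative — is:

import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

def fit_dichotomized_gaussian(mu, Sigma):
    # Latent (gamma, Lambda) such that X = 1_{Z >= 0}, Z ~ N(gamma, Lambda)
    # reproduces the target mean mu and covariance Sigma (unit latent variances).
    d = mu.size
    gamma = norm.ppf(mu)
    Lam = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            def psi_gap(lam):
                cdf2 = multivariate_normal.cdf([gamma[i], gamma[j]],
                                               mean=[0.0, 0.0],
                                               cov=[[1.0, lam], [lam, 1.0]])
                return cdf2 - norm.cdf(gamma[i]) * norm.cdf(gamma[j]) - Sigma[i, j]
            Lam[i, j] = Lam[j, i] = brentq(psi_gap, -0.999, 0.999)
    return gamma, Lam

def sample_dg(gamma, Lam, n, rng=None):
    # Independent samples: threshold latent Gaussian draws at zero.
    rng = np.random.default_rng() if rng is None else rng
    return (rng.multivariate_normal(gamma, Lam, size=n) >= 0).astype(int)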
The training time for the dichotomized Gaussian model is around 1.5 seconds, while the Boltzmann learning for the Ising model takes around 5.5 minutes.§.§ Numerical verification in entropySince the Ising model maximizes the entropy given the first- and second-order cross-moment constraints, we need to compare the entropy between the Ising and dichotomized Gaussian models. We let the number of components, or size, of the system vary from 2 to 40 and compare the entropy obtained from the Ising and dichotomized Gaussian models. The technical details on estimating the entropy using variance reduction methods are introduced in <ref>. The failure probabilities and their pairwise correlations at different sizes are adopted from Fig. <ref>, such that the system with size j consists of nodes {1,2,...,j} in Fig. <ref>.As shown in Fig. <ref>, the entropies of the two models are close. Theoretically speaking, the entropy of the Ising model should always be larger; this condition can be breached in practice due to parameter identification and sampling variabilities. The entropy difference is expected to be small if the correlation is weak. For a simple illustration, we consider the extreme case of no correlation. The Ising model reduces top(x) = 1/Zexp(∑_i=1^dℋ_ix_i) = 1/Z∏_i=1^dexp(ℋ_ix_i) = ∏_i=1^dp_i(x_i) ,where p_i(1) is the failure probability of node i. The dichotomized Gaussian model reduces toq(x) =1/(2 π)^d / 2|I|^1 / 2∫_a(x_1)^b(x_1)⋯∫_a(x_d)^b(x_d)exp(-1/2(z-γ)I^-1(z-γ)) dz = ∏_i=1^d1/√(2π)∫_a(x_i)^b(x_i)exp(-1/2( z_i - γ_i)^2)dz_i = ∏_i=1^d p_i(x_i) . In this context, p(x) ≡ q(x) are the independent multivariate Bernoulli distribution, and the entropy achieves the maximum (see Fig. <ref>), i.e., for any joint distribution p(x_1,x_2,...,x_d), we must haveH( X_1,X_2,…,X_d) = -∑_x_1∈{ 0,1 }∑_x_2∈{ 0,1 }⋯∑_x_d∈{ 0,1 } p( x_1,x_2,…,x_d) logp( x_1,x_2,…,x_d) ≤ - ∑_i=1^d p_i(x_i) log p_i(x_i) .§ APPLICATION: SEISMIC COLLECTIVE BEHAVIORS OF THE ROAD NETWORK IN SAN FRANCISCO In this section, we will illustrate an important application of the maximum entropy modeling–-the assessment of global functionalities and collective behaviors of infrastructure systems. We analyze the post-earthquake behaviors of road networks in San Francisco. For illustrative purposes, an empirical seismic hazard model<cit.> is adopted to generate the mean values of road failures and their pairwise correlations. Since this road network has 8,694 nodes and 26,964 links, a dichotomized Gaussian distribution is established as a surrogate for the underlying maximum entropy model, i.e., the Ising model. The global functionalities and collective behaviors of the road network are analyzed using random samples generated from the near-maximum entropy model.§.§ Seismic hazard model §.§.§ First-order constraintWe adopt the seismic hazard model developed in <cit.> to generate the first and second-order cross-moments to build the near maximum entropy model. Specifically, assuming a log-normal fragility curve for each of the road components, the mean of X_i is expressed byX_i=X_i=1=Φ(D̅_i-C̅_i/√(σ_D_i^2+σ_C_i^2)) ,where Φ(·) is the cumulative distribution function (CDF) of the standard Gaussian distribution; D̅_i is the average seismic demand in terms of peak ground acceleration (PGA) and C̅_i average seismic capacity; σ_D_i^2 and σ_C_i^2 are variances of the random demand and capacity, respectively. 
To model the average seismic demand D̅_i, we adopt the empirical attenuation relation in <cit.>, expressed byD̅_i=-0.5265+(-0.3303+0.0599(M_w-4.5)) ln(r_i^2+1.35^2)-0.0115 √(r_i^2+1.35^2) ,where M_w is the earthquake magnitude; r_i is the distance between the epicenter and the ith road component in kilometers. Adopting the parameter values of <cit.>, we assume σ_D_i^2=0.32, σ_C_i^2=0.48, and C̅_i=ln(0.85) for all components.§.§.§ Second-order constraintThe correlation coefficient between X_i and X_j is modeled by <cit.>ρ_X_iX_j=σ_D^2 ρ_D_i D_j+σ_C_iσ_C_jδ_i j/√(σ_D^2+σ_C_i^2)√(σ_D^2+σ_C_j^2) ,where δ_ij is the Kronecker delta, and ρ_D_i D_j represents the correlation coefficient between the PGAs of two sites, expressed by <cit.>ρ_D_i D_j=σ_η^2/σ_D^2+ρ_ε_i ε_j(Δ_i j) σ_ε^2/σ_D^2 ,where σ_η^2 is the variance of the random inter-event residual, σ_ε^2 is the variance of the random intra-event residual, and σ_D^2=σ_η^2+σ_ε^2; we set σ_η^2=0.07 and σ_ε^2=0.25. Finally, ρ_ε_i ε_j(Δ_i j) is modeled by the intra-event spatial correlation model <cit.>, expressed asρ_ϵ_iϵ_j (Δ_ij) = exp (-0.27Δ_ij^0.40) ,where Δ_ij is the distance between site i and j.Fig. <ref> shows the marginal failure probabilities of the road segments in San Francisco. The epicenter is chosen as 37.80^∘N, 122.27^∘W. The 26,964×26,964 correlation matrix is obtained from Eq. (<ref>). §.§ Origin-Destination pairsWe construct the hourly Origin-Destination (OD) pairs based on San Francisco County Transportation Authority's “TNC Today” report. The TNC data is believed to reflect Uber/Lyft pick-ups and drop-offs in the city by Traffic Analysis Zone (TAZ) <cit.>. Fig. <ref> shows the average weekly accumulated OD demands according to the TAZ-level partition. The data can only reflect the OD demands at the coarse-grained TAZ level. As shown in Algorithm <ref>, we derive the OD matrix through iterative matrix adjustment to make the OD demands consistent with the TAZ level values. In the algorithm, the target O_i or target D_i indicates the Origin or Destination demand at the ith TAZ. §.§ Collective behaviors in trip completion rateWe investigate the post-earthquake road network functionality in terms of the trip completion rate given the commuting pattern illustrated by Fig. <ref>. Specifically, for each OD pair, we examine if there is at least one route; this event can be represented by a Bernoulli variable B_i∈{0,1}. The trip completion rate is a random variable defined as 1/n_od∑_iB_i, where n_od is the number of OD pairs. For specific earthquake magnitudes, we investigate the joint distribution of the trip completion rate and the road removal rate to generateFig. <ref>. The figure also shows the scenario with no correlation in road failures, highlighted by the red circles. The motivation of Fig. <ref> is to project topology (road removal rate), functionality (trip completion rate), and hazard intensity (earthquake magnitude) into the same figure to investigate the collective behaviors of the topological and functional changes of a road network influenced by earthquakes. It is observed that the product outcome space of road removal rate and trip completion rate exhibits two phases and the increase of earthquake magnitude stimulates the transition from one phase to another; while if there is no correlation in road failures, there is only one phase. 
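A minimal sketch of how one Monte Carlo draw from the fitted model is turned into a (road removal rate, trip completion rate) pair — assuming a NetworkX road graph, a mapping from each edge to its component index, a binary failure sample x, and the OD pairs derived above; all names are illustrative — is given below.

import networkx as nx
import numpy as np

def removal_and_completion(G, edge_component, x, od_pairs):
    # One Monte Carlo draw: fraction of failed road links and fraction of
    # OD pairs that still have at least one connecting route.
    failed_edges = [e for e, k in edge_component.items() if x[k] == 1]
    removal_rate = len(failed_edges) / G.number_of_edges()

    H = G.copy()
    H.remove_edges_from(failed_edges)
    completed = sum(nx.has_path(H, o, d) for o, d in od_pairs)
    return removal_rate, completed / len(od_pairs)

# Repeating this over many samples x ~ q(x) (e.g. drawn as in the sampling
# sketch above) gives the joint scatter of road removal rate versus trip
# completion rate.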
It follows that the tendency of road components to “fail" or “work" collectively serves as a double-edged sword: for relatively small hazard intensity, there is a mild chance for system-level malfunction; for relatively large hazard intensity, there is also a mild chance of global functioning. Finally, it is worth mentioning that Fig. <ref> shares some features of a percolation analysis <cit.>, but it offers more information and diverges from percolation in many aspects–the road removal rate in Fig. <ref> is not a “free" or control parameter, and the goal is to discover probabilistic patterns in functionality and topology, rather than to identify the critical road removal rate to trigger a phase transition or the distribution of functionality metrics at the criticality. § ADDITIONAL REMARKSThe maximum entropy model can be extended to involve temporal evolution <cit.>, such as the Ising model with Glauber dynamics <cit.>. A time-dependent maximum entropy model can be utilized to study how a complex infrastructure evolves and recovers after a natural hazard. Furthermore, the coupling effects are not necessarily symmetric <cit.>. This asymmetric coupling becomes important if we model functional dependencies between infrastructures. For example, a hospital may rely on a power transmission infrastructure to operate, but not vice versa. Therefore, it is promising to extend the maximum entropy modeling framework to investigate the time-dependent collective behaviors of civil infrastructure systems with complex asymmetric functional dependencies. § CONCLUSIONSIn this study, we present maximum entropy modeling for the regional hazard responses of civil infrastructure systems. For the special but typical case where the performance states of individual structures are binary, and the mean performance state values and their pairwise correlations are given, the maximum entropy model reduces to the Ising model in statistical physics. For this special case, we propose using a dichotomized Gaussian distribution as a near-maximum entropy surrogate model. This surrogate model has satisfactory scalability, which is then applied to study the post-earthquake functionality of the drivable road network in San Francisco, consisting of 8,694 nodes and 26,964 links. We have observed that the joint outcome space of road removal ratio and trip completion rate exhibits two patterns, and the increase of earthquake magnitude stimulates the transition from one to another; while if there is no correlation in road failures, there is only one trivial pattern induced by the Central Limit Theorem. This paper focuses on the probabilistic modeling of hazard responses of infrastructure systems, which is the foundation for understanding a community's post-hazard behaviors and resilience. For future research, we will adapt and adopt the maximum entropy model for various natural hazards and investigate the temporal evolution of the collective behaviors of infrastructure systems.§ CODE AND DATA ACQUISITIONAll codes and data are made available upon reasonable requests. § CREDIT AUTHORSHIP CONTRIBUTION STATEMENT Xiaolei Chu: Writing - original draft, Conceptualization, Formal analysis, Investigation, Methodology, Validation, Visualization. Ziqi Wang: Conceptualization, Supervision, Writing - review & editing, Funding acquisition. § ACKNOWLEDGMENTSWe thank Dr. Jianhua Xian for the help in estimating the entropy using a variance reduction method. Thanks also go to Dr. 
Dongkyu Lee for providing the seismic hazard model.§ DECLARATION OF COMPETING INTERESTThe authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.§ ENTROPY ESTIMATION§.§ The Ising modelRecall that the PMF of the Ising model is defined as p(x;J)=exp( xJx)/Z(J). Typically, we can estimate an expectation, such as 𝔼_p(x;J)[f(X)], without knowing the partition function Z(J), but the entropy explicitly involvesthe partition function. Specifically, the entropy for the Ising model is𝔼_p(x;J)[ -log p(X;J) ] = 𝔼_p(x;J)[ -XJX] + Z(J) ,where the evaluation of the partition function is the computational bottleneck. We adapt the sequential Monte Carlo <cit.> to this problem by defining Z_n = ∫_Ω_xexp(xJx/T_n)dx, where T_n is the “temperature,“ used as an annealing parameter at the nth iteration. It follows that Z_n can be rewriten as Z_n= ∑_ xexp(xJx/T_n) = ∑_ xexp( xJx/T_n)/exp(xJx/T_n-1)/Z_n-1exp(xJx/T_n-1)/Z_n-1 .Then we haveZ_n/Z_n-1= ∑_ xexp( xJx/T_n)/exp(xJx/T_n-1)exp(xJx/T_n-1)/Z_n-1 = 𝔼_p(x;J/T_n-1)[ exp( xJx/T_n)/exp(xJx/T_n-1)] .We start from a high temperature T_1, such that the distribution is dispersed, and stop at T_N_T=1 to restore the original partition function. We choose T_n=1.6^20/N_T(N_T-n) with N_T=100 to tune the annealing process, and the partition function is estimated asZ(J) = Z_N_T≈ Z_0∏_n=1^N_TZ_n/Z_n-1 ,where Z_0 is chosen as the partition function of a multivariate independent Bernoulli distribution with marginals p_i(1)=p_i(0)=0.5 and Z_0=2^d. d is the dimension of x.Finally, the entropy is estimated by𝔼_p(x;J)[ -log p(X;J) ] = ∑_ x(-xJx) p(x;J) + Z(J) ≈ -∑_i=1^N_0x_iJx_i + Z_0∏_n=1^N_TZ_n/Z_n-1where x_i are samples from the Ising model and N_0=10^5 is used in this study.§.§ Dichotomized Gaussian modelRecall that the PMF of the dichotomized Gaussian distribution isq(x)=1/(2 π)^d / 2|Λ|^1 / 2∫_a(x_1)^b(x_1)⋯∫_a(x_d)^b(x_d)exp(-1/2(z-γ)Λ^-1(z-γ)) dz ,where a_i=0, b_i=∞ if x_i=1; a_i=-∞, b_i=0 if x_i=0. In this case, we need to use Monte Carlo simulation to estimate both q(x) and the entropy 𝔼_q(x)[ -log q(X) ]. The samples to estimate q(x) are generated by the Gaussian distribution 𝒩(γ,Λ), while the samples to estimate 𝔼_q(x)[ -log q(X) ] are from Eq. (<ref>) and Eq. (<ref>). For q(x), we haveq(x)=∫_z∈Ω_x f(z;γ,Λ)dz=∫_z∈ℝ^d1(z∈Ω_x) f(z;γ,Λ) dz ,where Ω_x=∏_i=1^d[a(x_i),b(x_i)] is the integral domain; 1(·) is the indicator function;f(y;γ,Λ) is the Gaussian density in Eq. (<ref>). Eq. (<ref>) is in the classic form of reliability analysis and rare-event simulation, so the estimation of q(x) can be handled by rare event simulation techniques <cit.>. We do not repeat the details here. The estimation of the entropy 𝔼_q(x)[ -log q(X) ] is performed by direct Monte Carlo simulation.
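To make the annealing scheme above concrete, the following Python sketch estimates the (log) partition function of a small Ising-type model on {0,1}^d via the telescoping product of ratios Z_n/Z_{n-1}, with samples drawn at each temperature by a single-flip Metropolis sampler. The sampler, the number of samples per temperature, and the toy coupling matrix are illustrative assumptions; the temperature schedule follows the one quoted above.

import numpy as np

rng = np.random.default_rng(0)

def energy(x, J):
    # "Energy" x^T J x used in the Ising-type PMF p(x; J) ∝ exp(x^T J x), with x ∈ {0,1}^d
    return x @ J @ x

def metropolis_samples(J, temp, n_samples, n_burn=200):
    # Single-flip Metropolis sampler targeting p(x; J / temp)
    d = J.shape[0]
    x = rng.integers(0, 2, size=d)
    out = []
    for step in range(n_burn + n_samples):
        i = rng.integers(d)
        x_new = x.copy()
        x_new[i] ^= 1
        if np.log(rng.random()) < (energy(x_new, J) - energy(x, J)) / temp:
            x = x_new
        if step >= n_burn:
            out.append(x.copy())
    return np.array(out)

def log_partition_estimate(J, n_temps=100, n_samples=500):
    # Telescoping estimate: log Z ≈ log Z_0 + Σ_n log(Z_n / Z_{n-1}), T_n = 1.6^{(20/N_T)(N_T - n)}
    d = J.shape[0]
    temps = [1.6 ** (20.0 / n_temps * (n_temps - n)) for n in range(n_temps + 1)]
    log_z = d * np.log(2.0)                     # Z_0 = 2^d (independent fair Bernoullis)
    for n in range(1, n_temps + 1):
        xs = metropolis_samples(J, temps[n - 1], n_samples)
        e = np.array([energy(x, J) for x in xs])
        w = e / temps[n] - e / temps[n - 1]     # log importance weights
        log_z += np.log(np.mean(np.exp(w - w.max()))) + w.max()   # stable log-mean-exp
    return log_z

# Toy usage on a random symmetric 8-node coupling matrix (illustration only).
d = 8
J = 0.1 * rng.standard_normal((d, d))
J = (J + J.T) / 2
print("estimated log partition function:", log_partition_estimate(J, n_temps=20, n_samples=200))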
ArcheType: A Novel Framework for Open-Source Column Type Annotation using Large Language Models Juliana Freire Accepted XXX. Received YYY; in original form ZZZ =============================================================================================== In this paper, we propose a novel negotiation dialogue agent designed for the online marketplace. Our agent is integrative in nature i.e, it possesses the capability to negotiate on price as well as other factors, such as the addition or removal of items from a deal bundle, thereby offering a more flexible and comprehensive negotiation experience. We create a new dataset called Integrative Negotiation Dataset (IND) to enable this functionality. For this dataset creation, we introduce a new semi-automated data creation method, which combines defining negotiation intents, actions, and intent-action simulation between users and the agent to generate potential dialogue flows. Finally, the prompting of GPT-J, a state-of-the-art language model, is done to generate dialogues for a given intent, with a human-in-the-loop process for post-editing and refining minor errors to ensure high data quality. We employ a set of novel rewards, specifically tailored for the negotiation task to train our Negotiation Agent, termed as the Integrative Negotiation Agent (INA). These rewards incentivize the chatbot to learn effective negotiation strategies that can adapt to various contextual requirements and price proposals. By leveraging the IND, we train our model and conduct experiments to evaluate the effectiveness of our reward-based dialogue system for negotiation. Our results demonstrate that the proposed approach and reward system significantly enhance the agent's negotiation capabilities. The INA successfully engages in integrative negotiations, displaying the ability to dynamically adjust prices and negotiate the inclusion or exclusion of items in a bundle deal[Codes and dataset available at <https://github.com/zishan-ai/neg> and <https://www.iitp.ac.in/ ai-nlp-ml/resources.html#INA>].§ INTRODUCTIONIn an online marketplace, customers and sellers engage in discussions involving product inquiry and bargaining before reaching a common consensus <cit.>. In such a setting, negotiation between the customer and the seller is a core facet of discourse that ultimately decides the profit of sale and customer satisfaction. Negotiation on the price of a product is very common, however, customers have an open-ended approach to negotiation often also involving negotiation on certain aspects related to the deal. For example, while buying a chair the customer may negotiate a deal without the cushions, or even negotiate between delivery and in-store pick-up.As a result, a dialogue system for negotiation in an online marketplace should be capable of engaging in negotiation on different aspects such as price, product, and delivery. Additionally, such a system should also be capable of responding to product inquiries with relevant and knowledge-grounded information.A systemic survey conducted by <cit.> discussed various datasets, evaluation metrics, and methodologies in common literature. From this, it can be implied that bargain in the marketplace typically follows a "Distributive" strategy where each party involved aims to maximize their gain rather than mutually benefiting outcomes. This strategy follows a win-lose model, where one party can gain only if the other party loses. 
The CraigslistBargains dataset <cit.> is the most prominent dataset in the price bargain domain with other datasets having less than 1,000 dialogues. This dataset contains dialogues between two human agents assigned the role of customer and seller negotiating over a product on Craigslist, the strategy used in the dialogues are largely distributive in nature. In contrast to a distributive approach, an "Integrative" approach to negotiation aims to reach a win-win situation by understanding the other party's needs and reaching a mutually satisfying consensus. It has been shown that an integrative approach to negotiation in retail e-commerce is more effective and leads to better customer satisfaction, than distributive approaches <cit.> that typically utilize agents that negotiate only on price. It is common in online marketplaces for products to have several items, such as a "A chair and its cushion", a negotiation agent that is capable of satisfying customers that only want select items from the product such as customers that only want a chair or customers that only want a cushion is beneficial since the agent better understands customer requirements and may lead to win-win outcomes. Hence, treating a product as a "bundle" of items that customers can choose is a more integrative approach than treating the product as a single entity. To incorporate this integrative approach, in this paper, we propose a novel dialogue system for negotiation in the online marketplace domain, which can respond to customers' inquiries and engage in negotiation with the customer. Unlike existing systems <cit.> that primarily focus on negotiation over the price of a product, our system follows a more integrative approach wherein negotiation involves different aspects such as adding or removing products from the aforementioned "bundle" of products, the price of the bundle, and the delivery of the product. Datasets for negotiation such as the CraigslistBargains dataset do not explicitly model the product as a bundle of smaller items. Hence, we construct a dataset (IND) consisting of integrative negotiation dialogues where the deal is modeled as a bundle of products. To avoid complete manual data creation, we design prompts for the GPT-J model <cit.> to generate integrative negotiation utterances. To ensure the dataset’s quality, we use humans in the loop for minor edits and filtering of the generated dialogues.Using the constructed dataset, we build an integrative negotiation-powered dialogue agent (INA) using a supervised learning (SL) + reinforcement learning (RL) approach. To train our system, we leverage a novel reward function andmaximize it using PPO loss <cit.> to ensure aspects of negotiation consistency, negotiation power, and intent consistency. As per our knowledge, this is the first attempt to build an integrative-negotiation based dialogue system. Therefore we present a pioneering effort in developing an integrative-negotiation-based dialogue system, making several key contributions. First, we introduce a new task of integrative negotiation, expanding the scope of dialogue system research. Second, we propose an efficient approach for automatically generating data with minimal manual intervention, addressing the challenge of data scarcity in certain domains. This contribution will drive the development of more robust dialogue systems. Third, we create a unique dataset of integrative negotiation dialogues. 
Finally, we leverage the strengths of both supervised and reinforcement learning to construct a powerful dialogue system empowered by integrative negotiation strategies.§ RELATED WORK <cit.> studied the effects of various intra-personal processes, such as mood, and interpersonal processes, such as emotion, on negotiation outcomes. They defined integrative negotiation as "the extent to which the negotiated outcome satisfies the interests of both parties in a way that the outcome cannot be improved upon without hurting one or more of the parties involved". They also reported that the studies on the effectiveness of computer-mediated negotiation with respect to face-to-face negotiation give mixed results. <cit.> stated the importance of user adaptation in negotiation dialogue systems by performing experiments using different policies on simulated users in a newly designed negotiation dialogue game. <cit.> proposes a semi-automatic negotiation wherein a dialogue manager decides the intent after which a natural language generator presents conversational strategies to a human expert that writes the final utterance. <cit.> prepares a dataset and proposes end-to-end dialogue systems for "multi-issue bargaining". In this type of bargaining, two agents are presented with a set of items and asked to assign each item to one agent, each agent is also given a value function to decide the value of an item. <cit.> prepares the CraiglistBargains dataset where two human agents negotiate over the price of a product listed on Craigslist, further, they decouple negotiation strategy and dialogue generation by proposing a dialogue manager to decide the intent of the next utterance and a generator that uses the intent to generate the utterance. Following this work, <cit.> proposes a framework to integrate "Theory of mind" <cit.> for inferring personality types to enhance negotiation dialogues.Unlike these previous works, our proposed negotiation agent (INA) is capable of doing integrative negotiation. Our agent is not only capable of negotiation with respect to the price of an item but can also modify the deal to better suit the customer's preference. Similarly, our agent can also handle the customization of a deal proposed by the customer and decide on accepting or rejecting the deal. These capabilities are currently absent in any negotiation agent.§ DATASET CREATIONWe construct the IND dataset for the task of integrative negotiation. To save on human effort and resources, we come up with a novel mechanism based on prompting a large language model for dataset creation. We keep human annotators in the loop only for making minor edits and filtering the automatically generated dialogues to ensure the quality of the conversations. The overall process consists of creating a skeleton of dialogues by dynamically deciding the correct intent for any arbitrary conversation. Our overall dataset creation process consists of 5 steps: (i). Background Data Creation, (ii). Intent Definition, (iii). Dialogue Flow Generation, (iv). Prompting for Dialogue Generation, and (v). Data Correction.§.§ Background Data Creation Although our method can be adapted to any product negotiation, we mainly focus on a list of 10 different electronic items: (i). Air Conditioning, (ii). Television, (iii). Refrigerator, (iv). Oven, (v). Washing Machine, (vi). Printer, (vii). Smart Phone, (viii). Laptop, (ix). Tablet, and (x). Camera. Along with these products, the deal bundle consists of a set of accessories related to the product. 
Therefore, our background database consists of the following information, such as Product Name, Product Description, Product Features, Price, Accessory List, and Accessory Description.§.§ Intent DefinitionIn order to build a robust negotiation system it is vital to define intents that can cover a diverse range of scenarios during negotiation. For an integrative negotiation agent, the scenario in the scope of the agent is not just price negotiation, but also item-level negotiation in the given bundle. To cover these properties, we come up with the following intents[Example utterances for each intent provided in Table <ref> of the appendix.]: * Greet: The utterances with general greetings like welcome and thank you come under this category.* Ask: This intent is triggered when a user explicitly asks for information about an item or the ongoing negotiation.* Inform: The agent may use the 'inform' intent to share detailed information about the products or services involved in the negotiation.* Ask-Clarification: This intent captures the user's intention to seek further explanation or clarification regarding certain aspects of the negotiation or the overall deal according to the current negotiation state.* Negotiate-Price-Increase: This intent indicates that the agent is seeking to increase the pricing terms of a product or service during the negotiation process.* Negotiate-Price-Decrease: This intent indicates that the agent is seeking to decrease the pricing terms of a product or service during the negotiation process.* Negotiate-Price-NoChange: This is an intent by the agent in a negotiation system indicating the system's intention to propose or assert that the price of a product or service should remain unchanged during the negotiation process. This is ideally done by highlighting the value and fairness of the current deal.* Negotiate-Add-X: This intent by the agent or user refers to the intention to propose or suggest the addition of a specific item or feature to enhance the value of a product or service during the negotiation process. This may or may not lead to an increase in the price of the deal.* Negotiate-Remove-X: This intent by the agent or user in refers to the intention to propose or suggest the removal of a specific item or feature from the deal in the negotiation process. This may or may not lead to a decrease in the price of the deal.* Accept:This refers to the agent or user's intention to agree or accept a proposal, offer, or condition reached during the negotiation process.* Reject:This refers to the agent or user's intention to agree or reject a proposal, offer, or condition reached during the negotiation process.The above intents can occur either individually or in combination with other intents (e.g.: Greet-Ask).§.§ Dialogue Flow Generation Our dialogue flow generator module assumes that the dialogue flow (intent sequence) during negotiation can be random. However, we also put some obvious constraints on this dataset-generation process. One simple constraint is that the conversation would be initiated by the customer with a greet intent. This greet intent could be accompanied by a request for clarification or one of the `negotiate' intents for the customer. The agent can respond by the inform intent or one of the agent `negotiate' intents.For all the deal bundles, wemaintain negotiation details of the ongoing deal with the customer, which consist of: (i). Minimum Seller price, (ii). Current Seller price, (iv). Tolerance value (tol) and (iii). Current Customer price. 
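As an illustration of the two records described above, the sketch below shows one possible representation of a background-database entry and of the per-deal negotiation details; all field values are invented placeholders.

# Hypothetical background-database entry for one product bundle (placeholder values).
background_entry = {
    "product_name": "Smart Phone X",
    "product_description": "6.1-inch smartphone with dual camera.",
    "product_features": ["128 GB storage", "5G", "dual SIM"],
    "price": 700,
    "accessory_list": ["charger", "protective case"],
    "accessory_description": {"charger": "30 W fast charger",
                              "protective case": "shock-absorbing case"},
}

# Negotiation details maintained for an ongoing deal (placeholder values).
negotiation_state = {
    "min_seller_price": 620,
    "current_seller_price": 700,
    "current_customer_price": 550,
    "tolerance": 0.05,   # tol
}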
To enforce the integrative nature of our agent, we limit only price-based negotiations to d turns after which the `Negotiate-Add-X' or `Negotiate-Remove-X' intents would take over. To propose a price for the next turn, we assume that a decay in price difference (increment for customer and decrement for seller) over dialogue turns. This is in line with <cit.> where a similar function is used to model the price negotiation between the customer and seller. Equations <ref> and <ref> are used for the computation of the proposed price by customer (P_b) or seller (P_s) at dialogue turn t. In the equations, k is a constant to control the rate of price change from one turn to the next. If it k is larger there will higher rate of concession, at a low value the rate of concession provided by the seller is low. For our setting, we have assumed a higher k value for the seller and a lower k for the customer, considering the customer is strict with their budget.Ps_t = Pb_t-1 + (Ps_t-1 - Pb_t-1)e^-kt Pb_t = Ps_t-1 - (Ps_t-1 - Pb_t-1)e^-kt The seller will choose intent `Accept' when the customer offered price is less than or equal to the amount Ps_t - tol*Ps_t. The customer will choose intent `Reject' when the conversation has crossed the negotiation deadline, and the seller is no more ready to lower the bundle price. The dialogue flow terminates with the acknowledgment of `accept' intent or the `reject' intent. §.§ Prompting for Dialogue Generation We design few-shot prompts <cit.>[Example prompts provided in Section <ref> of the Appendix] for each intent, with around four shots for each prompt (due to the token limit of 2,048 in GPT-J). Each shot contains three parts, a description of the task, a summary of the relevant information from the dialogue, and an utterance following the intent, all in a natural language format. The summary of the relevant information is designed considering the intent flow of the previous utterances of the dialogue. The description of the task is the sentences in the prompt that explains the situation and the goal of the intent, for instance, the task description for the "Acknowledge acceptance" intent is "A customer has agreed to purchase a product from a seller, the seller wants to thank the customer and proceed with the transaction". The utterance following the intent is a manually designed utterance following the task description and the information summary of the shot.The flow generation module creates an ordered list of intents along with relevant information for each intent, for instance, for the intent "Negotiate-Add-X" the item to be added is mentioned, and for "Negotiate-Price-Decrease" the price to be proposed is mentioned. Our algorithm uses the list created by the flow generation module to create a shot that is augmented to the prompt of the respective intent, this prompt is then passed to the GPT-J model to produce the utterance.§.§ Data CorrectionTo ensure the quality of the automatically generated dataset, we implemented manual correction and filtration steps. We engaged three human experts who possess post-graduate qualifications and have two years of experience in the field. Their instructions were to make edits to the generated dialogues in order to ensure grounding in the provided background database, intent, action, and negotiation flow. Additionally, any utterances produced by the agent that referred to its own experiences or feelings, pretending to be human, were to be rephrased or removed (to maintain authenticity). 
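To make the price dynamics used by the flow generator above concrete, the following sketch simulates the alternating offers given by the two price-update rules. The k values and the reading of the acceptance rule (the buyer's offer coming within a tol fraction of the seller's current price) are assumptions chosen for illustration, as are the opening prices in the example call.

import math

def seller_offer(ps_prev, pb_prev, t, k):
    # Ps_t = Pb_{t-1} + (Ps_{t-1} - Pb_{t-1}) e^{-kt}: a larger k gives a faster concession
    return pb_prev + (ps_prev - pb_prev) * math.exp(-k * t)

def buyer_offer(ps_prev, pb_prev, t, k):
    # Pb_t = Ps_{t-1} - (Ps_{t-1} - Pb_{t-1}) e^{-kt}
    return ps_prev - (ps_prev - pb_prev) * math.exp(-k * t)

def simulate_price_flow(ps, pb, tol, k_seller=0.4, k_buyer=0.1, max_turns=10):
    # Toy alternating-offer flow; terminates with Accept once the buyer's offer
    # reaches Ps_t - tol*Ps_t (assumed reading of the acceptance rule), else Reject.
    for t in range(1, max_turns + 1):
        ps, pb = seller_offer(ps, pb, t, k_seller), buyer_offer(ps, pb, t, k_buyer)
        if pb >= ps - tol * ps:
            return "Accept", t, round(ps, 2), round(pb, 2)
    return "Reject", max_turns, round(ps, 2), round(pb, 2)

# Placeholder bundle: seller asks $800, customer opens at $500, 5% tolerance.
print(simulate_price_flow(800.0, 500.0, tol=0.05))

With a larger k for the seller than for the buyer, as assumed above, the seller concedes faster early in the dialogue while the customer stays close to their budget, which mirrors the behaviour described for the flow generator.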
The experts were also responsible for correcting minor grammatical errors. Furthermore, they were asked to rate the fluency of each utterance on a scale of 0-2, where 0 represented non-fluency and 2 indicated complete fluency. Dialogues containing utterances rated as 0 fluency were dropped from the dataset. These measures were implemented to uphold the quality standards of the dataset. § DATASET STATISTIC The statistics of the dataset created are given in Table <ref>. The dataset has a total of 4,163 utterances and we follow an 80:12:8 split between train, test, and validation sets. The average number of turns per dialogue in the dataset is 13 and the number of unique words in the dataset, excluding numbers is 12,219, both these metrics are comparable to the metrics in the Craigslist Bargain dataset (avg. turns:9; unique words:11,799). Following <cit.>, to automatically measure the variability of conversations of our dataset, we compute BLEU-1 and METEOR scores between the utterances. We obtain low BLEU-1 and METEOR scores of 0.08 and 0.05, respectively, indicating high variability between the utterances in IND. We ask three human experts to rate the `engagingness' and`fairness' of dialogues on a scale of 1 to 3 (higher the better). The dialogues obtained an average rating of 2.17 for `engagingness' and 2.26 for `fairness'[The overall inter-annotator agreement using Krippendorff’s alpha <cit.> was found to be 0.84]. § METHODOLOGY To force a language model to negotiate with the user while following its own price goal as well as approach, we fine-tune it using a novel-designed reward function in a reinforcement learning setting. Here, first, a pre-trained language model (GPT-2-medium) is fine-tuned in a supervised setting using traditional cross-entropy loss between the ground truth and predicted utterances probability distributions. For a supervised dialogue dataset D = {d_0, d_1, .. , d_N}, where, d = {a_0, u_0, .. , a_i, u_i, .. , a_T-1, u_T-1} - a multi-turn dialogue with u_i + cxt_i (u_i - user's utterance at i^th turn and cxt_i = {a_0, u_0, .. , a_i-1}) as input and a_i (agent's utterance at i^th turn) as output. The supervised learning dialogue model ρ_θ(d) can be expressed as: ρ_θ(d) = ∏_T=0^T-1ρ_u(u_i|u_<i,a_<i)ρ_a(u_i|u_<=i,a_<i) where ρ_u and ρ_a are the user's and agent's utterances probability distributions. This trained SLDM is fine-tuned in an RL setting using the PPO loss formulated as below:L^CLIP(θ)=Ê[min(pr_r(θ)Â_̂r̂, clip(pr_y(θ), 1-ε,1+ε)Â_̂r̂)]where pr_r(θ) =𝒫_θ^new/𝒫_θ^old. ε and Â_̂ŷ denote the clipping range and normalized rewards, respectively. Finally, the parameters' updation is done as follows:θ_k+1 = θargmaxs,a∼𝒫_θ_kE[L^CLIP] Here, normalized rewards are obtained by a novel designed reward function (R) incorporating intent consistency reward (R_1), price gap reward (R_2), negotiation strategy reward (R_3) and interactiveness (R_4) in generated responses. R intuitively nudges SLDM towards these aspects by providing appropriate respective aspect penalization/reward for generated responses. For example, if the model generates intent inconsistent response then R_3 will penalize the model to discourage it from generating a similar type of content. All five rewards can be written as: oindent Intent consistency: In a negotiation system with complex intents there can often be divergence between the predicted intent and the intent of the generated utterance. To enforce this consistency, we propose Intent Consistency (IC) reward. 
This reward function is implemented by first training a BERT model <cit.> on the training set of IND for the task of intent prediction. This task is modeled as a classification task where the input to the BERT model is an agent utterance at turn t, Ua_t, and the expected output is the intent of the utterance Ia_t. The accuracy of the trained intent classifier is 71.2%. We use the [CLS] token for computing the probability distribution of the intent classes. We sample the probability value P_it of the intent predicted i by our end-to-end SLDM dialogue model and use it as R_1 (Eq. <ref>).R_1 = P_it(u_t)Price Gap Reward:The purpose of negotiation is to find a win-win solution for both the customer and the seller. The winning scenario for a seller would be as little reduction in the initially proposed price as possible. In line with this logic, we propose a Price Gap (PG) reward. This reward is simply the fraction of the initial proposed price by the agent P_ai and the final selling price after negotiation P_af (Eq <ref>). The higher the final price the greater the reward.R_2 =P_af/P_aiNegotiation Strategy Reward: A successful negotiation might not always entail deal acceptance. In cases where the customer wants to go below the minimum selling price of the agent P_a-min it would not be judicious for the seller to satisfy the customer. In such situations where the negotiation could result in a win-lose situation, the deal should be rejected. Hence, the success criterion of the negotiation lies in not just acceptance of the deal but also the fairness of the deal. To ensure that our negotiation succeeds only in win-win scenarios we design the Negotiation Strategy (NS) reward.R_3 =F(P_b - P_a-min/P_a-min)G(Intent_f) G(Intent_f)=1, Intent_f = accept-1, Intent_f = reject F(x)=0, x < 0e^x, x ≥ 0 In the above equations, P_b is the customer's proposed price, and Intent_f ∈{Accept, Reject} is the final intent in the conversation used to capture the negotiation result. The reward incentivizes acceptance of a deal when the negotiated price is within the limit of a minimum price for the seller, and rejection when the negotiated price is below this minimum price. Interactiveness:To ensure interactiveness, repetitions, and conversation blackholes are penalized such that system can engage the user for a longer duration with interactive responses. To penalize the generation of similar utterances for a given intent in the dialogue we use Equation <ref>. R_4 = 1 - ∑_i=1^i=mv_k^in.v_i^in/|v_k^in||v_i^in|/mwhere v_k^in is the vector (bag of words) representing the generated utterance with intent in. v_i^in to v_m^in are the vectors representing the previously generated utterances in the dialogue with the same intent. The final normalized reward function R is formulated as: R = γ_1R_1 + γ_2R_2 + γ_3R_3 + γ_4R_4with γ_1 + γ_2 + γ_3 + γ_4 = 1. § EXPERIMENTS §.§ Evaluation MetricsTo properly assess INA's performance, we perform both automatic and manual evaluations. In automatic evaluation to measure the surface similarity with the gold responses, we compute METEOR <cit.>. For semantic similarity, we compute BERT Score (BS-F1) <cit.> and Word Mover distance (WM). We also report the Perplexity (PPL) and the average response length (R-LEN) of the generated responses.Human evaluations were conducted by three postgraduate evaluators who possess proficiency in similar tasks. Each evaluator interacted with the proposed system 15 times and assessed the conversations based on: (i). 
Negotiation Consistency (N-Con): It is the measure of consistency (absence of arbitrariness) in the negotiation approach within a dialogue (ii). Bargaining Efficacy (B-Eff): It measures the ability of the negotiation system to present compelling arguments, reasoning, or incentives that influence the other party's decision-making process., (iii). Outcome fairness (O-fair): It assesses the fairness or equity of the final outcomes reached during the negotiation process.,(iv). Dialogue-fluency (D-F): It measures the overall grammatical correctness of the generated responses, and (v). Dialogue-Engagingness (D-E): Measures the extent to which a conversation or dialogue is interesting, captivating, and able to hold the attention of the participants. The evaluators assigned scores on a scale of 1 to 3 for each metric (The higher the better).§ RESULTS AND ANALYSISoindent Automatic Evaluation:It can be noticed from Table <ref> that the proposed INA performs better than all the four baselines viz. ARDM, ARDM + BK (Background Knowledge), ARDM + In (Intent), and Neg-TOD <cit.>, in terms of all the five metrics viz. METEOR, BS-F1, WM, PPL, and R-LEN. For the evaluation metrics measuring similarity (of the generated utterance) with the gold utterance i.e METEOR, BS-F1 and WM, INA attains scores of 0.43, 0.86 and 0.57, respectively. The obtained scores are significant improvements <0.141, 0.042, 0.04>, <0.158, 0.032, 0.04>, <0.144, 0.032, 0.03> and <0.137, 0.029, 0.03> over the baselines, ARDM, ARDM+BK, ARDM+In, and NegTOD, respectively.It can also be inferred that the difference of BS-F1, and WM scores decrease in the following order: INA>INA-NS>INA-PG>INA-I. This shows the importance of task-specific rewards in our proposed system INA. It can also be observed from Table <ref> that INA obtains lower (better) PPL = 1.56 score than that of ARDM, ARDM+BK, ARDM+In, and NegTod with a difference of 1.39, 1.19, 1.24, and 1.37 points, respectively. Further, we obtain a score of R-LEN = 39.93 is also better than that of ARDM, ARDM+BK, ARDM+In, and Neg-TOD with a difference of 15.72, 2.28, 13.5, and 1.76, respectively. This indicates that the INA is able to generate longer responses, hence, showcasing more engagingness with the user. It can be due to the incorporation of all four rewards where R_1, R_2, and R_3 play the crucial role in handling negotiation and price consistency, and R_4 helps in maintaining the non-repetitiveness, hence, driving the agent to build the rapport with a user as well as be on the goal by generating diverse and interactive negotiation responses. oindent Human Evaluation: Table <ref> shows the human evaluation results for all the eight models viz. ARDM, ARDM+BK, ARDM+In, NegTOD, INA-IC, INA-PG, INA-NS and INA-I. It may be noted that INA yields better scores for N-Con, B-Eff, O-fair, D-F, and D-E compared to the baselines. Scores of N-Con: 2.4, D-F: 2.8, and D-E: 2.6 shows that the intent-consistency (IC) and interactiveness (I) rewards play a crucial role in obtaining consistent, fluent, and engaging, responses as compared to other models. Further, in terms of B-Eff and O-fair, INA attains a score of 1.8 for both. The ablation of the price-gap (PG) and negotiation-strategy (NS) rewards showcases the importance of these rewards in terms of B-Eff and O-fair. Therefore, it can be inferred that employing intent consistency, price gap, and negotiation strategy rewards help in a more consistent, persuasive, and overall fair negotiation with the customer. 
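Relating these observations back to the reward terms defined in the methodology section, the sketch below shows how the price-gap reward R_2, the interactiveness reward R_4, and their weighted combination could be computed. The bag-of-words vectorisation is a simplifying assumption, the weight values are the ones reported in the implementation details, and the intent-consistency term R_1 and negotiation-strategy term R_3 are assumed to be supplied by the trained intent classifier and the deal outcome, respectively.

import numpy as np
from collections import Counter

def bow(utterance, vocab):
    counts = Counter(utterance.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

def interactiveness_reward(new_utt, prev_same_intent_utts):
    # R_4 = 1 - average cosine similarity with earlier utterances generated under the same intent
    if not prev_same_intent_utts:
        return 1.0
    vocab = sorted(set(" ".join([new_utt] + prev_same_intent_utts).lower().split()))
    v_new = bow(new_utt, vocab)
    sims = []
    for utt in prev_same_intent_utts:
        v_old = bow(utt, vocab)
        denom = np.linalg.norm(v_new) * np.linalg.norm(v_old)
        sims.append(float(v_new @ v_old / denom) if denom > 0 else 0.0)
    return 1.0 - float(np.mean(sims))

def price_gap_reward(initial_agent_price, final_agent_price):
    # R_2 = P_af / P_ai : the less the seller concedes, the larger the reward
    return final_agent_price / initial_agent_price

def total_reward(r1, r2, r3, r4, gammas=(0.2, 0.2, 0.3, 0.2)):
    # R = γ1 R1 + γ2 R2 + γ3 R3 + γ4 R4, with the weights from the implementation details
    return sum(g * r for g, r in zip(gammas, (r1, r2, r3, r4)))

# Illustrative call with placeholder values for R_1 and R_3.
r4 = interactiveness_reward("I can offer free delivery with the TV.",
                            ["I can include the TV stand for a small extra charge."])
print(total_reward(r1=0.7, r2=price_gap_reward(800.0, 730.0), r3=1.0, r4=r4))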
§ CONCLUSIONIn this paper, we have presented a novel dialogue agent for negotiation (INA) in the online marketplace domain, focusing on an integrative approach that goes beyond price negotiations. Our system can respond to customer inquiries and engage in negotiations that encompass various aspects, such as modifying product bundles, adjusting prices, and arranging product delivery. Unlike existing systems that mainly concentrate on price negotiations, our approach provides a more comprehensive and versatile solution. To achieve this, we constructed a dataset of negotiation dialogues (IND) where the product is represented as a bundle of smaller items. To minimize manual effort in data creation, we employed prompts for the GPT-J model to generate integrative negotiation utterances. Using IND, we developed a INA through a combination of supervised learning and reinforcement learning. Our training process incorporated a novel reward function that suits the negotiation task, which we optimized using the Proximal Policy Optimization (PPO) loss. Our results show that INA is able to perform integrative negotiations with the customer and enable engaging negotiations that can lead to a win-win deal for the seller and the customer.In the future, it would be interesting to explore the role of the customer persona like age, gender, hobbies, etc. during negotiation. § LIMITATIONSOur data creation steps and modeling have some limitations. First, to create the data, GPT-J is used which requires a large GPU memory size (here, 40 GB). Another limitation of GPT-J is that it has a context window of 2,048 tokens, which constrains our prompting mechanism. Within this context window, we need to fit background data as well as dialogue history with a few shot examples. This allows us to only go for a maximum of 4 shots while prompting leading to some hallucinations in the created data which needed to be fixed manually.§ ETHICAL CONSIDERATIONSSince negotiation by nature entails bargain with the customer, it should be done ethically. Our integrative approach to negotiation gives greater flexibility to the customer and hence leads to a win-win scenario in negotiation. Our negotiation is not aimed at as a zero-sum game where a party has to lose in order for the other to win. The customer at any point of the conversation can reject the deal and thus is not compelled to continue with the negotiation if it does not suit them.The dataset created in this work will be made available only after filling and signing an agreement declaring that the data will be used only for research purposes. The annotation, filtering/editing of data, and manual evaluations were done by human experts, who are regular employees of our research group and are paid in accordance with the institute's policy. There are no other issues to declare.§ ACKNOWLEDGEMENTAuthors acknowledge the grant received from Accenture LLP for the project T ”Conversational Agents with Negotiation and Influencing ability”. acl_natbib§ APPENDIX §.§ Implementation Details For generating the INA corpus, GPT-J model <cit.> with 6 billion parameters was used. INA is trained in an RL framework by employing a fine-tuned GPT-2 small <cit.> model (117 million parameters) on our proposed IND dataset. For dialogue flow generation, we keep the value of d as 2. In each iteration of RL-training, n=3 candidate responses are generated. It is selected as per PPL score, after experimenting with different values i.e. n= 2, 3, 4, 5, 10. 
Nucleus sampling <cit.> with temperature T= 0.8 and probability p=0.9 is used to decode the generated utterances. INA trained is done using seed_value = 10, human_reward = 10, max_candidate_length = 50with optimizer = AdamW <cit.> and learning rate α = 2e-05, ε = 0.2 and epochs = 17. The reward weight combination of 0.2, 0.2, 0.3, 0.2 are chosen as the final weights for γ_1, γ_2, γ_3, and γ_4 respectively. §.§.§ Specifications of Computational ResourceTo train the MLE-loss-based conversational model, and proposed INA, following configurations are used: * GPU: A100-PCIE-40GB.* CUDA Support: CUDA 11.x (or later.* Memory clock: 1215 MHz.* Total board power: 250 W.* GPU clocks: Base: 765 MHz, Boost: 1410 MHz.* Memory Size: 40 GB.* Memory Type: HBM2.* Bus Width: 5120 bits.§ DATASET We ensure that the utterances in INA are grounded on the background knowledge consisting of product and deal details. Table <ref> shows example utterances from our dataset for different agent intents. It can clearly be observed that the utterances are well-grounded in the background knowledge and do contain factual hallucinations.Table <ref> shows example utterances for each intent defined in Section <ref>. The table also shows examples of custom intents obtained through the combination of the defined intents. §.§ Prompts Each prompt contains around 4 shots and each shot contains a task description, required context, and an example utterance.An example shot for the intent "Negotiate-Price-NoChange" is as follows: "A seller is negotiating with a customer for a laptop called Dell X8, it has 16 GB ram, 11-inch screen and Intel i7 processor. The seller ideally wants it for $770 and is not willing to reduce the price." ”'The customer tells "I appreciate that you need to make a profit on this item, but unfortunately, $770 is above my budget for a laptop. I was ideally hoping to purchase the Dell X8 for $500, but I'm willing to negotiate up to $570 if necessary. Is there any way you could lower the price to meet me somewhere in the middle?"”' "(Remember, the seller cannot go lower than $770) The seller endorses the product by saying." "<start> While I appreciate your offer, I cannot go as low as $570. I cannot lower the price further since the laptop is high-end and is well worth $770. It has 16 GB ram and an Intel i7 processor, making it ideal for heavy duty applications. I'm sure that you would be pleased with it even for $770!" Here, we provide context in terms of the previous utterance of the customer and the product description. The task mentioned is to endorse the product and not go below a particular price.An example shot for the intent "Negotiate-Remove-X" is as follows: "A customer is negotiating with a seller about a product. The customer wants to ask for another deal to the seller." "The initial deal was a laptop called Dell X8, it has 16 GB ram, 11-inch screen and Intel i7 processor along with a gaming mouse. The price for this deal was $800. The customer wants to remove the gaming mouse from the deal." "The customer asks for the new deal by saying." "<start> I do not really need the mouse, is it possible to just sell me the laptop?" Here the task description is mentioned explicitly in 3 lines. The context is provided as the previous deal.§ INA GENERATED CONVERSATION SAMPLEIn Table <ref> we show one sample interaction between a human and INA. The negotiation is for a bundle deal of a TV, TV Stand, and Extension Cord. 
The interaction shows the capability of our dialogue agent (INA) in handling an integrative negotiation.
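Relating this back to the prompting pipeline used to create IND, the snippet below sketches how a shot and a query could be concatenated into a GPT-J prompt. The helper names and the exact separators are assumptions; the <start> marker and the shot structure (task description, information summary, utterance) follow the examples given above.

def build_shot(task_description, info_summary, gold_utterance):
    # One shot = task description + summary of relevant dialogue information + a gold utterance
    return f"{task_description}\n{info_summary}\n<start> {gold_utterance}"

def build_prompt(shots, task_description, info_summary):
    # Few-shot prompt for GPT-J: the shots (at most ~4, given the 2,048-token context),
    # followed by the query whose utterance the model must complete after "<start>".
    return "\n\n".join(shots + [f"{task_description}\n{info_summary}\n<start>"])

# Hypothetical shot and query for the Negotiate-Remove-X intent (illustrative wording only).
shot = build_shot(
    "A customer is negotiating with a seller about a product. "
    "The customer wants to ask for another deal to the seller.",
    "The initial deal was a laptop along with a gaming mouse for $800. "
    "The customer wants to remove the gaming mouse from the deal. "
    "The customer asks for the new deal by saying.",
    "I do not really need the mouse, is it possible to just sell me the laptop?",
)
prompt = build_prompt(
    [shot],
    "A customer is negotiating with a seller about a product. "
    "The customer wants to ask for another deal to the seller.",
    "The initial deal was a TV along with a TV stand for $600. "
    "The customer wants to remove the TV stand from the deal. "
    "The customer asks for the new deal by saying.",
)
print(prompt)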
GNN-GMVO: Graph Neural Networks for Optimizing Gross Merchandise Value in Similar Item RecommendationAnonymous January 14, 2024 ========================================================================================================== We introduce the notion of strongly k2-free graphs,which contain dp-minimal graphs. We show that under some sparsity assumption,given a rainbow k2-free blockade we can find a rainbow k-12-free blockade.This might serve as an intermediate step towards Erdős-Hajnal property for dp-minimal graphs. § INTRODUCTIONErdős-Hajnal conjecture <cit.> says for any graph H there is ϵ>0 such that if a graph G does not contain any induced subgraph isomorphic to H then G has a clique or an anti-clique of size ≥ |G|^ϵ. More generally, we say a family of finite graphs has the Erdős-Hajnal property if there is ϵ>0 such that for any graph G in the family, G has a clique or an anti-clique of size ≥|G|^ϵ.Malliaris and Shelah proved in <cit.> that the family of stable graphs has the Erdős-Hajnal property.Chernikov and Starchenko gave another proof for stable graphs in <cit.> and in <cit.> they proved that the family of distal graphs has the strong Erdős-Hajnal property.In general, we are interested in whether the family of finite VC-dimension (i.e. NIP <cit.>) graphs, which contains both stable graphs and distal graphs, has the Erdős-Hajnal property.Motivation for studying this problem was given in <cit.>, which also gave a lower bound e^(log n)^1-o(1) for largest clique or anti-clique in a graph with bounded VC dimension.In this paper, we consider graphs in dp-minimal theories, a special case of NIP graphs. This paper is inspired by <cit.>.Techniques used also come from <cit.>. We will prove the following lemma: (main lemma) Given k∈ℕ, d∈ℝ withk≥ 2, d≥ 2,there exists τ_0=τ_0(k,d),L_0=L_0(k,d) satisfying the following: Let τ<τ_0, G a strongly k2-free τ-critical graph, and 𝒜=(A_i:1≤ i≤ t)⊆ G an equicardinal blockade of width |G|/t^d with |G|/t^2d≤ W_G, of lengthL_0 ≤ t≤2|G|^1/d such that for all a∈ A, |E(a,A)|< |G|/t^d. Then there exist b∈ A, an (t', |G|/t^2d+2)-comb ((a_j,A_j'):1≤ j≤ t') in (E_b∩ A,E_b∩ A) such that 𝒜'=(A_j':1≤ j≤ t') is an equicardinal minor of 𝒜 with width ≥|G|/t^2d+2,length ≥ t^1/8.Section <ref> gives the definition of dp-minimality andcombinatorial notions we need for the proof. Section <ref> proves the lemma. The author is grateful to her advisor Sergei Starchenko for helpful suggestions. Also many thanks to István Tomon andAlex Scott for pointing out mistakes and giving comments.§ PRELIMINARIESAs usual, a graph is a pair G=(V,E) where V is a finite set, E is a binary symmetricanti-reflexive relation on V. <cit.> Fix a structure ℳ. An ICT pattern in ℳ consists of a pair of formulas φ(x,y) and ψ(x,y) and sequences {a_i :i∈ω} and {b_i :i∈ω} from ℳ so that, for all i, j ∈ω, the following is consistent: φ(x,a_i)∧ψ(x,b_j)l≠ i⋀φ(x,a_l)k≠ j⋀ψ(x,b_k). By compactness, for a pair of formulas φ, ψ, if for all n∈ω, there exist {a_i :i∈ n}, {b_i :i∈ n} such that for all i,j∈ n, φ(x,a_i)∧ψ(x,b_j)l≠ i⋀φ(x,a_l)k≠ j⋀ψ(x,b_k) is consistent, then there is an ICT pattern in ℳ if ℳ is sufficiently saturated.<cit.> A theory T is said to be dp-minimal if there is no ICT pattern in any model ℳ T. The family ℱ of cographs is the smallest family of graphs satisfying the following: * if G is a graph with a single vertex,then G∈ℱ;* if G is in ℱ, then G is inℱ;* if G_1=(V_1,E_1) and G_2=(V_2,E_2) are in ℱ,then the graph G=(V,E) with V=V_1∪̇V_2 and E=E_1∪ E_2 is in ℱ. 
Any such G∈ℱ is called a cograph.As a corollary, we also have if G_1=(V_1,E_1) and G_2=(V_2,E_2) are cographs,then the graph G=(V,E) with V=V_1∪̇V_2 and E=E_1∪ E_2 ∪{(x,y):x∈ V_1, y∈ V_2} is a cograph,because this is the complement of the disjoint unionof G_1 and G_2. Also, it is well-known that every cograph Gcontains a homogeneous set of size ≥ |G|^1/2.We say that a graph G is τ-critical if the largest size of a cograph in G is <|G|^τ, and for every induced subgraph G' of G with G'≠ G, there is a cograph in G' of size ≥ |G'|^τ .<cit.> Let G be a graph.A blockade ℬ in G is a sequence (B_1,..., B_t) of pairwise disjoint subsets of V (G) called blocks. We denote B_1∪ ...∪ B_t by B. The length of a blockade is the number of blocks, and its width is the minimum cardinality of a block.A pure pair in G is a pair A, B of disjoint subsets of V(G) such that A is either complete or anticomplete to B. A blockade ℬ = (B_1, ... , B_t) in G is pure if (B_i , B_j ) is a pure pair for all i, j with 1≤ i < j ≤ t. For a pure blockade ℬ=(B_1,...,B_t), let P be the graph with vertex set {1,..., t}, in which i, j are adjacent if B_i is complete to B_j. We say P is the pattern of the pure blockade ℬ.<cit.> If ℬ = (B_i: i ∈ I) is a blockade, let I'⊆ I; then (B_i: i ∈ I') is a blockade, of smaller length but of at least the same width, and we call it a sub-blockade of ℬ. Second, for each i ∈ I let B_i'⊆ B_i be nonempty; then the sequence (B_i': i ∈ I) is a blockade, of the same length but possibly of smaller width, and we call it a contraction of ℬ. A contraction of a sub-blockade (or equivalently, a sub-blockade of a contraction) we call a minor of ℬ. <cit.> Say a blockade is equicardinal if all its blocks have the same cardinality.Let G=(V,E) be a graph, X⊆ V,k∈ℕ anda_1,...,a_k∈ V. We saya_1,...,a_k has k2-propertyover X if there exists{b_ij}_ 1≤ i<j≤ s⊆ X such thatfor each pair i≠j,1≤ i<j≤ s,E(b_ij,a_i)∧E(b_ij,a_j)∧m≠ i,m≠ j⋀ E(b_ij,a_m). Given a blockade ℬ=(B_i:1≤ i≤ t) in G, and a_1,...,a_s∈ V, we say (a_1,...a_s) is a ℬ-rainbow tuple if for any i≠ j, there exist i'≠ j' such that a_i∈ B_i' and a_j∈ B_j'. For k≥ 2, we say G is k2-free if there is no k-tuple (a_1,...,a_k) of distinct vertices in G withk2-property over G.G is strongly k2-free if both G and G are k2-free. Given a blockade ℬ=(B_i:1≤ i≤ t) in G, we say ℬ is rainbowk2-free if there is no ℬ-rainbow tuple (a_1,...,a_k) with k2-property over B. <cit.> Let G be a graph, and let t, k ≥ 0 where t is an integer. We say ((a_i, B_i) : 1 ≤ i ≤ t) is a (t, k)-comb in G if: * a_1,..., a_t ∈ V (G) are distinct, and B_1,..., B_t are pairwise disjoint subsets of V(G)∖{a_1,..., a_t};* for 1 ≤ i ≤ t, a_i is adjacent to every vertex in B_i;* for i, j ∈{1,..., t} with i≠ j, a_i has no neighbour in B_j ; and* B_1,..., B_t all have cardinality at least k.If C, D ⊆ V (G) are disjoint and a_1,..., a_t ∈ C, and B_1,..., B_t ⊆ D, we call this a (t, k)-comb in (C, D). The following fact originated from <cit.> and was stated in <cit.>. It was also adopted in <cit.>.We use here the version in <cit.>.Let G be a graph with a bipartition (A,B), such that every vertex in B has a neighbour in A; and let Γ,Δ,d > 0 with d < 1, such that every vertex in A has at most Δ neighbours in B. Then either: * for some integer t ≥ 1, there is a (t, Γ t ^-1/d)-comb in (A, B); or * |B| ≤3^d+1/3/2-(3/2)^dΓ^dΔ^1-d.Let G be a graph and X⊆ V(G). 
Then G[X] denotes the induced subgraph of G on the subset X andG[X] denotes the complement of G[X].<cit.>For every graph H and all ϵ > 0, there exists δ > 0 such that for every H-free graph G, there exists X ⊆ V (G) with |X|≥δ|G|, such that one of G[X], G[X] has maximum degree at most ϵδ|G|.(Variant of <cit.>): For alls≥ 1, let D_s=2^s- 1d^2s-1, d=4. Letℬ=(B_i:i=1,...,D_s)be a rainbow 22-free blockade in G of lengthD_s and width W. Then G admits apure blockade 𝒜with a cograph pattern, oflength 2^s and width ≥W/D_s.The proof is similar to that of<cit.>.For s=1, D_s=4. LetX=B_1∪ B_2, Y=B_3∪B_4. Let A={x∈ X: ∀y∈ B_3 E(x,y)},A'={x∈ X: ∀ y∈ B_4 E(x,y)}.By assumption,A∪ A'=X. Hence |A|≥1/2|X| or|A'|≥1/2|X|.Ineither case, the conclusionholds.For s+1,D_s+1=2^sd^2s+1. Let L=B_1∪...∪B_D_s+1/4,R=B_D_s+1/4+1∪...∪B_D_s+1/2, L'={x∈ (B_1∪...∪ B_D_s+1)∖ (L∪R):∀ y∈ L E(x,y)},R'= {x∈ (B_1∪...∪B_D_s+1)∖ (L∪R):∀ y∈ R E(x,y)}. Then L'∪ R'=(B_1∪...∪B_D_s+1)∖ (L∪ R) and |L'|≥1/2| (B_1∪...∪B_D_s+1)∖(L∪ R)| or |R'|≥1/2| (B_1∪...∪B_D_s+1)∖(L∪ R)|. In either case, we have pure pairs (A,B) suchthat|A|,|B|≥WD_s+1/d^2.The rest is the same as in<cit.>Given ℬ= (B_i:i=1,...,t) be arainbow 22-free blockade inGof length t≥ 4 and widthW, let s be a positive integersuchthat D_s≤ t<D_s+1, where D_s is as defined in claim <ref>. Since for s≥ 1, 2^s≥D_s+1^1/10 =2^s/2+1/5≥ t^1/10, and W/D_s≥W/t, by claim <ref>,G hasa pure blockade𝒜 with acograph pattern, of length ≥ t^1/10 and width ≥W/t.§ PROOF We start from a simple observation:claim <ref> says that given arainbowk2-free blockade 𝒜 = (A_i:1≤ i≤ t),we can find a rainbow k-12-free minor if there exist a∈ V(A) and((a_j,B_j): 1≤ j≤ t') a comb in(E_a,E_a) such thatℬ= (B_j: 1≤ j≤ t') is a minor of 𝒜and for all j,B_j∩ A_i_0=∅, where a∈ A_i_0. Lemma <ref> says we can construct such a comb in a τ-critical graph G in a given “sparse" blockade 𝒜. Let 𝒜 = (A_i:1≤ i≤ t) be a rainbowk2-free blockade.If a∈ V(A) and((a_j,B_j): 1≤ j≤ t') is a comb in(E_a∩ A, E_a∩ A) such thatℬ= (B_j: 1≤ j≤ t') is a minor of 𝒜and for all j,B_j∩ A_i_0=∅,where a∈ A_i_0, thenℬ is rainbow k-12-free. Let 𝒜,ℬ be as in the claim. Suppose (b_1,...,b_k-1) is aℬ-rainbow tuple withk-12-property over B,witnessed by{c_lm}_l≠ m, 1≤ l,m ≤ k-1. Then (a,b_1,...,b_k-1) has k2-propertyover A,witnessed by{c_lm}_l≠ m, 1≤ l,m ≤ k-1∪{a_j}_1≤ j≤ t', a contradiction. Let G be a graph.Let ℱ_G = {((b_i,B_i):1≤ i≤ s)⊆ G: ∃ a∈ V[G] ((b_i,B_i):1≤ i≤ s) is a (s,|G|/s^2)-comb in (E_a, E_a) and (B_i:1≤ i≤ s) is an equicardinal blockade }.Then let W_G denote minimal width of equicardinal blockades (B_i:1≤ i≤ s) satisfying that there exist (b_i:1≤ i≤ s) such that ((b_i,B_i):1≤ i≤ s) is in ℱ_G ∪ℱ_G, if ℱ_G ∪ℱ_G≠∅; let W_G=|G|, if ℱ_G ∪ℱ_G=∅.(main lemma) Given k∈ℕ, d∈ℝ withk≥ 2, d≥ 2,there exists τ_0=τ_0(k,d),L_0=L_0(k,d) satisfying the following: Let τ<τ_0, G a strongly k2-free τ-critical graph, and 𝒜=(A_i:1≤ i≤ t)⊆ G an equicardinal blockade of width |G|/t^d with |G|/t^2d≤ W_G, of lengthL_0 ≤ t≤2|G|^1/d such that for all a∈ A, |E(a,A)|< |G|/t^d. Then there exist b∈ A, an (t', |G|/t^2d+2)-comb ((a_j,A_j'):1≤ j≤ t') in (E_b∩ A,E_b∩ A) such that 𝒜'=(A_j':1≤ j≤ t') is an equicardinal minor of 𝒜 with width ≥|G|/t^2d+2,length ≥ t^1/8. We will construct a sequence (a_u,Δ_u,R_u):We define R_0=A, a_0∈ R_0 such that E(a_0,R_0) is maximum, Δ_0=|E(a_0,R_0)|. 
Suppose(a_u,Δ_u,R_u) isconstructed.If Δ_u≥|G|/t^2d, we use the idea of fact <ref> to construct the comb we want.If a comb in lemma <ref>exists, then we end construction; if it does not exist, then the set {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)} has sizebounded above by |G|/t^d· t^1/4. We take R_u+1 to be{y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)}, a_u+1∈ R_u+1 such that E(a_u+1,R_u+1) is maximum, Δ_u+1=|E(a_u+1,R_u+1)|. If Δ_u<|G|/t^2d, by the choice of W_G and by <cit.>, the set {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)} has sizebounded above by 3^1/2+1/3/2 -(3/2)^1/2·|G|/t^d.We take R_u+1 to be{y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)}, a_u+1∈ R_u+1 such that E(a_u+1,R_u+1) is maximum, Δ_u+1=|E(a_u+1,R_u+1)|. Because in the construction of the R_u's,we removed a set of“small" size at each step, R_⌈ t^1/8⌉ has size at least|G|/t^d· t^1/2. We then discuss two cases depending on the size of Δ_⌈ t^1/8⌉. We observe that for all u<u', for all x∈ E(a_u;R_u),y∈ E(a_u';R_u'), E(x,y). If Δ_⌈ t^1/8⌉≥|G|/t^2d,then (E(a_u;R_u): 1≤ u≤⌈ t^1/8⌉) is apure blockade of cograph pattern and since G is τ-critical, we have in G a cograph of size ≥|G|^τ/t^2dτ· t^1/8≥ |G|^τ contradicting the choice of τ; if Δ_⌈ t^1/8⌉ < |G|/t^2d, then we follow the proof of<cit.> to get a contradiction.Let k,d be given. Let K=α=1∞∑ (2/3)^α. Let L_0 be the smallest positive integer such that for all L≥ L_0,L^1/4≥(3+9/2K)L^1/8 + 3 +3^1/2+1/3/2 -(3/2)^1/2,L- 2 L^1/8 (1 + 2^d + L^1/4 ) ≥L^1/2.Let τ_0>0 be small such that for all τ<τ_0, τ-1/2d<-(d+1)τ,L_0^-d-1/2+2dτ+3^1/2+1/3/2 -(3/2)^1/2·L_0^-1/2+2dτ+2^-1/2<1, and L_0^1/8-2dτ>1. Fix any τ<τ_0. Let G be τ-critical. Let 𝒜=(A_i:1≤ i≤ t)⊆ G be an equicardinal blockade with width |G|/t^d such that |G|/t^2d≤ W_G, length L_0 ≤ t≤ 2|G|^1/d and for all a∈ A, |E(a,A)|<|G|/t^d. Suppose the comb in the statement does not exist. Construct a_u,Δ_u,R_u such that* Δ_u is the maximal degree in G[R_u]* a_u∈ R_u such that |E(a_u,R_u)|=Δ_uas follows:Let R_0=A. Let Δ_0be the maximum of{|E(x,A)|:x∈ A}. Leta_0∈ R_0 such that|E(a_0,R_0)|=Δ_0. Suppose Δ_u,R_u,a_uare constructed. We constructΔ_u+1,R_u+1,a_u+1: Let C=E(a_u,R_u). Let D= E(a_u,R_u). We repeatthe construction ofcombs in <cit.>. Constructk_s∈ℕ, a^s_1,...,a^s_k_s∈ C, T^s_1,...,T^s_k_s⊆ D, C_s⊆ D asfollows: s=1: Choose a^1_1,...,a^1_k_1∈ C with k_1 max such that for all i there exist ≥2/3Δ_u vertices in D that are adjacent to a^1_i and non adjacent to a^1_1,...,a^1_i-1. Let T^1_i=E(a^1_i,D)∖(E(a^1_1,D)∪...∪ E(a^1_i-1,D)), C_1=i⋃ E(a^1_i,D). Then C_1=i⋃ T^1_i. s+1: Suppose k_1,...,k_s∈ℕ, a^1_1,...,a^1_k_1,...,a^s_1,...,a^s_k_s∈ C, T^1_1,...,T^1_k_1,...,T^s_1,...,T^s_k_s⊆ D, C_1,...,C_s⊆ D are defined. Then by maximality of k_s, every vertex in C has <(2/3)^sΔ_u neighbours in F, where F=D∖ (C_1∪...∪ C_s). Choose a^s+1_1,...,a^s+1_k_s+1∈ C with k_s+1 max such that for all i there exist ≥(2/3)^s+1Δ_u vertices in F that are adjacent to a^s+1_i and non adjacent to a^s+1_1,...,a^s+1_i-1. Let T^s+1_i=E(a^s+1_i,F)∖(E(a^s+1_1,F)∪...∪ E(a^s+1_i-1,F)) and C_s+1=i⋃ E(a^s+1_i,F). Then C_s+1=i⋃ T^s+1_i.Observe that * for all s, all i< i' and all y∈ T^s_i', E(a^s_i,y). * for all s<s', all i,i' and all y∈ T^s'_i', E(a^s_i,y).* for all s,i,(2/3)^sΔ_u≤|T^s_i|≤(2/3)^s-1Δ_u.Case 1: Δ_u≥|G|/t^2d Let l be the largest such that (2/3)^l-1Δ_u≥|G|/t^2d. Construct I_l,...,I_1 as follows: ConstructI_l: Let I^k_l_l={a^l_k_l}. Choose A_i such that |A_i∩ T^l_k_l|≥2/3·|G|/t^2d+1. Such A_i exists, since we assumed (2/3)^lΔ_u≥2/3·|G|/t^2d. 
Denote this chosen i by i^l_k_l and let P^l_k_l=A_i^l_k_l∩ T^l_k_l.Suppose I^m_l,...,I^k_l_l are constructed. For a^l_m-1, if <1/2|T^l_m-1| many vertices of T^l_m-1 is adjacent to an element in I^m_l, and there exists A_i such that A_i∩ P^l_m'=∅ for all a^l_m' in I^m_l (i.e. i≠ i^l_m' for all a^l_m' in I^m_l) and |A_i∩ T^l_m-1∩a^l_m'∈ I^m_l⋂ E(a^l_m',D)|≥|G|/t^2d+2, then let I^m-1_l={a^l_m-1}∪ I^m_l and choose such i to be i^l_m-1. Let P^l_m-1=A_i^l_m-1∩ T^l_m-1∩a^l_m'∈ I^m_l⋂ E(a^l_m',D). Otherwise, let I^m-1_l=I^m_l.Take I_l=I^1_l.Suppose I_l,...,I_n are constructed.ConstructI_n-1: Let I^k_n-1+1_n-1 =∅. Suppose I^m+1_n-1,...,I^k_n-1+1_n-1 are constructed. For a^n-1_m, if <1/2|T^n-1_m| many vertices of T^n-1_mis adjacent to an element inI^m+1_n-1∪ I_n∪ ...∪ I_land there exists A_i suchthat A_i∩P^α_m'=∅for all a^α_m'∈I^m+1_n-1∪ I_n∪...∪ I_l and |A_i∩ T^n-1_m∩a^α_m'∈ I^m+1_n-1∪ I_n∪...∪ I_l⋂ E(a^l_m',D)|≥|G|/t^2d+2,then set I^m_n-1={a^n-1_m}∪ I^m+1_n-1 and choose such i to be i^n-1_m. Let P^n-1_m=A_i^n-1_m∩ T^n-1_m∩a^α_m'∈ I^m+1_n-1∪ I_n∪...∪ I_l⋂ E(a^l_m',D). Otherwise, let I^m_n-1=I^m+1_n-1. Take I_n-1=I^1_n-1.Then ((a^α_m,P^α_m):a^α_m∈ I_1∪...∪ I_l) is a (E_a_u∩ A,E_a_u∩ A)-comb such that for (α,m)≠ (α',m'), i^α_m≠ i^α'_m' with width ≥|G|/t^2d+2. If |I_1∪...∪ I_l| ≥ t^1/8, then it's a comb satisfying the statement of the lemma, a contradiction. Hence |I_1∪...∪ I_l| < t^1/8 and we bound the size of C_1∪...∪ C_l=α∈{1,...,l},m∈{1,...,k_α}⋃T^α_m as follows:For each a^α_m∉ I_1∪...∪ I_l with α∈{1,...,l}, ≥1/2·|T^α_m| many vertices of T^α_m are in x∈ (I_α∩{a^α_m+1,...,a^α_k_α})∪ I_α+1...∪ I_l⋃E(x,R_u) or<1/2·|T^α_m|many vertices ofT^α_m are inx∈(I_α∩{a^α_m+1,..., a^α_k_α})∪ I_α+1...∪ I_l⋃E(x,R_u), but for each i∈[t]∖{i^α'_m': a^α'_m'∈I_α+1∪...∪ I_lor a^α'_m'∈I_α with m'>m },|A_i∩ T^α_m∩a^α'_m'∈ (I_α∩{a^α_m+1,..., a^α_k_α}) ∪I_α+1∪...∪ I_l⋂E(a^α'_m',D)| < |G|/t^2d+2.Let S_1= {a^α_m∉I_1∪...∪ I_l :α∈{1,...,l},≥1/2·|T^α_m|many vertices ofT^α_m are inx∈(I_α∩{a^α_m+1,..., a^α_k_α})∪ I_α+1...∪ I_l⋃E(x,R_u)}.By construction of theT^α_m's, for each fixed pair α, m,|T^α_m|≥ (2/3)^αΔ_u, and T^α_m∩x∈(I_α∩{a^α_m+1,..., a^α_k_α})∪ I_α+1...∪ I_l⋃E(x,R_u)⊆ T^α_m∩x∈I_α∪I_α+1∪...∪ I_l⋃E(x,R_u) = x∈I_α∪I_α+1∪...∪ I_l⋃E(x,T^α_m) ⊆ x∈I_α∪ I_α+1∪...∪ I_l⋃E(x,C^α).By construction, for each x∈ I_α∪ I_α+1∪ ...∪ I_l, |E(x, C_α)|<(2/3)^α-1Δ_u. 
Thus, for any fixed α∈{1,...,l}, we have (|{x∈ S_1: x=a^α_mfor some m∈{1,...,k_α}}|)·1/2·(2/3)^αΔ_u≤ (|I_α|+...+|I_l|)· (2/3)^α-1Δ_u.Hence there exist at most 3·(|I_α|+...+|I_l|) many a^α_m's in S_1 for each fixed α and |a^α_m∈ S_1⋃T^α_m| ≤α=1l∑ 3· (2/3)^α-1Δ_u·(|I_α|+...+|I_l|).Let S_2={a^α_m∉ I_1∪...∪ I_l : α∈{1,...,l}, <1/2·|T^α_m| many verticesof T^α_m are in x∈ (I_α∩{a^α_m+1,...,a^α_k_α})∪ I_α+1∪...∪ I_l⋃E(x,R_u),but for each i∈[t]∖{i^α'_m':a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α})∪ I_α+1∪...∪ I_l},|A_i∩ T^α_m∩a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α}) ∪ I_j(α+1)∪...∪ I_l⋂ E(a^α'_m',R_u)|<|G|/t^2d+2}.Define S^α_m:={i^α'_m':a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α})∪ I_α+1∪...∪ I_l}.Then|a^α_m∈ S_2⋃ (T^α_m ∩a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α}) ∪ I_α+1∪...∪ I_l⋂ E(a^α'_m',R_u))|= |a^α_m∈ S_2⋃(i∈ [t]∖ S^α_m⋃ (A_i∩ T^α_m ∩a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α}) ∪ I_α+1∪...∪ I_l⋂ E(a^α'_m',R_u)) ∪ i∈ S^α_m⋃ (A_i∩ T^α_m ∩a^α'_m'∈ (I_α∩{a^α_m+1,..., a^α_k_α}) ∪ I_α+1∪...∪I_l⋂E(a^α'_m',R_u)) )|Since there are t manyA_i's and becauseT^α_m≥ (2/3)^αΔ_u≥2/3·|G|/t^2d for allα∈{1,...,l},there are at most |G| · t/t^d·2/3·|G|/t^2d =3/2· t^d+1many T^α_m withα∈{1,...,l},we have, |a^α_m∈ S_2⋃i∈ [t]∖S^α_m⋃ (A_i∩T^α_m ∩a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α}) ∪ I_α+1∪...∪ I_l⋂ E(a^α'_m',R_u))| ≤|G|/t^2d +2· t·3/2· t^d+1Since for each i∈[t], |A_i|=|G|/t^d, we have,|a^α_m∈ S_2⋃i∈ S^α_m⋃ (A_i∩ T^α_m ∩a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α}) ∪ I_α+1∪...∪ I_l⋂ E(a^α'_m',R_u)) | ≤ |i∈{i^α'_m': a^α'_m'∈ I_1∪...∪ I_l}⋃A_i|≤ |G|/t^d· (|I_1|+...+|I_l|).Hence, |a^α_m∈ S_2⋃ (T^α_m ∩a^α'_m'∈ (I_α∩{a^α_m+1,...,a^α_k_α}) ∪ I_α+1∪...∪ I_l⋂ E(a^α'_m',R_u))| ≤|G|/t^2d +2· t·3/2· t^d+1 + |G|/t^d· (|I_1|+...+|I_l|).Since for each a^α_m∈ S_2, <1/2·|T^α_m| many vertices of T^α_m are in x∈ (I_α∩{a^α_m+1,...,a^α_k_α})∪ I_1∪...∪ I_l⋃E(x,R_u), we have |a^α_m∈ S_2⋃ T^α_m| ≤ 2· ( |G|/t^2d+2· t·3 /2· t^d+1 + |G|/t^d· (|I_1|+...+|I_l|)).Hence, we have|s≤ l⋃C_s| = |α∈{1,...,l},m∈{1,...,k_α}⋃T^α_m| = |a^α_m∈ I_1∪...∪ I_l⋃ T^α_m ∪a^α_m∈ S_1⋃ T^α_m ∪a^α_m∈ S_2⋃T^α_m| ≤ |G|/t^d·(|I_1|+...+|I_l|)+α=1l∑ 3· (2/3)^α-1Δ_u·(|I_α|+...+|I_l|) + 2· ( |G|/t^2d+2· t·3/2· t^d+1 + |G|/t^d· (|I_1|+...+|I_l|)) ≤ |G|/t^d· t^1/8+3·|G|/t^d·3/2·α=1l∑α' =1α∑ (2/3)^α'|I_α|+2· ( |G|/t^d·3/2 + |G|/t^d· t^1/8) ≤ |G|/t^d· t^1/8+3·|G|/ t^d·3/2·α=1l∑ (2/3)^α·(|I_1|+...+|I_l|)+2· ( |G|/t^d·3/2 + |G|/t^d· t^1/8)≤ |G|/t^d·t^1/8+3·|G|/t^d·3/2·α=1l∑ (2/3)^α· t^1/8+2· ( |G|/t^d·3/2 + |G|/t^d· t^1/8)= |G|/t^d· ((3+9/2K)t^1/8+3), whereK=α=1∞∑ (2/3)^α.For s>l, by the proof of<cit.>,if k_s>2(2|G|/(2/3)^sΔ_u)^1/2, then there is a (t,|G|/t^2)-comb in(E_a_u, E_a_u) with width1/2 (2/3)^sΔ_u <|G|/t^2d <W_G,contradicting the choice ofW_G. Hence, as in the proof of<cit.>,k_s≤ 2(2|G|/(2/3)^sΔ_u)^1/2 and |s>l⋃C_s| ≤3^1/2+1/3/2 -(3/2)^1/2|G|^1/2 ((2/3)^lΔ_u)^1/2≤3^1/2+1/3/2 -(3/2)^1/2 |G|^1/2(|G|/t^2d)^1/2≤3^1/2+1/3/2 -(3/2)^1/2·|G|/t^dHence, | {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u)E(x,y)}|=|s⋃C_s|≤ |G|/t^d· ((3+9/2K) t^1/8 +3 +3^1/2+1/3/2 -(3/2)^1/2) ≤|G|/t^d· t^1/4.TakeR_u+1= {y∈E(a_u,R_u): ∃ x∈E(a_u,R_u)E(x,y)};Let Δ_u+1 be themaximal degree inG[R_u+1]. Let a_u+1∈ R_u+1 such that |E(a_u+1,R_u+1)|= Δ_u+1. Case 2: Δ_u<|G|t^2dBy <cit.> and the choice of W_G,|s⋃C_s| ≤3^1/2+1/3/2 -(3/2)^1/2|G|^1/2Δ_u^1/2≤3^1/2+1/3/2 -(3/2)^1/2 |G|^1/2(|G|/t^2d)^1/2≤3^1/2+1/3/2 -(3/2)^1/2·|G|/t^dTakeR_u+1= {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u)E(x,y)}.Let Δ_u+1 be themaximal degree inG[R_u+1]. 
Let a_{u+1}∈ R_{u+1} be such that |E(a_{u+1},R_{u+1})| = Δ_{u+1}. This is the end of the construction of (a_u,Δ_u,R_u). We use the following claim to show that R_{⌈t^{1/8}⌉} is not too small. For each u ≤ ⌈t^{1/8}⌉, |R_u| ≥ |A| - (|G|/t^d + 1 + |G|/t^d·t^{1/4})·u. Induction on u. u=0: Since R_0=A, |R_0| ≥ |A|. u+1: Suppose |R_u| ≥ |A| - (|G|/t^d + 1 + |G|/t^d·t^{1/4})·u. If Δ_u ≥ |G|/t^{2d}, then R_{u+1} = {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)}. Since R_u = E(a_u;R_u) ∪ {a_u} ∪ {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)} ∪ R_{u+1}, we have |R_u| ≤ |G|/t^d + 1 + |G|/t^d·t^{1/4} + |R_{u+1}|. Hence |R_{u+1}| ≥ |R_u| - |G|/t^d - 1 - |G|/t^d·t^{1/4}. By induction, |R_{u+1}| ≥ |A| - (|G|/t^d + 1 + |G|/t^d·t^{1/4})·(u+1). If Δ_u < |G|/t^{2d}, then R_{u+1} = {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)} and R_u = E(a_u;R_u) ∪ {a_u} ∪ {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)} ∪ R_{u+1}. Hence, |R_u| ≤ |G|/t^d + 1 + ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·|G|/t^d + |R_{u+1}| ≤ |G|/t^d + 1 + |G|/t^d·t^{1/4} + |R_{u+1}|. By induction, |R_{u+1}| ≥ |A| - (|G|/t^d + 1 + |G|/t^d·t^{1/4})·(u+1). It follows that |R_{⌈t^{1/8}⌉}| ≥ |G|/t^d·(t - ⌈t^{1/8}⌉(1 + t^d/|G| + t^{1/4})) ≥ |G|/t^d·(t - 2t^{1/8}(1 + t^d/|G| + t^{1/4})) ≥ |G|/t^d·t^{1/2}. Depending on the size of Δ_{⌈t^{1/8}⌉}, we have the following two cases.
Case (i): Δ_{⌈t^{1/8}⌉} ≥ |G|/t^{2d}. Then (E(a_u;R_u): 1 ≤ u ≤ ⌈t^{1/8}⌉) is a blockade such that for any 1 ≤ u < u' ≤ ⌈t^{1/8}⌉, any x∈ E(a_u;R_u), y∈ E(a_{u'};R_{u'}), E(x,y), and for all 1 ≤ u ≤ ⌈t^{1/8}⌉, |E(a_u;R_u)| ≥ |G|/t^{2d}. Since G is τ-critical, G has an induced cograph of size ≥ (|G|/t^{2d})^τ·⌈t^{1/8}⌉ ≥ |G|^τ·t^{1/8-2dτ} > |G|^τ, a contradiction.
Case (ii): Δ_{⌈t^{1/8}⌉} < |G|/t^{2d}. In this case, we follow the proof of <cit.>: Let W_u = {y∈ E(a_u,R_u): ∃ x∈ E(a_u,R_u) E(x,y)}. Let u_0 be the largest u such that Δ_{u_0}>0. Let R = R_{u_0} ∖ (E(a_{u_0},R_{u_0}) ∪ {a_{u_0}} ∪ W_{u_0}). Then R_{⌈t^{1/8}⌉} = ⋃_{u≥⌈t^{1/8}⌉} E(a_u,R_u) ∪ ⋃_{u≥⌈t^{1/8}⌉} W_u ∪ {a_u: u≥⌈t^{1/8}⌉} ∪ R. For u ≥ ⌈t^{1/8}⌉, let x_u := |E(a_u,R_u)|/|R_{⌈t^{1/8}⌉}| and let H_u ⊆ E(a_u,R_u) be an induced cograph of maximal size. Since Δ_{⌈t^{1/8}⌉} < |G|/t^{2d}, for u ≥ ⌈t^{1/8}⌉, x_u ≤ (|G|/t^{2d})/(|G|/t^d·t^{1/2}) = t^{d-1/2}/t^{2d} = t^{-d-1/2}. Since G is τ-critical, for each u ≥ ⌈t^{1/8}⌉, |H_u| ≥ |E(a_u,R_u)|^τ = x_u^τ·|R_{⌈t^{1/8}⌉}|^τ ≥ x_u^τ·(|G|/t^d·t^{1/2})^τ = x_u^τ·(1/t^{d-1/2})^τ·|G|^τ. Since ⋃_{u≥⌈t^{1/8}⌉} H_u is a cograph and G is τ-critical, we have |⋃_{u≥⌈t^{1/8}⌉} H_u| = ∑_{u≥⌈t^{1/8}⌉} |H_u| < |G|^τ. Hence, ∑_{u≥⌈t^{1/8}⌉} x_u^τ·(1/t^{d-1/2})^τ ≤ ∑_{u≥⌈t^{1/8}⌉} |H_u|/|G|^τ = (1/|G|^τ)·∑_{u≥⌈t^{1/8}⌉} |H_u| < 1, and ∑_{u≥⌈t^{1/8}⌉} x_u^τ < (1/t^{d-1/2})^{-τ}. Since {a_u: u≥⌈t^{1/8}⌉} does not have an edge, it is a cograph and |{a_u: u≥⌈t^{1/8}⌉}| < |G|^τ. So |{a_u: u≥⌈t^{1/8}⌉}|/|R_{⌈t^{1/8}⌉}| ≤ |{a_u: u≥⌈t^{1/8}⌉}|/(|G|/t^d·t^{1/2}) ≤ |G|^τ/(|G|/t^d·t^{1/2}) = |G|^{τ-1}·t^{d-1/2}. We estimate |⋃_{u≥⌈t^{1/8}⌉} E(a_u,R_u)|/|R_{⌈t^{1/8}⌉}|: |⋃_{u≥⌈t^{1/8}⌉} E(a_u,R_u)|/|R_{⌈t^{1/8}⌉}| = ∑_{u≥⌈t^{1/8}⌉} |E(a_u,R_u)|/|R_{⌈t^{1/8}⌉}| = ∑_{u≥⌈t^{1/8}⌉} x_u = ∑_{u≥⌈t^{1/8}⌉} x_u^τ·x_u^{1-τ} ≤ ∑_{u≥⌈t^{1/8}⌉} x_u^τ·t^{(-d-1/2)(1-τ)} ≤ (1/t^{d-1/2})^{-τ}·t^{-d-1/2+(d+1/2)τ} = t^{-d-1/2+2dτ}. We bound |⋃_{u≥⌈t^{1/8}⌉} W_u|/|R_{⌈t^{1/8}⌉}|: |⋃_{u≥⌈t^{1/8}⌉} W_u|/|R_{⌈t^{1/8}⌉}| = ∑_{u≥⌈t^{1/8}⌉} |W_u|/|R_{⌈t^{1/8}⌉}| ≤ ∑_{u≥⌈t^{1/8}⌉} ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·(|G|/|R_{⌈t^{1/8}⌉}|)^{1/2}(Δ_u/|R_{⌈t^{1/8}⌉}|)^{1/2} ≤ ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·(|G|/(|G|/t^d·t^{1/2}))^{1/2}·∑_{u≥⌈t^{1/8}⌉} x_u^{1/2} = ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{d/2-1/4}·∑_{u≥⌈t^{1/8}⌉} x_u^τ·x_u^{1/2-τ} ≤ ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{d/2-1/4}·∑_{u≥⌈t^{1/8}⌉} x_u^τ·(t^{-d-1/2})^{1/2-τ} ≤ ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{-1/2+(d+1/2)τ}·(1/t^{d-1/2})^{-τ} = ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{-1/2+2dτ}. Since u_0 is the largest u such that Δ_{u_0}>0, R has no edge. Hence, |R| < |G|^τ.
It follows that 1 = |R_{⌈t^{1/8}⌉}|/|R_{⌈t^{1/8}⌉}| = |⋃_{u≥⌈t^{1/8}⌉} E(a_u,R_u) ∪ ⋃_{u≥⌈t^{1/8}⌉} W_u ∪ {a_u: u≥⌈t^{1/8}⌉} ∪ R|/|R_{⌈t^{1/8}⌉}| ≤ t^{-d-1/2+2dτ} + ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{-1/2+2dτ} + |G|^{τ-1}·t^{d-1/2} + |G|^{τ-1}·t^{d-1/2} ≤ t^{-d-1/2+2dτ} + ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{-1/2+2dτ} + |G|^{τ-1/2d}·2^{d+1/2} ≤ t^{-d-1/2+2dτ} + ((3^{1/2}+1)/(3/2-(3/2)^{1/2}))·t^{-1/2+2dτ} + |G|^{-(d+1)τ}·2^{d+1/2}, which is <1 by the choice of τ and t and by the fact that |G|^τ ≥ 2, since a graph with two vertices is always a cograph. This gives a contradiction. Hence, there exists a comb as described in the statement.
http://arxiv.org/abs/2310.17730v2
{ "authors": [ "Yayi Fu" ], "categories": [ "math.CO", "math.LO" ], "primary_category": "math.CO", "published": "20231026184125", "title": "Towards Erdős-Hajnal property for dp-minimal graphs" }
The two-hand interaction is one of the most challenging signals to analyze due to the self-similarity, complicated articulations, and occlusions of hands. Although several datasets have been proposed for two-hand interaction analysis, none of them achieves 1) diverse and realistic image appearances and 2) diverse and large-scale groundtruth (GT) 3D poses at the same time. In this work, we propose Re:InterHand, a dataset of relighted 3D interacting hands that achieves both goals. To this end, we employ a state-of-the-art hand relighting network with our accurately tracked two-hand 3D poses. We compare our Re:InterHand with existing 3D interacting hands datasets and show its benefits. Our Re:InterHand is available at https://mks0601.github.io/ReInterHand/.
§ INTRODUCTION Humans often make two-hand interactions during daily conversation or when interacting with objects. Self-similarity, complicated articulations, and the small size of hands make analyzing such two-hand interactions greatly challenging. In particular, when the input of an analyzing system is a single image, the problem becomes much more difficult as, in most cases, most of a hand is occluded by the other hand. One fundamental direction to successfully analyze interacting hands is collecting large-scale 3D interacting hands datasets, which contain in-the-wild images and corresponding 3D groundtruth (GT). Unfortunately, this is not trivial. Due to the inherent scale and depth ambiguity, true 3D data is not obtainable from a single 2D observation. In addition, a single 2D observation does not provide enough information about other viewpoints, which is necessary for 3D data collection. Therefore, there have been three alternative approaches to collect 3D hand data.
§.§ Lab datasets Lab datasets are captured from specially designed studios with hundreds of calibrated and synchronized cameras. InterHand2.6M <cit.> is the most widely used 3D interacting hands dataset, and it is captured from a studio with 100 calibrated and synchronized cameras. Fig. <ref> (a) shows an image example of InterHand2.6M. Pros. They provide large-scale, diverse, and accurate GT 3D poses. Cons. Images have monotonous appearances. The figure shows that images have a far more limited diversity of colors, backgrounds, and illuminations than in-the-wild images.
§.§ Natural datasets Natural datasets, such as HIC <cit.> and RGB2Hands <cit.>, are captured from daily environments with a much smaller number of cameras, for example, a single RGBD camera. Fig. <ref> (b) shows an image example of HIC. Pros. As the figure shows, image appearances are close to those of in-the-wild images. Cons. The diversity and scale of such datasets are limited. Although the capture setup is much lighter than that of lab datasets, bringing the setup to diverse places and capturing there is not easy, which keeps appearance diversity limited (e.g., in front of desks). As only a few cameras are used, such datasets cannot provide accurate annotations for complicated interacting hands. Therefore, they contain only simple poses.
§.§ Composited datasets Composited datasets, such as Ego3DHands <cit.>, are a composition of hand images with random background images. The purpose of the composition is to enhance the appearance diversity of lab images or synthesized images. Fig. <ref> (c) shows an example of it. Pros. They often have accurate and diverse GT 3D poses as the composition is performed on lab datasets or synthesized datasets.
Cons. The figure shows that its image appearances are not realistic due to the light inconsistency between foreground and background.
§.§ The proposed Re:InterHand dataset All three existing approaches have their own limitations. In this work, we propose the Re:InterHand dataset, which complements all three existing dataset collection approaches. Fig. <ref> (d) shows an image example of our Re:InterHand dataset. Our dataset is constructed by rendering 3D hands with accurately tracked 3D poses and relighting them with diverse environment maps. By using accurately tracked 3D poses from our multi-camera studio, we could secure diverse GT 3D poses. For the relighting, we employ a state-of-the-art hand relighting network <cit.>, which provides diverse and realistic image appearances. The figure shows that our rendered data has appearances close to those of in-the-wild images.
§ RELATED WORKS 3D hand datasets. Tab. <ref> shows comparisons of various 3D hand datasets. Motivated by the Kinect device, early datasets consist of depth maps <cit.>. For more practical applications without requiring depth cameras, RGB-based datasets have been introduced. STB <cit.> includes sequences with simple hand poses. HIC <cit.> is one of the earliest approaches to address two-hand interactions. RHD <cit.> consists of synthetically rendered images using commercial software and composited with web-crawled background images. EgoDexter <cit.> includes sequences with simple hand-object interactions. Panoptic Studio <cit.> is captured from a specially designed dome, and it contains whole-body humans. FPHA <cit.> includes hand sequences captured from first-person viewpoints. GANerated <cit.> is synthetically generated using generative adversarial networks and composited with background images. EHF <cit.> is a small-scale dataset captured from a multi-camera studio. It includes a whole-body performance of a single subject. ObMan <cit.> includes simple hand-object interactions. It is synthetically rendered using commercial software and composited with background images. FreiHAND <cit.> is captured with a portable multi-camera setup in various places. It consists of natural images and composited images. Mueller et al. <cit.> introduced a synthetic depth map dataset of two interacting hands. YT3D <cit.> includes web-crawled videos and 3D pseudo-GT of hands. NeuralAnnot <cit.> introduced 3D pseudo-GT of hands on the MSCOCO <cit.> dataset. Both YT3D and NeuralAnnot fit a 3D hand model <cit.> to 2D joint coordinates to obtain 3D pseudo-GT. They mostly contain single-hand 3D pseudo-GT without 3D relative translation between two hands due to depth and scale ambiguity. HO3D <cit.> includes 3D hands interacting with various types of objects. RGB2Hands <cit.> introduced a small-scale 3D interacting hands dataset with 3D joint coordinate annotations without fingertips. InterHand2.6M <cit.> is a large-scale 3D interacting hands dataset, captured from a specially designed multi-camera studio. ContactPose <cit.> contains sequences of 3D hands and contact maps, generated from hand-object interactions. HUMBI <cit.> is a large-scale dataset that provides 3D whole-body annotations, captured from a specially designed multi-camera studio. DexYCB <cit.> includes large-scale 3D hands interacting with various types of objects. [Figure: t-SNE of the two-hand 3D poses of our Re:InterHand, InterHand2.6M <cit.>, and HIC <cit.>.] Ego3DHands <cit.> is a composition of rendered two-hand images with random background images.
H2O <cit.> contains two hands interacting with objects. AGORA <cit.> is rendered with 3D scans of people and scenes. Like our Re:InterHand dataset, AGORA considers light consistency between foreground and background, which makes its image appearances realistic. Ego4D <cit.> includes a huge amount of first-person viewpoint videos; however, it does not provide 3D hand annotations. DART <cit.> contains rendered images of a single hand with accessories and their texture map, alpha-blended with background images from MSCOCO <cit.>. Assembly101 <cit.> contains large-scale videos of 3D hands assembling several objects. AssemblyHands <cit.> improved Assembly101 <cit.> with a better annotation pipeline. ARCTIC <cit.> includes 3D hands and whole-body annotation with 3D objects. BlurHand <cit.> is made from a subset of InterHand2.6M <cit.>. It includes blurred hand images and corresponding GT 3D hands. Although many 3D hand datasets have been introduced, only a small number of them feature strong two-hand interactions <cit.>. Among them, InterHand2.6M <cit.> and HIC <cit.> are widely used, as RGB2Hands <cit.> has no 3D fingertip annotations and the images of Ego3DHands <cit.> are not photorealistic. Some datasets <cit.> have two-hand annotations; however, they have weak interactions between hands. Fig. <ref> shows that only HIC <cit.>, InterHand2.6M <cit.>, and our Re:InterHand have a short distance between two hands and a meaningful ratio of contacting samples. Unfortunately, none of these two-hand datasets achieves the two goals at the same time: 1) rich and realistic image appearances and 2) accurate and diverse GT 3D poses of interacting hands. Our Re:InterHand is the first dataset that achieves both goals. In addition, Fig. <ref> shows that our Re:InterHand has much more diverse 3D interacting hand poses than InterHand2.6M <cit.> and HIC <cit.>.
3D interacting hands recovery. Due to the absence of large-scale datasets, early works <cit.> are based on a fitting framework, which fits 3D hand models to geometric observations, such as RGBD sequences <cit.>, hand segmentation maps <cit.>, and dense matching maps <cit.>. InterHand2.6M <cit.> motivated many regression-based methods <cit.>. Such regression-based methods outperform the above fitting-based approaches while running in real-time. Li et al. <cit.> introduced a Transformer-based network with cross-attention between the right and left hands. Moon <cit.> presented a 3D interacting hands recovery network that addresses the domain gap between multi-camera datasets and in-the-wild datasets, which results in robust performance on in-the-wild images.
Relighting humans. Several works <cit.> have been proposed to relight faces and bodies; however, these models are not animatable. To enable relighting with animation, Bi et al. <cit.> presented a deep relightable appearance model for facial avatars. DART <cit.> provides a dataset of relighted hands; however, its images are not photorealistic as it does not consider light consistency between foreground and background. Iwase et al. <cit.> introduced an efficient neural relighting system for photorealistic hand relighting using a student-teacher framework and feature-based relighting <cit.>. We use the relighting system of Iwase et al. <cit.> due to its high-quality results and rendering efficiency.
§ DATASET CONSTRUCTION Fig. <ref> shows the overall pipeline for the construction of our dataset. It consists of two stages: capture and relight.
§.§ Capture stage   The capture stage captures hand data from our multi-camera studio. We capture data from 10 subjects, as shown in Fig. <ref>. Two types of sequences, peak poses and range of motion, are captured following InterHand2.6M <cit.>. The peak pose is a sequence, which includes a transition from a neutral pose to a pre-defined pose and then transition back to the neutral pose.The purpose of the peak pose is to capture as diverse poses as possible including extreme poses and maximal finger bent.The range of motion is a sequence, which includes natural hand motion driven with minimal instructions, such as waving hands as if friends are coming over.In this way, we could capture both 1) diverse poses from the peak pose sequences and 2) natural hand motion from the range of motion sequences. We provide more image and pose examples of our dataset in the supplementary material. Capture studio. Our capture studio has 469 lights and 170 calibrated synchronized cameras. All cameras lie on the front, side, and top hemispheres of the hand and are placed at a distance of about one meter from it. Images are captured with 4096 × 2668 pixels at 90 frames per second (fps). Following Bi et al. <cit.>, we interleave fully lit frames and partially lit frames at every 3 frames. The capture stage only uses fully lit frames, and the relight stage uses partially lit frames to train the relighting network. 2D joint coordinates and 3D scans. We process the raw video data by performing 2D joint detection <cit.> and 3D scan <cit.>.The 2D joint detector is trained on our held-out manually annotated dataset, which includes 900K images with rotation center coordinates of hand joints, where our manual annotation tool is similar to that of Moon et al. <cit.>. Our 2D joint detector has an error of 2.5 pixels in a 1024×667 image space.3D joint coordinates. InterHand2.6M <cit.> triangulated detected multi-view 2D joint coordinates with the RANSAC algorithm. We found that their approach suffers from temporally inconsistent results as the triangulation does not take into account the similarity between close frames. For example, some joints could have inconsistent semantic positions across viewpoints due to the failure of the 2D joint detector. In this case, triangulated 3D coordinates of such joints could be very different between close frames if selected viewpoints by RANSAC are different. Instead of triangulation, we train a 3D joint detection network, which takes a voxelized 3D scan of hands and is supervised with multi-view 2D joint coordinates. Our network produces much more temporally consistent and smooth results as inputs of close frames (i.e., voxelized 3D scans) are almost the same. The network is designed with V2V-PoseNet <cit.>, a state-of-the-art 3D joint detection network from voxelized hands. First, we make two volumes from 3D scans by making 3D bounding boxes around the mean of initially obtained left and right hands' 3D joint coordinates, where the initial ones are obtained with the RANSAC algorithm. Then, we voxelize 3D scans around each 3D bounding box to (96,96,96) resolution. The voxelized 3D scans are passed to the V2V-PoseNet, which consists of stacked 3D convolutional layers. We perform soft-argmax <cit.> to the output of the V2V-PoseNet, which produces 3D joint coordinates in a differentiable way. The obtained 3D joint coordinates are supervised with multi-view 2D joint coordinates by projecting the 3D ones to each viewpoint and calculating L1 distance from the 2D ones. 
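The following sketch illustrates the two ingredients just described: a differentiable soft-argmax over the V2V-PoseNet output volume and a multi-view reprojection L1 loss against the detected 2D joints. The tensor shapes, the projection helper, and the (omitted) mapping from voxel units back to metric space are simplifying assumptions on our part, not the authors' implementation.

import torch

def soft_argmax_3d(heatmaps):
    # heatmaps: (B, J, D, H, W) scores per joint over the voxelized hand volume
    B, J, D, H, W = heatmaps.shape
    probs = torch.softmax(heatmaps.reshape(B, J, -1), dim=-1).reshape(B, J, D, H, W)
    zs = torch.arange(D, dtype=probs.dtype, device=probs.device)
    ys = torch.arange(H, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(W, dtype=probs.dtype, device=probs.device)
    # expected coordinate along each axis (differentiable)
    z = (probs.sum(dim=(3, 4)) * zs).sum(-1)
    y = (probs.sum(dim=(2, 4)) * ys).sum(-1)
    x = (probs.sum(dim=(2, 3)) * xs).sum(-1)
    return torch.stack([x, y, z], dim=-1)      # (B, J, 3) joint positions in voxel units

def project(joints_cam, K):
    # joints_cam: (B, J, 3) in a camera frame; K: (B, 3, 3) intrinsics
    uvw = torch.einsum('bij,bkj->bki', K, joints_cam)
    return uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-6)

def multiview_l1_loss(joints_world, cameras, joints_2d):
    # joints_world: (B, J, 3) joints after converting voxel units to metric space (conversion omitted here);
    # cameras: list of (R, t, K) per calibrated view; joints_2d: (V, B, J, 2) detected 2D joints per view
    loss = 0.0
    for v, (R, t, K) in enumerate(cameras):
        joints_cam = torch.einsum('bij,bkj->bki', R, joints_world) + t[:, None]
        loss = loss + (project(joints_cam, K) - joints_2d[v]).abs().mean()
    return loss / len(cameras)

Because the supervision is shared across all views and the voxelized inputs of neighboring frames are almost identical, this kind of loss naturally yields the temporally consistent joints described above.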
We train V2V-PoseNet on all frames, which takes 1 day, and test it on the same frames to obtain their 3D joint coordinates. Our obtained 3D joint coordinates have an error of 2.0 mm. The errors are measured against our held-out human-annotated set. [Figure: Comparison of 3D hand model fits from (a) triangulation of InterHand2.6M <cit.> and (b) our V2V-PoseNet. The three frames are consecutive, and the time difference between neighboring frames is 0.02 seconds. Given the very short time difference between frames, the three frames should have almost the same 3D hands. (a) not only suffers from collisions but also from temporal inconsistency between very close frames. On the other hand, (b) does not suffer from collisions and achieves temporal consistency between close frames.]
3D hand model fitting. We additionally obtain 3D meshes of hands as 1) they provide useful surface information that does not exist in the 3D joint coordinates and 2) they are inputs of the relighting network <cit.>. To this end, we fit 3D hand models, such as MANO <cit.>, to the obtained 3D joint coordinates and 3D scans using NeuralAnnot <cit.>. The 3D hand model is a parametric model that produces 3D hand meshes from 3D pose and identity (ID) codes. The 3D pose represents 3D joint angles, and the ID codes determine the 3D hand shape, such as thickness, in the zero pose. NeuralAnnot takes a single image and 3D joint coordinates as inputs and outputs 3D pose and ID codes, used to drive 3D hand models. We use the network architecture of Pose2Pose <cit.> for NeuralAnnot. The 3D pose and ID codes are supervised with the 3D joint coordinates after performing forward kinematics. Also, 3D meshes from the 3D pose and ID codes are supervised with 3D scans by minimizing the closest distance between 3D meshes and 3D scans. Several regularizers are applied as well, such as 1) L2 regularizers on the 3D pose and ID codes, which prevent extreme meshes, and 2) a collision avoidance regularizer. We separately train NeuralAnnot for each subject, and the ID code is directly optimized, not regressed from the input image and 3D joint coordinates. In this way, 3D hands from the same subject have a consistent ID code. Training NeuralAnnot takes less than 1 hour for each capture. After training NeuralAnnot, we test it on the training set and manually inspect all frames. Frames with wrong fitting results are excluded from the following relight stage. Fig. <ref> shows that it produces temporally consistent results. We checked that the MANO meshes from NeuralAnnot have an error of 1.3 mm with respect to the 3D scans without any translation/rotation/scale alignment.
§.§ Relight stage   After capturing data in the above capture stage, we train a relighting network <cit.> for each subject following their original training strategy. Following them, we train the relighting networks on single-hand data, as 3D hand model fittings are more accurate for single-hand data than for two-hand data, which makes training the relighting network more stable. Please note that the single-hand data used to train the relighting networks are also obtained from NeuralAnnot by training and testing it on single-hand captures. For more details, please refer to Iwase et al. <cit.>. After training the relighting networks, we use the 3D poses from NeuralAnnot <cit.> of the above capture stage to render two hands with specified camera parameters. For illuminations, we use 2144 high-resolution environment maps of Gardner et al.
<cit.>.§ DATASET RELEASE Our Re:InterHand dataset includes 1) relighted images, 2) non-binary masks, and 3) 3D hand model fittings, as shown in Fig. <ref>. The relighted images and non-binary foreground masks are from Sec. <ref>, and 3D hand model fittings are from Sec. <ref>. Out of 10 captures, we split 7 captures for the training set and the remaining 3 captures for the testing set. Relighted images. To render relighted images, we first sample cameras out of our 170 cameras for each capture to make overall rendering faster and remove redundancy. To sample cameras, we sum 2D joint detection confidence from Sec. <ref> for each camera. Then, we pick the top 50 cameras based on the sum of confidence values. In this way, we can exclude cameras where hands are almost not visible. The farthest iterative sampling algorithm samples N cameras from the selected 50 cameras based on the camera positions to obtain as diverse viewpoints as possible. For the frame-based research, we downsample captures at 5 fps and set N=20. Then, we render images with a different environment map for each frame, which results in 493K images. Also, for the video-based research, we set N=5 and render images at 30 fps with a different environment map for each segment, which results in 739K images. For both frame-based and video-based split, images with the same frame index and different viewpoints are rendered from the shared environment map in a multi-view consistent way.One advantage of our approach is that we can render images with any novel camera parameters. In addition to the above pre-defined 3rd-person viewpoints, we also render relighted images from random egocentric viewpoints to contribute our Re:InterHand to the egocentric 3D hand community. To this end, we first manually put a reference camera in the middle of two eyes using 3D scans that include both hands and a face. The orientation of the reference camera is set to see the center of the hands. Then, for each frame, we randomize 3D camera positions within 20 cm of a 3D box around the reference camera. We also randomize the 3D orientation of the camera by applying [-30,30] pitch, yaw, and roll. The principal point is set to the image center, and the focal length is randomized from [0.7,1.8] times the image size. To simulate the fisheye cameras, often used for egocentric viewpoints, we randomize distortion of the fisheye cameras by pre-defined mean and standard deviation. For the frame-based research, we render images with a different environment map for each frame at 30 fps, which results in 148K images. Also, for the video-based research, we render images with a different environment map for each segment at 30 fps, which results in 148K images. For each peak pose sequence, we exclude frames at the first and last segment whose velocity of hands is less than a threshold to remove many neutral pose frames. Both 3rd-person and egocentric viewpoints images are rendered in 1K resolution.Non-binary masks. We provide non-binary masks, obtained from the relight stage. The non-binary mask is different from binary masks rendered from MANO fittings as the non-binary ones are perfectly aligned with images including detailed silhouettes, such as nail and muscle bulging.3D hand model fittings. We provide MANO <cit.> fitting as it is the most widely used 3D hand model in the community. 
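As an illustrative aside, the egocentric camera randomization described above (a reference camera between the eyes, positions jittered within a 20 cm box, orientations perturbed by up to ±30° in pitch/yaw/roll, and focal lengths drawn from [0.7, 1.8] times the image size) can be sketched as follows. The helper name, our interpretation of the 20 cm box, and the fisheye distortion values are assumptions for illustration, not the dataset's actual generation code.

import numpy as np
from scipy.spatial.transform import Rotation as R

def sample_egocentric_camera(ref_R, ref_t, image_size=1024, box_cm=20.0, rng=None):
    """Randomize one egocentric camera around a reference pose (rotation ref_R, position ref_t in cm)."""
    rng = rng or np.random.default_rng()
    # position: uniform inside a 20 cm box centered on the reference camera (our reading of the paper's "20 cm")
    t = ref_t + rng.uniform(-box_cm / 2.0, box_cm / 2.0, size=3)
    # orientation: random pitch/yaw/roll offsets in [-30, 30] degrees
    offsets = rng.uniform(-30.0, 30.0, size=3)
    Rmat = ref_R @ R.from_euler('xyz', offsets, degrees=True).as_matrix()
    # intrinsics: principal point at the image center, focal length in [0.7, 1.8] x image size
    f = rng.uniform(0.7, 1.8) * image_size
    K = np.array([[f, 0.0, image_size / 2.0],
                  [0.0, f, image_size / 2.0],
                  [0.0, 0.0, 1.0]])
    # fisheye distortion coefficients drawn around pre-defined means (placeholder values)
    dist = rng.normal(loc=[0.1, 0.02, 0.0, 0.0], scale=[0.05, 0.01, 0.005, 0.005])
    return Rmat, t, K, dist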
Also, we provide the 3D hand model fittings used to render the relighted images.
§ EXPERIMENTS For all experiments, we report the right hand-relative vertex error (RRVE), the Euclidean distance (in mm) between estimated and GT 3D meshes of two hands after aligning the translation of the right hand's root joint (i.e., wrist). Note that the most widely used metric of previous works <cit.> (MPVPE) is calculated after aligning the translation of the right and left hand separately; hence, their MPVPE does not consider the relative position between the two hands, while our RRVE does. For the 3rd-person viewpoint experiments, we report RRVE on the test split of InterHand2.6M (H) <cit.>, HIC <cit.>, and the test split of our Re:InterHand. For the egocentric viewpoint experiments, we report RRVE on the test split of our Re:InterHand after training the methods on its training split. For all experiments, the frame-based split of Re:InterHand is used. For all datasets, the errors are calculated only for two-hand samples.
Effectiveness of the relight stage. Tab. <ref> shows the effectiveness of the relight stage. It is noteworthy that our relight stage greatly reduces the error on HIC, which consists of real and natural images. Please note that the images of HIC are entirely novel ones, as they are used only for testing. Our relight stage also significantly reduces the test error on our Re:InterHand test set while slightly reducing errors on InterHand2.6M. As both InterHand2.6M and data from the capture stage consist of lab images, the data from the capture studio (the second row of the table) reduces the error on InterHand2.6M the most. However, it could not improve the test results on HIC with real and natural images, as its image appearances are far from those of real images, as shown in Fig. <ref> (a). The variants with composition (the third and fourth rows) make the performance on HIC worse than the baseline (the first row). We think the reason is that their images have inconsistent light between foreground and background, as shown in Fig. <ref> (b) and (c). For more harmonious images, we applied AdaIN <cit.> to the raw RGB pixels of the foreground to make them follow the distribution of the background pixels. Unfortunately, as it is not aware of reflections, it often changes hand colors to unrealistic ones instead of preserving skin colors and only changing the lighting, which results in performance degradation on HIC with real and natural image appearances.
Benchmark. Tab. <ref> and <ref> provide benchmark results with IntagHand <cit.> and InterWild <cit.>, state-of-the-art 3D interacting hands recovery methods. We use their official checkpoints, and GT hand boxes are used for IntagHand as it assumes them.
§ CONCLUSION Summary. We present the Re:InterHand dataset, which provides images with highly realistic and diverse appearances of interacting hands and their corresponding GT 3D hands. To this end, our accurately tracked 3D poses, a state-of-the-art relighting network <cit.>, and a number of high-resolution environment maps are used. We hope our dataset can bring the community one step closer to 3D interacting hands recovery in the wild. Limitations. As Fig. <ref> shows, our rendered images are cut at the forearm area. This is because our relighting network only takes a 3D hand geometry, not a whole-body one. We think this is not a severe issue, as most 3D hand analysis systems take cropped hand images produced by hand detectors, and hand detectors can be trained on large-scale real datasets with only 2D annotations.
Also, we observed that there are sometimes artifacts in the relighted images. This is because the relighting network is trained on single-hand data and tested on two-hand data, which sometimes results in pose generalization failure. We expect that a better relighting network could alleviate this issue.
Supplementary Material of "A Dataset of Relighted 3D Interacting Hands" In this supplementary material, we provide more experiments, discussions, and other details that could not be included in the main text due to the page limit. The contents are summarized below: * Pose examples of Re:InterHand * Privacy
§ POSE EXAMPLES OF RE:INTERHAND Fig. <ref> and <ref> show pose examples of our Re:InterHand dataset.
§ PRIVACY All data captures are performed after obtaining signatures from subjects on Meta's consent form. Our Re:InterHand dataset does not have personally identifiable information or offensive content.
http://arxiv.org/abs/2310.17768v1
{ "authors": [ "Gyeongsik Moon", "Shunsuke Saito", "Weipeng Xu", "Rohan Joshi", "Julia Buffalini", "Harley Bellan", "Nicholas Rosen", "Jesse Richardson", "Mallorie Mize", "Philippe de Bree", "Tomas Simon", "Bo Peng", "Shubham Garg", "Kevyn McPhail", "Takaaki Shiratori" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231026202650", "title": "A Dataset of Relighted 3D Interacting Hands" }
ArcheType: A Novel Framework for Open-Source Column Type Annotation using Large Language Models

0000-0002-7938-542X bf996,yurong.liu,chinmay.h,[email protected] New York University, 2 MetroTech Center, 10th floor, New York, New York 11201

Existing deep-learning approaches to semantic column type annotation (CTA) have important shortcomings: they rely on semantic types which are fixed at training time; require a large number of training samples per type and incur large run-time inference costs; and their performance can degrade when evaluated on novel datasets, even when types remain constant. Large language models have exhibited strong zero-shot classification performance on a wide range of tasks and in this paper we explore their use for CTA. We introduce ArcheType, a simple, practical method for context sampling, prompt serialization, model querying, and label remapping, which enables large language models to solve CTA problems in a fully zero-shot manner. We ablate each component of our method separately, and establish that improvements to context sampling and label remapping provide the most consistent gains. ArcheType establishes a new state-of-the-art performance on zero-shot CTA benchmarks (including three new domain-specific benchmarks which we release along with this paper), and when used in conjunction with classical CTA techniques, it outperforms a SOTA DoDuo model on the fine-tuned SOTAB benchmark. Our code is available at <https://github.com/penfever/ArcheType>.

§ INTRODUCTION
Motivation. The goal of semantic column type annotation (CTA) is to associate each column of a relational table with one among several pre-defined semantic types that go beyond atomic types such as string, integer, or Boolean. CTA is a useful computational primitive in numerous settings, including data cleaning, where detection, correction, and transformation are performed using rules based on data types <cit.>, and schema matching for data discovery, where the semantic type can be used to constrain the search for matching attributes <cit.>. Beyond being useful from a computational standpoint, efficient methods for CTA can also enable democratization of access to large, well-curated datasets by reducing labeling costs.
Learning-Based CTA.
Recent papers have employed learning-based techniques to perform CTA on new, unseen columns <cit.>. Such techniques may train from scratch on large training corpora of columns, annotated with their semantic types, or they may be fine-tuned from pre-trained transformer-based language models (LMs) such as BERT <cit.> for the specific task of CTA <cit.>. Learning-based approaches have been shown to be effective on a broad range of types for which there exists sufficient training data. For example, Sherlock <cit.> was trained on over 675,000 columns retrieved from the VizNet corpus to recognize 78 semantic types from DBpedia <cit.> such as album, city, plays, or birth place. However, these approaches exhibit important limitations.
(1) Recent papers have shown that the vast majority of standard deep computer vision models for image classification perform significantly worse under so-called distribution shifts. Distribution shift is measured on new test datasets acquired from different sources which retain the same labels as the original test set <cit.>. Suppose we fix a given column type , and suppose our pre-training distribution is sourced from NYC Open Data <cit.>. Then we might see entries like , , , which are locations in New York City. But if we use a model trained on this data to perform CTA on a dataset for a different country, for example, it is unlikely to assign the label to and , which are locations in Rio de Janeiro. In 2020, Taori et al. conducted a meta-analysis of 204 ImageNet models under 213 different test conditions and found that most current techniques provided little to no robustness <cit.>. Subsequent large-scale studies found similar effects on several other image classification datasets <cit.>. Although recent analyses have shown that it is possible to reduce the robustness gap by increasing dataset size, carefully considering architecture, and modifying pretraining objectives, as of 2023 the only image classification models in the literature which have been shown to retain more than 90% of their ImageNet robustness under shift, on average, are OpenAI's CLIP, trained on over 400 million samples, and variants of it trained on equally massive datasets <cit.>. Since closed-set deep learning models for column type annotation utilize a conventional pretraining objective, we posit that the phenomenon of distribution shift may occur in them, even when their column types match closely. As a simple empirical validation of this problem, we compare the performance of the fine-tuned DoDuo model on SOTAB to a DoDuo model which has been pre-trained on VizNet alone (reusing labels from the VizNet label set where possible), and find that performance declines by over 60% (from 84.8% to 23.8%).
(2) Existing learning-based models require that label sets be specified at training time and kept frozen henceforth. However, real-world data is vast, and pre-trained labels rarely map cleanly to categories of interest in newly-encountered datasets. In many scenarios, datasets do not have a schema that fits neatly into these pre-trained types. Consider the NYC Open Data repository <cit.>, which contains thousands of datasets published by NYC agencies, including NYC-specific semantic types such as public schools, agencies, parks and boroughs.
As point of reference regarding the specificity of this collection, <cit.> computed the overlap between the contents of datasets in NYC Open Data and word vectors trained with GloVe (which uses Wikipedia as a source) and found that Glove covers only 8% of the terms in the collection.Existing ontologies and taxonomies such as DBpedia <cit.> define generic types that encompass the NYC-specific types. For example, a high school can be classified as , but this semantic type includes many institution types that are not public schools, including colleges, medical centers and libraries. If we use this semantic type to find tables to augment information about NYC high schools, many irrelevant tables would be retrieved. Training a model to recognize new types is both time-consuming and costly as it requires the acquisition of labeled data and the creation of new deep neural network models. This can severely limit the applicability of learning-based approaches <cit.> to long-tail, rare types, which in turn can negatively affect downstream applications.(3) The volume of training data required by modern CTA models is substantial. Besides the earlier example of Sherlock, we note that over 397,000 tables were used for training versions of the current state-of-the-art DoDuo <cit.>. This imposes high data cleaning and labeling costs which can be oppressive, particularly for infrequent classes.Using LLMs for CTA.As a silver lining, the recent dramatic advances in generative large language models (LLMs) open the opportunity to address these challenges and create robust models for a broad set of semantic types without requiring large volumes of labeled data.LLMs are trained over a very large and diverse corpus and they are thus able to accumulate knowledge that covers a plethora of semantic types. Furthermore, LLMs have the capability to perform in-context learning, where the label set can be specified as user-defined context during inference time; this opens the possibility to perform open-set classification even for rare types. When presented with the text ,GPT-3.5-Turbo learns in-context that it is being asked to perform classification, asserts that it is a , and follows up by producing a list of other high schools. This capability makes it possible to do either zero-shot CTA, or to generate labels that can be used to fine-tune models for domain-specific types.LLMs have also been shown to perform much better than other learning-based models under distribution shift <cit.>, opening the possibility for the creation of robust CTA models.Our contributions. In this paper, we take several steps towardsestablishing the effectiveness and limitations of LLMs for CTA.We discuss the challenges involved in using LLMs for CTA and systematically delineate the different components required to perform CTA using LLMS: sampling the data context, prompt serialization, model querying, and label remapping (illustrated in fig:llmcta). We propose novel methods for these components and assess their effectiveness. We also explore the impact of these components on two different modes of operation: (a) using existing LLMs for zero-shot CTA and (b) fine-tuning LLMs for CTA based on a training set of labeled column types.For both modes of operation, we report a series of results for open-source LLMs. As a basis of comparison we also study and report the performance of a closed-source LLM (GPT-3.5-Turbo). 
However, we emphasize open-source LLMs in our work, since closed-source models are not transparent: since we do not know how they were constructed, it may be difficult to understand their behavior. Moreover, many closed-source LLMs are constantly being updated and the reported results are not reproducible. We perform a detailed evaluation of our approach against the state-of-the-art systems DoDuo <cit.> and Turl <cit.> using the SOTAB benchmark, which was designed for comparing the performance of annotation systems on CTA tasks <cit.>. SOTAB consists of 162,351 columns (obtained from 107,927 tables) covering 91 distinct labels from Schema.org <cit.>. However, like other benchmarks for CTA <cit.>, SOTAB includes only well-known types defined in widely-used ontologies and taxonomies. To explore the breadth of LLM subject knowledge as well as how LLM-based CTA performs for a wide range of types (including rare, domain-specific types with novel characteristics), we create three new benchmark datasets for CTA, described in sec:new-benchmarks. Our main contributions can be summarized as follows:* We introduce ArcheType, an open-source framework to CTA centered around large language models, which leverages their strengths, adapts to their limitations, and is compatible with both open-source and closed-source LLMs.* We enumerate four essential components for any LLM-based CTA (LLM-CTA) approach: sampling, serialization, querying, and label remapping. We propose new approaches for context sampling and label remapping, and demonstrate their importance to the overall accuracy of LLM-CTA(sec:archetype-method).* We introduce three new zero-shot CTA benchmarks that cover a wide range of domain-specific schemas and attribute types (sec:new-benchmarks).* Through a detailed experimental evaluation (sec:experiments), we show that ArcheType achieves strong fine-tuned performance and state-of-the-art zero-shot performance on a large and diverse suite of benchmarks, while requiring far less tabular data for both training and inference than existing methods (sec:main-results).§ BACKGROUND: FOUNDATION MODELS The term foundation model applies to large machine learning models that are pre-trained on vast amounts of raw data to capture a wide range of knowledge, and then fine-tuned on more specific tasks or datasets <cit.>. In the case of large language models (LLMs), the pre-training objective is autoregressive; the model is tasked with predicting the next word in a sequence based on the context provided by the preceding words. The scale of LLMs results in new emergent capabilities, and their effectiveness across a multitude of tasks incentivizes the use of foundation models as a starting point (or replacement) for fine-tuning task-specific models. However, this last step must be done with care since the defects of the foundation model are inherited by all the adapted models downstream <cit.>.§.§ LLMs and Tabular Data The development of LLMs has largely been driven in the context of NLP tasks as question-answering, logical inference, and word disambiguation.Recent efforts based on instruction-following, such as <cit.> and <cit.>, have demonstrated that fine-tuning foundational LLMs on a carefully curated corpus of prompt-response pairs is an effective strategy for more generic classification tasks. 
However, these approaches focus on datasets that have small label sets, clean labels, balanced classes, and have largely been focused on natural language classification.There have been only a handful of attempts to apply LLMs to tasks that are germane to tabular data. Recently, <cit.> proposed a LLM-based framework for few-shot classification of tabular data and experimented with different strategies to design the prompt. They showed that their approach can outperform state-of-the-art (SOTA) neural models both in the zero- and few-shot settings.<cit.> outline a vision for leveraging LLMs for data management tasks and show that LLMs using few-shot and zero-shot approaches can achieve SOTA performance for entity matching, data imputation, and error detection.§.§ LLMs for Zero-Shot CTAAs discussed in sec:intro, LLMs present new opportunities to derive robust models for CTA that can handle a broad set of classes at a much lower cost than existing learning-based methods.Two recent approaches have been proposed that leverage LLMs for CTA <cit.>. They use OpenAI's GPT and perform CTA under a zero-shot regime. These methods do not require model training, and apply arbitrary open-vocabulary labels, either from parametric memory <cit.>, or from a set of options provided at test time <cit.>.The promise of such a direction is clear, but existing implementations have important limitations.Both <cit.>, which are to the best of our knowledgethe only existing works on zero-shot CTA, rely on closed-source models (see discussion below).They also require access to the entire table at test time to achieve their best performance, which in practice can be expensive for private models. tab:sampling-cost shows the cost to evaluate the 15,040 column test set of the SOTAB dataset (assuming sampling with replacement). It costs over $750 to apply table-at-once methods to SOTAB, and over 25% of the prompts exceed the maximum possible context window. The cost is also high for column-at-once methods when a large sample is used—for 1,000 samples the cost is over $1,000.Since these methods are highly sensitive to sample size, it is important to devise strategies that are sample-efficient. However, only simple random sampling and first-k-rows sampling methods have been explored for LLM-based CTA.Note that while these methods are costly on closed-source models, they can be impractical on open-source models, owing to their limited context windows.§.§ Open vs. Closed-Source LLMsWe consider a model open-source if, and only if, sufficient specifics of model design have been published to reproduce the architecture, checkpoints with pre-trained weights have been releasedand the contents of the pre-training corpus are available for inspection. Any model which is not open-source, we consider to be closed-source. Three clear advantages of utilizing open-source models are explainability, reproducibility and cost.But these come at the cost of performance and a limited context length.Explainability. The architectures of most closed-source models are not known to the public; nor is it known how much prompt engineering and behind-the-scenes modification of the model output is being conducted. The specifics of the data on which these models are trained is also unknown. These facts make it difficult to provide rigorous explanations of the behavior of closed-source models.Reproducibility. 
As noted recently, results from closed-source models are non-reproducible, non-deterministic, and cannot be ablated with respect to the model architecture or potential data contamination, all of which makes them unreliable for rigorous scientific research <cit.>. Cost. As closed-source models charge by the token, there is a cost incurred by any solution which relies on them (see tab:sampling-cost). Performance. As of this writing, the best open-source models underperform the best closed-source models across a wide range of benchmarks <cit.>. The causes of this performance gap are not fully understood, as large language models tend to exhibit unpredictable phase transitions as a function of scale. These transitions can lead to sudden leaps in performance on standard benchmarks <cit.>. Context length. The open-source large language models in common use at the time of this paper have context windows ranging from 512 to 2048 tokens <cit.> (typically between 375 and 1500 words, if the string is English). If the string is in a different language or is largely numeric, however, the tokenization process tends to be approximately 2–4 times less efficient, since standard tokenization schemes employed by such models tend to handle unicode inefficiently <cit.>. Both phenomena are common in real-world tabular data. Closed-source models are less constrained (GPT-3.5 allows over 16,000 tokens at the time of this writing).
§ ARCHETYPE: METHODS AND SYSTEM In this section, we motivate and describe the core components of our method. fig:llmcta provides a high-level overview. We begin by formalizing the problem of LLM-CTA.
Formal Model of LLM-CTA. Consider a table T with t columns and r rows. We denote each column C ∈ T as a function which maps row indices to strings; i.e., for 0 ≤ i < t, we have C_i : ℕ→Σ_*, where i is the column index. Here, Σ_* is the set of all possible strings, Σ_C_i is the set of all strings found in column C_i, Σ_C_i⊂Σ_* ∀ i, with any individual string σ∈Σ_C_i. We make no further assumptions; C_i may include a column name, and T may contain an additional metadata field. However, neither of these properties is required to exist, and so we do not include them in our analysis. Many of our methods rely on a sample of unique values sampled from the column, U_i := (|Σ_C_i|). We explore two LLM-based approaches for CTA: fine-tuned and zero-shot. Let L denote a label set of strings with cardinality j. Given the above definitions, we define fine-tuned CTA ⊂ T × L as a relation between tables and labels: ∀ C_i, ∃ L_j such that (C_i, L_j) ∈ CTA. We seek a generative method M : Σ_* →Σ_* that comes closest to satisfying the following properties: M(σ, L) = σ_L with σ_L ∈ L, and ∀ C_i, CTA(C_i, L_j) = σ_L_j. The definition of zero-shot LLM-CTA is identical to that of fine-tuned, save for a few small differences. In a zero-shot setting, the number of rows r is presumed to be small enough to preclude the possibility of fine-tuning a model. L is chosen at test-time. In zero-shot, it is also possible to define multiple values of L for one T.
§.§ Elements of LLM-CTA Methods We observe that any LLM-CTA method must provide solutions to four problems: context sampling, prompt serialization, model querying, and label remapping. Individually, each is necessary for LLM-CTA; collectively, they are sufficient. By considering and ablating approaches to each of these problems separately, we designed ArcheType, an LLM-CTA framework which generalizes to a wide range of architectures, including popular open-source models.
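Before detailing each component, the following type-level sketch shows how the four problems compose into a single zero-shot annotation call; the function names and signatures are illustrative placeholders, not the released ArcheType API.

from typing import Callable, List

def annotate_column(values: List[str], labels: List[str],
                    sample: Callable[[List[str], int], List[str]],
                    serialize: Callable[[List[str], List[str]], str],
                    query: Callable[[str], str],
                    remap: Callable[[str, List[str]], str],
                    phi: int = 5) -> str:
    """Zero-shot CTA for one column: sample context, build a prompt, query the LLM, and map the raw answer onto the label set."""
    context = sample(values, phi)            # context sampling
    prompt = serialize(context, labels)      # prompt serialization
    raw_answer = query(prompt)               # model querying
    return remap(raw_answer, labels)         # label remapping

Each of the four callables is discussed, and ablated, in the corresponding subsections below.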
fig:llmcta provides an overview of ArcheType and in the remainder of this section, we describe its components in detail.Context Sampling. As of this writing, all SOTA large language models (LLMs) are transformer-based <cit.>. By design, transformers have a hard scaling limit over which their dense attention can be applied, sometimes called a context window, W. Given a context C and a set of labels L, if |C| + |L| > W, a representative sample must be selected. From a practical standpoint, the context window sizes of contemporary LLMs are small enough that this event takes place quite frequently, e.g.,<cit.> and <cit.> use simple random sampling and first-k-rows sampling, respectively. We introduce a new sampling method in sec:context-sampling and provide ablation studies in sec:ablations-context-sampling.Prompt Serialization. SOTA LLMs require prompts, or priors, to complete. Prompt serialization (or prompt engineering) is the process of transforming raw context into a prompt. Of the four components we consider here, this one has received the most attention in the existing literature; the methods introduced by <cit.> are largely focused on improvements to prompt serialization. In sec:ablations-prompt-ser, we ablate prompt serialization, independent of other components, and conclude prompt engineering should be treated as a hyperparameter rather than as a methodological contribution – we describe this approach in sec:prompt-ser. When considering a range of model architectures, we find that any reasonable serialization method is about as likely to produce a good result as any other.Model Querying. Model selection and querying is another important element of LLM-CTA. The method must correctly submit a query to some large language model(s) chosen in advance, and it must retrieve and process the response. This query may be processed on a local machine or via an API. This, too, has not been ablated in prior work. While future work may attempt to train a generative large language model from scratch specifically for this task, <cit.> use GPT, and only GPT. As part of our study, we present ablations on architectures across a range of open-source models as well as GPT (sec:ablations-model-querying) and find that no single model dominates zero-shot performance.Label Remapping. All LLMs sometimes produce responses which do not match with any of the labels provided in the prompt, i.e., σ_L ∉ L. Label remapping is a form of error correction which remaps an unbounded LLM output space to a limited set of labels. <cit.> use an embedding-based method called anchoring to remap labels, whereas <cit.> use a dictionary lookup. As the latter approach is not compatible with zero-shot LLM-CTA, we ablate only the former approach, along with two other baselines, and develop CONTAINS+RESAMPLE (sec:label-remap), an algorithm which outperforms the baselines across model architectures. We ablate our choice of remapping method in sec:ablations-label-remapping. §.§ Context SamplingCTA approaches using deep learning face severe data requirement challenges in settings that require (very) large tables and open label sets. To address these challenges, we introduce a new approach which we call context sampling and outline in alg:context_sampling. Given the unique values of a target column U_i and a target sample size ϕ, we seek to construct the representative sample S that best summarizes the column. 
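A minimal sketch of this step is shown below: it draws ϕ values from the unique column values U_i, without replacement whenever enough unique values exist, with probabilities proportional to an importance function f (the weighting scheme and our default choice of f, string length, are defined in the following paragraphs). The helper is our simplified rendering of alg:context_sampling, not its exact implementation.

import numpy as np

def context_sample(unique_values, phi, importance=len, rng=None):
    """Draw a representative sample S of size phi from the unique values U_i of a column."""
    rng = rng or np.random.default_rng()
    weights = np.array([float(importance(v)) for v in unique_values], dtype=float)
    weights = np.maximum(weights, 1e-9)          # guard against all-zero weights (e.g., empty strings)
    probs = weights / weights.sum()              # P(sigma) = f(sigma) / sum_j f(sigma_j)
    replace = len(unique_values) < phi           # sample with replacement only if U_i is too small
    idx = rng.choice(len(unique_values), size=phi, replace=replace, p=probs)
    return [unique_values[i] for i in idx]

The same helper can be reused with a different importance function, for instance one that up-weights values containing a target class name, as done for the American Stories benchmark mentioned below.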
While it is possible in LLM-CTA to have ϕ vary by column, in this paper we consider the setting where ϕ is fixed in advance and consistent across all columns. In the simplest case, we have |U_i| ≥ ϕ, and S is drawn without replacement from a distribution whose construction is described later in this section. If |U_i| < ϕ, then S is drawn with replacement instead. In the fine-tuned setting, we find it is beneficial to add more features to the context window, affecting both sampling and serialization. The features we utilize are described later in this section, and are sampled as described in alg:context_sampling. The context sample is then serialized and embedded into a prompt which is passed to the LLM, the format of which follows recent works such as <cit.> and <cit.>.
Context Sampling in ArcheType. The probability distribution over U_i from which we sample is weighted according to an importance function f. The probability of selecting an element σ from U_i under P_U_i is given by: P(σ) = f(σ)/∑_{σ_j ∈ U_i} f(σ_j). We consider multiple importance functions, depending on the dataset in question. Our standard f is string length, as we find that long strings are more likely to contain useful information than short ones. For the American Stories (amstr) benchmark described in sec:new-benchmarks, we find that an importance function which prioritizes samples that include any target class name is more effective. There are several challenges involved in the implementation of context sampling. These include low-variance (degenerate) data, |U_i| ≪ o(1); high-variance data, |U_i| ≫ ϕ; and duplicate columns, where for some index j over columns Σ_C_i = Σ_C_j. Each of these situations merits discussion. High variance. In this case, helpful context may be lost in a limited sample. This phenomenon may explain why increasing the size of the context sample tends to improve model performance. However, the improvements are slight, suggesting an exponential scaling of data demands. Low variance. CTA can easily become unsolvable for low-variance or, in the extreme case, degenerate columns. Consider a column C_d in which every value is the string "0", i.e., ∀ k, the k-th element of U_i equals "0", and a label set . There exists no unique σ_L_j such that CTA(C_d, L_j) = σ_L_j. In some cases, we find that incorporating additional metadata (such as the filename of the table) can help with the classification task, but in other cases, we found that it simply biases the LLM to parrot back portions of the input string.
Feature Selection. In context sampling, feature selection refers to which aspects of the original data we choose to include in the context. In all of our experiments, our first feature is CS, the context sample itself. We also experiment with including the file name (FN) of the table, used by <cit.>, summary statistics (SS), used by <cit.>, and samples from other columns (OC), used by <cit.>. Summary statistics (SS). Our method for SS feature selection is as follows: * We select statistics which support fast, accurate sketching. * We select measures of center and spread which can provide additional information about missing column values. The list of summary statistics included in our fine-tuned models was: standard deviation, average, mode, median, max, min. When the summary statistic is a floating-point value, we round it to two decimal places. When it is an integer, we exclude the decimal place. When all sampled values are numeric, the statistics are computed with respect to the individual column values.
When any sampled value is non-numeric, the statistics are computed with respect to column value lengths. We postulate that these statistics are useful because they help the model disambiguate between numeric column samples by preserving information about overall trends in the column. However, we focused on simple-to-calculate statistics and did not extensively ablate our choices; in future work we plan to explore this aspect.

Other columns. First, we take as many unique samples as are available from the target column. Then, we fill the remaining context length with an equal number of samples from each other column. We label samples from other columns with an index number in order to identify from which column they originated. We find that doing this improves fine-tuned performance, but has a negative effect on zero-shot performance. We postulate that this may be because the LLM struggles to distinguish inter-column from intra-column values without the presence of learned special characters as provided in <cit.>.

§.§ Prompt Serialization

The prompt serialization stage transforms the context sample S into a prompt format suitable for querying an LLM; this includes handling prompts that exceed the maximum allowable length of the context window and deciding how to reformat the table. fig:enter-label shows examples of prompts for both fine-tuned and zero-shot regimes of ArcheType. We style our fine-tuned prompt after the instruction-following method described in <cit.>. We treat the semantics of the INSTRUCTION field as a hyperparameter, and fix it at training time. The extended context includes the samples, the table name, and computed summary statistics including standard deviation, median and mode. In zero-shot, we again treat INSTRUCTION as a hyperparameter, sweeping over a space of possible semantic structures. INPUT is handled identically to the fine-tuned setting. In zero-shot, the prompt also includes OPTIONS, or allowable column names, from which the model is expected to choose. The suffix ANSWER: cues the zero-shot LLM to fill in the desired label, in this case, "number". The heuristic optimization of this process is sometimes referred to as prompt engineering, and is treated as an important contribution by existing zero-shot CTA methods <cit.>. In fine-tuned ArcheType, we fix a single prompt serialization strategy, as the prompt is learned during the fine-tuning process and has little impact on the model output, as long as it is consistent. In zero-shot ArcheType, unlike previous methods, we treat the choice of prompt as a hyperparameter. We provide experimental support for this idea in sec:ablations-prompt-ser.

Serialization strategies. We explore six distinct serialization strategies, illustrated in fig:prompt-types. The strategies labeled "C" and "K" were proposed in <cit.> and <cit.>, respectively. "O" is the serialization we utilize in our fine-tuning method, and is written in a technical and formal tone. Unlike our other prompting strategies, we word "O" differently for each model architecture, optimizing it on a small subset of SOTAB.
The remaining serialization strategies are designed to test the effect of varying prompt length, position, and tone; "N" adopts a casual, conversational tone and uses simple language, "I" inverts the position of prompt and context, compared to the other strategies, and "S" is designed to be as short as possible while remaining clear.

Prompt Serialization in ArcheType Zero Shot (ZS). We have evaluated ArcheType ZS using all six prompts in fig:prompt-types; we report performance on the best-performing configuration. Note that we include the label set L in the prompt. In order to simplify the label space further for open-source models, we attempt to detect using simple type testing whether all elements of the context are numeric; if so, we limit L to labels which are numeric (selecting which labels are exclusively numeric is a one-time optimization per dataset; on SOTAB-27, it required about five minutes).

Prompt Serialization in ArcheType Fine Tuned (FT). We follow the Alpaca instruction format described in <cit.> and omit the label set L to make more efficient use of the context window.

Column-at-once Serialization. Both <cit.> and <cit.> use table-at-once serialization; the entire table is presented to the LLM at inference time, and all columns in that table are classified together. ArcheType uses column-at-once serialization; only a single column to be classified is passed to the LLM. <cit.> provides ablation studies indicating that table-at-once outperforms column-at-once on their test set, a very small subset of SOTAB. Table-at-once serialization, however, is impractical to implement on open-source models with small context windows, and inefficient in that it requires classification of all columns, whether or not the classes for all columns are required.

Handling Overflow. Using the length of each prompt, we produce a conservative estimate of whether the tokenized prompt might overflow the context window. If so, we tokenize the prompt, truncate it, add the classnames and response cue to the end of the prompt, and pass it through. Examples of serialized prompts can be found in fig:enter-label. Additional examples can be found in <cit.>.

§.§ Model Querying

The third stage of ArcheType involves passing the serialized prompt as input to the LLM, a process which we refer to as model querying. The key variable here is, naturally, the choice of model and, in the case of fine-tuned CTA, the approach to training said model.

Fine-Tuned Models. In the fine-tuning regime, our model is a LLAMA-7B, the smallest in a batch of LLMs from <cit.>. All models in the LLAMA family were pre-trained on the standard unsupervised language modeling task of next-token prediction, but had no instruction tuning as part of pre-training. In order to improve performance on instruction-following tasks, we apply the Alpaca method of <cit.> prior to applying ArcheType. See alg:fine-tune-llm for an overview of the fine-tuning procedure utilized to train our model, and fig:enter-label for an example of a single data point in the training set. Our results for fine-tuning are reported using a fine-tuned LLAMA-7B trained on the SOTAB-full training dataset, using our context sampling and label remapping algorithms. Following <cit.>, we fine-tune LLAMA-7B for 3 epochs, with a learning rate of 2e-5. Fine-tuning took 8-12 hours on 4x A100-80GB GPUs.

Zero-Shot Models. In the zero-shot regime, we consider the recent open-source OPT-IML and LLAMA-2 models from <cit.> as well as FLAN models introduced in <cit.>.
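For such open-source checkpoints, the querying stage amounts to running the serialized prompt through the model's generation routine and decoding a short answer. The sketch below illustrates this with a FLAN-T5 checkpoint; the checkpoint name, decoding settings, and prompt text are illustrative placeholders, not the exact configuration used in our experiments.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative checkpoint; any instruction-tuned seq2seq model works similarly.
name = "google/flan-t5-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

prompt = (
    "Select the category of the column from the following options: "
    "addressLocality, postalCode, telephone, priceRange.\n"
    "INPUT: 'Brooklyn' || 'Queens' || 'Staten Island'\n"
    "ANSWER:"
)
inputs = tok(prompt, return_tensors="pt", truncation=True)
out = model.generate(**inputs, max_new_tokens=10, do_sample=False)
raw_label = tok.decode(out[0], skip_special_tokens=True).strip()
print(raw_label)  # raw answer; may still require label remapping (next stage)
```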
We also present results on the closed-source, private GPT-3.5-turbo <cit.>. As zero-shot ArcheType is model-agnostic, we report results from the three best-performing architectures in our experiments (see tab:main-results-zs).

§.§ Label Remapping

The fourth stage of ArcheType is label remapping; mapping the generative output of the LLM to the space of allowed labels. A key drawback of using standard LLMs for classification tasks (based on instruction tuning alone) is that their outputs are not guaranteed to belong only to the provided label set. In our experiments, we found small decoder-only LLMs, such as LLAMA-7B, were particularly susceptible to this behavior. Previous works such as <cit.> have proposed simply discarding all answers which are not an exact match for a label in the set, and measuring performance with respect to exact matches only. Another naïve solution is to simply map all non-matching answers to a default class. However, we find that such approaches tend to underrate what the model actually provides, particularly in the CTA context. Often, the LLM's `best guess' can be reasonably remapped to an answer in the provided label set. Formally, we frame label remapping as a function REMAP : Σ_* → L. In other words, the REMAP function is responsible for mapping an arbitrary output string σ (the raw output of the LLM) to some specific label σ_L ∈ L in the label set. We explore multiple approaches, described below, and find that the optimal approach varies depending on the LLM and whether we are in a fine-tuned or zero-shot domain.

Remap-contains employs the simplest strategy, checking for string containment: ∀ L_j ∈ L, (σ ⊆ L_j ∨ L_j ⊆ σ) → (σ_L := L_j). In the case of multiple matches, we accept the longest match. CONTAINS is computationally efficient but has a high rate of failure; it can therefore be used in conjunction with other label remapping strategies.

Remap-resample (alg:remap_resample) utilizes the probabilistic nature of LLM outputs. We fix a hyperparameter k setting both how many times we attempt the problem and how we adjust the hyperparameters on each subsequent call. The parameter k can be utilized as either an additive or a multiplicative factor; we find that additive k is suitable for adjusting top_p and repetition_penalty, while a multiplicative factor works well for temperature. For more details on these hyperparameters, please refer to <cit.>.

Remap-similarity (alg:remap-similarity) employs a similarity-search strategy. Using an encoder-only transformer model, the input σ is converted to a vector embedding v_σ, as are all the strings in L. ∀ L_j ∈ L, we then compute the vector cosine similarity (v_σ, v_L_j). The label with the highest similarity becomes the model's predicted class. For our experiments, we used the S3Bert model introduced in <cit.>. This method has the advantage of always returning a solution. However, the limitation is that the solution may not always be the desired one; moreover, introducing an additional model adds to the overall computational complexity.

Rule-based Label Remapping. We find that in many CTA datasets, certain types are straightforward to detect or correct using simple algorithmic approaches. Therefore, in order to provide a more realistic picture of how our method would perform in a real-world setting, we supplement both our baselines and ArcheType with rule-based label remapping functions, applied both prior to and after model querying.
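Before describing the rule-based functions further, the string-matching core of CONTAINS, and its combination with resampling, can be sketched as follows; this is a simplified illustration in which `query_fn` stands in for a re-query of the model with perturbed decoding hyperparameters, and it is not the exact ArcheType implementation.

```python
def remap_contains(raw, labels):
    """CONTAINS: accept a label if it contains, or is contained in, the raw
    output; on multiple matches keep the longest. Returns None on failure."""
    raw_l = raw.lower().strip()
    matches = [lab for lab in labels
               if lab.lower() in raw_l or raw_l in lab.lower()]
    return max(matches, key=len) if matches else None

def remap_with_resample(raw, labels, query_fn, k=3):
    """CONTAINS+RESAMPLE sketch: if CONTAINS fails, re-query up to k times
    with adjusted decoding settings (query_fn(attempt) returns a new raw
    string). query_fn is a placeholder for the model-querying stage."""
    label, attempt = remap_contains(raw, labels), 0
    while label is None and attempt < k:
        attempt += 1
        label = remap_contains(query_fn(attempt), labels)
    return label  # may still be None; a default or similarity fallback applies

labels = ["telephone", "postalCode", "addressLocality"]
print(remap_contains("The column contains postal codes (postalCode).", labels))
```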
These rule-based functions do not always lead to performance improvements, but they can save considerable time and some space in the context window; therefore, we predict they will be a valuable component of deployed CTA systems, and devote some time to studying their effects. To preserve the zero-shot nature of the problem, we limited ourselves to two hours per dataset for devising these functions. As this is a one-time cost per label set, we consider this a reasonable time budget. In tab:rule-effects, we list the number of labels for which rules led to performance improvements, and the average amount of the improvement across all models and methods. The rules lead to a moderate improvement for the different benchmarks. In app:ex-rbrm, by way of reference, we provide an example of one of the rules we applied to the SOTAB dataset, as well as an accounting of how many labels in each set were affected, and the scale of the gains.

ArcheType+. To separate the effects of rule-based remapping from other elements of the ArcheType method, we report F1 scores with and without rule-based remapping in tab:main-results-ft and tab:main-results-zs. In both tables, results with rules applied are denoted with a "+" symbol.

§ NEW ZERO-SHOT BENCHMARKS

Existing CTA benchmarks <cit.> are useful, but largely limited to a fixed set of labels, chosen from well-known, pre-existing ontologies and taxonomies such as DBPedia. In order to probe the breadth of LLM subject knowledge and assess the effectiveness of LLM-CTA methods over rare classes with different characteristics, we introduce three new zero-shot column type annotation benchmarks: D4Tables (D4-20), derived from the D4 dataset <cit.> and <cit.>, AmstrTables (Amstr-56), derived from the American Stories dataset <cit.>, and PubchemTables (Pubchem-20), derived from the Pubchem dataset <cit.>. In fig:dataset-samples, we provide random samples from each of the zero-shot benchmarks in our evaluation suite. Each of our benchmarks is constructed using the same general approach: we reprocess the dataset so that classes of data can be interpreted as columns, fix a random seed, and sample from the data pool to produce synthetic columns of a wide range of lengths. We treat all columns as independent. However, there are important differences in the content of the columns and the formulation of the classes which allow for a diverse evaluation suite. We follow the approach used in <cit.> and attempt to replicate, as closely as possible, the distributions encountered in real-world data. This results in some column types that are extremely low-variance (such as one type in D4Tables with only 5 unique values). In other types, the set of potential unique entries in one type is entirely subsumed by another type, as also occurs in D4Tables. Others can be addressed model-free with regex pattern matching (such as one type in Pubchem). As noted in sec:label-remap, when such solutions are possible, we utilize them in both our baseline approaches and the ArcheType method itself. D4, Amstr and Pubchem are generated from existing data distributions; it is therefore possible to produce an arbitrary number of tables using them. Balancing time constraints with the desire to test a significant sample size, we heuristically select a sample size of 2000 columns, and apply this consistently to each benchmark. The complete class names for each dataset can be found in <cit.>.

D4Tables.
<cit.> clustered data from NYC Open Data in an unsupervised manner, and the most coherent clusters (representing semantic types) were assigned labels; in total, 20 clusters were labeled. For more information on the clustering method and the complete label list, please refer to our repository. For our paper, we convert the clusters to columns and sample accordingly. The classes in D4 are representative of open and public data sources, including 2 classes which correspond to city agencies, 4 classes which relate to public schools, and 5 classes which correspond to neighborhoods, streets or regions located in specific New York City boroughs. This dataset aims to assess the model's understanding of regional information and fine-grained semantic types relevant to governments and NGOs.

AmstrTables. The American Stories dataset consists of 20 million OCR scans from the Library of Congress's public domain Chronicling America collection. Each scan contains an article written between 1774 and 1963. We adapt this dataset for CTA by (i) dividing the articles in the dataset according to the state in which they were originally published, and (ii) creating additional column types for author bylines, newspaper names, and subheadings. Because this dataset was published in 2023, it is unlikely that any of the models evaluated in this study have been trained on this data before, reducing concerns of potential data contamination <cit.>. Another advantage of this particular dataset is that for the majority of column types, individual row entries are quite long, corresponding to entire newspaper articles. This phenomenon is commonplace in real-world data, but rare among academic CTA benchmarks. The classes in AmstrTables are particularly relevant for work in the domains of journalism and history.

PubchemTables. Pubchem is the world's largest collection of freely accessible chemical information. Chemicals are identified according to their name, molecular formula, structure, biological activities, safety and toxicity information, and more. The database also contains extensive information on patents related to chemistry, such as patent abstracts and author names, as well as the names of scientific journals. We convert the RDF triple format provided by Pubchem to a columnar format suitable for CTA, and sample from the resulting distributions to produce our target columns. The types in PubchemTables require specialist domain knowledge of chemistry to classify correctly.

SOTAB-27. SOTAB is an unbalanced, 91-class classification problem where the task is to match each unlabeled column with its ground-truth label. We created a zero-shot, simplified 27-class version of the benchmark to reduce the semantic overlap among SOTAB labels. The tables in this dataset are identical to the original SOTAB benchmark; however, we remap the 91 labels in the full SOTAB benchmark to a smaller set of 27 labels. The exact details of the class remapping can be found in our GitHub repository <cit.>.

§ EXPERIMENTS

§.§ Experimental Setup

In the following section, we discuss our methodology for our fine-tuned and zero-shot experiments in detail, outline and discuss our main findings, and present ablations on our key model components.

Fine-tuned Baselines. For our fine-tuned experiments, we compare our ArcheType LLAMA-7B (sec:modelquery) to DoDuo <cit.>, the state-of-the-art model for column type annotation, as well as TURL <cit.>.
We report DoDuo and TURL results following the approach described in <cit.>, which passes the entire table to the model at inference time; we limit our own method to 15 samples per table.

Zero-shot Baselines. To the best of our knowledge, there exist no open-source CTA models that can operate in a zero-shot manner; therefore, we design strong baselines derived from zero-shot CTA methods which have been introduced specifically for use with GPT: C-Baseline, based on the method in <cit.>, utilizes similarity label remapping, simple random sampling, and our C-prompt. K-Baseline, derived from <cit.>, utilizes our K-prompt, no-op label remapping and first-k-columns sampling. We omit the method described in <cit.>, which requires a custom hash table for each problem, as this invalidates the zero-shot nature of the problem we consider here. For all methods, we fix 5 samples per column and feed prompts to the model in a column-at-once manner. We always include the class names in the prompt.

Benchmarks. A variety of realistic and challenging CTA benchmarks have been developed in the last few years. Prominent among these are GitTables from <cit.>, WikiTables as modified in <cit.>, and WebTables from <cit.>. However, these are usually pre-processed in an ad-hoc fashion and compared against some, but not all, existing methods, making it difficult to truly measure progress in the field. For this reason, we focus on the recent SOTAB benchmark <cit.>. SOTAB was independently tested against both state-of-the-art CTA approaches, TURL and DoDuo, making it an ideal testing ground for new CTA methods. Furthermore, it is, to the best of our knowledge, the most challenging CTA benchmark in the literature; the strongest method to date, DoDuo, achieves a Micro-F1 score of 84.8 on SOTAB-91. For the zero-shot regime, we use the benchmarks introduced in sec:new-benchmarks.

§.§ ArcheType Effectiveness

In order to evaluate the robustness of the methods to variations in architecture, we evaluate each method using three different architectures: the closed-source GPT-3.5-Turbo model from OpenAI (October 2023 version), and the open-source T5 and UL2 encoder/decoder LLMs from Google <cit.>. Following <cit.>, we report performance using the weighted micro-F1 score, which is the weighted average of F1 scores based on the sample size of each class. We provide 95% confidence intervals for all results using the normal approximation interval method. tab:main-results-ft summarizes our key results in fine-tuned CTA, and tab:main-results-zs shows our zero-shot findings. Our key findings are:
* In the fine-tuned regime, our ArcheType-LLAMA model is competitive with DoDuo, despite training on less than 1% of the amount of data.
* In the zero-shot regime, ArcheType outperforms or matches baselines on all dataset/architecture pairings we evaluate.
These results underscore the effectiveness of ArcheType and serve as evidence that, using LLMs, it is possible to build models for CTA that are not just robust to distribution shift, but that can handle open-label sets defined at inference time, including for rare types.

§.§ Observations

A detailed analysis of our results has both confirmed our hypotheses regarding LLMs and uncovered insights into some of their limitations. We summarize these below.

LLMs contain sufficient world knowledge to perform zero-shot CTA on domain-specific classes.
We find that LLM performance is consistently strong across datasets and across benchmarks, emphasizing the generality of LLM-CTA, compared to fine-tuned methods such as DoDuo. In PubchemTables, we find that models are consistently able to disambiguate challenging classes such as disease, chemical, taxonomy, patent, SMILES (simplified molecular input line entry system), and molecular formula. On D4Tables, they are able to disambiguate the names of NYC public schools and NYC governmental agencies, as well as identify locations. With ϕ = 5, we find that ArcheType-T5 and UL2 are able to correctly identify whether the addresses are in Queens, the Bronx, Brooklyn or Manhattan more than 50% of the time, on average. ArcheType-GPT is even more impressive; it is able to accurately classify regions in all five boroughs more than 87% of the time, on average. Class-specific accuracies for our zero-shot models can be found in app:per-class-acc.

Model error tends to be patterned and predictable when the prompt space is fixed. When zero-shot CTA fails, it tends to do so in ways that are patterned and predictable, making it easier to correct errors. The most common failure mode is class bias in favor of certain dataset classes over others. For any given prompt/model/dataset triple, this results in certain columns with near-perfect accuracy and others with near-zero accuracy, with the confusion matrix heavily concentrated in a few classes. We provide examples of this phenomenon in app:per-class-acc.

Simple factors can be used to estimate zero-shot CTA performance. Zero-shot performance is stronger on datasets such as PubchemTables and D4Tables; we attribute this to smaller label spaces, smaller individual sample sizes, as well as a high degree of intra-column similarity and a low degree of inter-column similarity. Amstr, which has more than twice as many labels as the next-largest dataset and a high degree of inter-column similarity (because the vast majority of the labels in the dataset correspond to newspaper articles drawn from the same general distribution), is the most challenging dataset in our suite.

ArcheType using open-source models is highly competitive with closed-source models. ArcheType CTA works well with a range of LLMs, small and large, open-source and closed-source. Although GPT tends to have the strongest performance, the difference is not very large, and on PubChem and Amstr, GPT underperforms compared to the open-source models. Depending on the specifics of the problem, any of the three architectures we tested may achieve the best performance. This finding argues for the importance of methods which allow for flexibility in the model querying stage.

§.§ Ablation Studies

§.§.§ Ablations on Context Sampling

In fig:sampling_strategy_ablations, we ablate our choice of strategy using the SOTAB dataset, and find that ArcheType sampling consistently outperforms baseline methods.

Sample size. The sample size 0 < ϕ ≤ c is a hyperparameter fixed at training time (in the case of fine-tuned) or inference time (in the case of zero-shot). Ablations on particular values of ϕ can be found in fig:algo_ablations_zs. In general, we observe that larger values of ϕ tend to result in better model performance, with the trade-off of slower inference and a larger number of truncated prompts.

Feature selection. In fig:context_type_ablations, we ablate our feature selection method, and find that a major gap in performance exists between fine-tuned ArcheType and zero-shot ArcheType.
With each additional feature we add to the fine-tuned context, the model performance improves. Surprisingly, in the zero-shot domain, we see a reverse effect. We attempt to offset this in the prompt serialization stage by clearly identifying the different types of incoming context (summary statistics are already labeled according to how they were derived):

TABLE NAME: " sourced from the table named " + <TABLE_NAME>
OTHER COLUMNS: "For additional context, here are some entries from other columns in the table:" + <OTHER_COLUMNS>

However, we find that even with these modifications, zero-shot CTA fails to improve with added context. That said, we acknowledge that there are many possible ways to serialize novel features, and highlight this as an important area for future research into zero-shot column type annotation.

§.§.§ Ablations on Prompt Serialization

We observe that performance under a given prompt serialization is quite sensitive to small changes in the prompt; furthermore, the effects of these small changes differ depending on the LLM used. We explore six different prompts, labeled C(horus-style), K(orini-style), I(nverted), S(hort), N(oisy), B(aseline) (sec:prompt-ser). The first two prompt styles are adapted from <cit.>, respectively. We provide examples of each prompt in fig:prompt-types. We test these prompts on SOTAB-27, holding other factors constant, across three architectures. As tab:prompt-ablation shows: (1) All models are very sensitive to the choice of prompt; and (2) No prompt is a top-two performer on all three models. In app:cn-sem-pos, we also experimented with changing (3) the label associated with a class and (4) the position of a label in the string, and observed unpredictable effects on performance. These findings motivated our decision to treat the specifics of prompting as a hyperparameter to be optimized, rather than as an integral contribution to the method.

Prompt serialization as a hyperparameter. Our method treats prompt serialization and classname selection as tunable hyperparameters to be optimized and reported alongside experimental results. With the understanding that any reasonable prompt is as likely to succeed as any other, for each model-dataset pair, we conduct a grid search over our six prompt styles, each of which is stylistically distinct but similar in content and meaning. All prompts follow general best practices as described in <cit.>, using capital letters, colons and line breaks to delineate instructions, label sets and context, but otherwise vary widely.

§.§.§ Ablations on Model Querying

The space of both open and closed LLMs has exploded of late, and the performance of these models on benchmarks can vary considerably. Rather than attempt an exhaustive comparison which would quickly grow out-of-date, we select strong representative models to stand for different categories of LLM which are frequently encountered in the literature. In tab:arch-comparison, we contrast two common variations in LLM architecture, encoder-decoder and decoder-only. We also compare across parameter counts. We find that parameter count is not predictive of CTA performance; a 13B ArcheType-LLAMA2 model from <cit.>, for instance, outperforms a 30B Opt-IML model from <cit.>.
By contrast, encoder-decoder architectures outperform decoder-only architectures. One possible explanation is put forward by <cit.>, who argue that encoder-decoder models are more suited for classification tasks because, as the generated sequence grows in length, less and less attention is focused on the source sequence.

§.§.§ Ablations on Label Remapping

The choice of label remapping algorithm can substantially impact model performance; however, the number of remapped labels depends considerably on the selections made in the other three elements of the LLM-CTA method, as well as the dataset itself. In app:llm-invalid, we provide some examples of how the need for label remapping varies, depending on other contributing factors. In the event that label remapping is required, as fig:algo_ablations_zs shows, CONTAINS+RESAMPLE (Cont+Res) outperforms the other remapping strategies across all sample sizes.

§.§ Limitations

Like <cit.> and <cit.>, we find that there is good reason to be optimistic about the potential for large language models to dramatically impact CTA and downstream data integration and discovery applications. Despite their strong performance, we note some limitations.

Context window lengths. The ArcheType-LLAMA method requires only 15 samples per column to reach parity with DoDuo, but it is difficult to exceed 15 samples without truncating individual examples. For that same reason, it is difficult to present large numbers of classes to zero-shot models. This limitation may be short-lived, as context windows are already reaching 100k tokens in closed models <cit.>.

High parameter counts. Despite generalizing very well to distribution shifts, ArcheType models have very high parameter counts when compared to previous deep learning solutions (see tab:arch-comparison). We find that increased parameter counts are likely necessary in order for the model to contain sufficient world knowledge to be applicable for CTA "in-the-wild"; however, this needs to be traded off against higher latency, energy, and carbon costs.

Context sampling. As noted in sec:ablations-context-sampling, zero-shot ArcheType models struggle when new features are added during context sampling. We consider this an important area of future work.

Numeric attributes. Although we benchmark ArcheType on all data types, we see the system as being primarily useful for semantic types (categorical or textual columns). Simpler approaches are likely to work just as well (or perhaps even better) for purely numeric or alphanumeric columns.

§ CONCLUSIONS AND FUTURE WORK

We introduced ArcheType, a novel CTA approach centered around LLMs. We have shown that with effective context sampling and label remapping, (a) LLMs can be made highly competitive with SOTA CTA models in the fine-tuned setting, and (b) LLMs are both easier to apply and more accurate than existing deep-learning based solutions in the zero-shot domain. Using newly curated benchmarks (sec:new-benchmarks), we have shown that LLM-based CTA can generalize to considerable distribution shifts, making them ideally suited for real-world tasks. We anticipate that methods building upon ArcheType can be useful in a variety of downstream dataset creation, curation, and processing tasks.
In the future, we will explore the possibility of extending our methods to novel data tasks, such as semantic joinability, column property annotation, and dataset synthesis.

§ HOW OFTEN DO LLMS GENERATE INVALID LABELS?

As discussed in our main paper, generative models such as LLMs sometimes produce labels which are not in the target set. But how often does this occur? We find that the frequency of this event varies widely, depending on the model and the dataset. By way of illustrating the high degree of variance, in tab:label-remapping-freq, we report the number of samples remapped per dataset on five randomly selected experiments from our ablation studies; these random samples are fixed only with respect to the choice of pretraining dataset and the fact that they are all zero-shot. Therefore, they vary across the space of all architectures, prompts, sample sizes and remapping strategies which appear elsewhere in the paper. Modifying these conditions, we observe a high degree of variance between runs.

How do remapped labels affect model performance? When we compare average model performance on a zero-shot benchmark to the average number of remapped labels in tab:label-remapping-freq, we see that they are inversely correlated: the more labels remapped, the less accurate the model becomes, on average. This provides some useful intuition for why label remapping can have such a large effect on the overall performance of a method; when the LLM-CTA model encounters a challenging sample, it becomes more likely that (1) the LLM will generate an out-of-distribution answer, or (2) the LLM will generate an incorrect in-distribution answer. Any label remapping method will, by definition, eliminate all entries in category (1). As these entries would be incorrect without remapping, label remapping can only improve model performance. Furthermore, the better the label remapping technique, the more it will improve model performance.

§ EXAMPLE OF A RULE-BASED REMAPPING CHANGE

As a reference for future researchers, we provide an example of a specific rule which we found led to performance gains. For a complete list, we refer the reader to <cit.>. We observe that the SOTAB-91 label set contains a URL label, and that the columns labeled as such contain elements that are valid URL strings. The SOTAB-91 dataset also contains other labels of Schema.org enumerationtype whose elements are a small number of specific Schema.org URLs. The columns with the latter labels are degenerate, because the URL label is an equally valid and semantically more plausible label than the enumerationtype label. We therefore apply a simple lookup mapping from the enumerationtype label to the corresponding set of Schema.org URLs. This rule leads to per-class accuracy improvements on an ArcheType-LLAMA-7B model.

§ ABLATION ON CLASSNAME SEMANTICS AND POSITION

Semantic changes to label names can have unpredictable effects on performance. In tab:cn-ablation, we introduce two label sets, termed A and B, for the PubChem-20 dataset. Label set B contains 6 semantically changed labels, compared to label set A. When we run the experiment with label set B and compare the results to label set A, we find substantial accuracy changes to 3 classes; however, only 1 of these classes was among the 6 labels we changed. From this experiment, we conclude the following: (1) contemporary LLMs are sensitive to changes in the label space.
(2) This sensitivity is, at times, the functional equivalent of label noise, in that it produces seemingly random and unpredictable changes in test set accuracy. (3) The changes in performance are not confined to the class names modified, but are distributed across the entire class space.

Changes to label ordering can have unpredictable effects on performance. In tab:cn-ablation, for label set A, we also experiment with randomly shuffling the order in which classnames are presented to the model; we call this experiment (A, S). For all other experiments reported in the paper, we sort classnames in ascending alphabetical order during serialization. We observe substantial changes in accuracy to 7 of 20 classes as a result of this transformation. From this experiment, we conclude the following: (1) contemporary LLMs are sensitive to changes in label position; (2) this sensitivity is the functional equivalent of label noise, in that it produces seemingly random and unpredictable changes in test set accuracy.

§ PER-CLASS ACCURACIES ON ZERO-SHOT DATASETS

As a convenient reference, in tab:sotab-class, tab:d4-class and tab:pubchem-class we include per-class accuracies for three of the four zero-shot datasets used in this paper. For details on Amstr, see <cit.>.
http://arxiv.org/abs/2310.18208v2
{ "authors": [ "Benjamin Feuer", "Yurong Liu", "Chinmay Hegde", "Juliana Freire" ], "categories": [ "cs.CL", "cs.LG" ], "primary_category": "cs.CL", "published": "20231027153122", "title": "ArcheType: A Novel Framework for Open-Source Column Type Annotation using Large Language Models" }
ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers

Zhewei Yao, Reza Yazdani Aminabadi, Stephen Youn, Xiaoxia Wu, Elton Zheng, Yuxiong He
Microsoft

Quantization techniques are pivotal in reducing the memory and computational demands of deep neural network inference. Existing solutions, such as ZeroQuant, offer dynamic quantization for models like BERT and GPT but overlook crucial memory-bounded operators and the complexities of per-token quantization. Addressing these gaps, we present ZeroQuant-HERO, a novel, fully hardware-enhanced robust optimized post-training W8A8 quantization framework. This framework uniquely integrates both memory-bandwidth-bound and compute-intensive operators, aiming for optimal hardware performance. Additionally, it offers flexibility by allowing specific INT8 modules to switch to FP16/BF16 mode, enhancing accuracy.

§ INTRODUCTION

Quantization is one of the most commonly used techniques to reduce the memory footprint and compute cost for deep neural network inference. Various quantization methods <cit.> have been proposed to speed up inference and/or improve throughput. There are mainly two approaches to realize quantization for a trained model:
* Quantization-aware training (QAT). QAT <cit.> generally leads to a high-quality model but is associated with a high training/finetuning cost.
* Post-training quantization (PTQ). PTQ <cit.> minimizes the finetuning cost of QAT but lowers the model quality as compared to QAT.
In practice, particularly for fast-evolving domains, e.g., Ads and recommendation systems, PTQ is preferred due to its lower cost and fast adoption speed. In order to alleviate the accuracy issue of PTQ, various methods have been proposed. However, due to the interdisciplinary gap between machine-learning algorithms and hardware (in this work, we mainly target Nvidia GPUs, e.g., A100), a hardware-aware method is still largely missing in this field, particularly for Transformer-based models. For instance, ZeroQuant <cit.> proposes dynamic per-token activation quantization and per-column weight quantization for BERT <cit.> and GPT <cit.> models to achieve good accuracy. However, it does not consider (1) the non-trivial memory-bounded operators, e.g., LayerNorm and attention, and leaves these parts in FP16/BF16, and (2) the per-token quantization cost of invoking an additional kernel when there is no fusion opportunity, e.g., the INT8 GeMM operator of the attention output linear layer. To resolve those limitations, we introduce ZeroQuant-HERO, a fully hardware-aware and practical post-training W8A8 quantization framework. Our contributions are summarized below.
* ZeroQuant-HERO takes both memory-bandwidth-bound and compute-intensive operators into account in its design. As such, the framework can (potentially) achieve the best hardware performance.
* To further improve the usability of ZeroQuant-HERO, different quantization levels, i.e., the ratio of INT8 operators vs.
FP16/BF16 counterparts, can be selected to achieve the desired accuracy and latency trade-off.

§ METHODOLOGY

§.§ Quantization Schemes

Throughout the work, we use symmetric uniform INT8 quantization unless a specific comment is applied. However, our method also works for other 8-bit precision formats, like FP8. Particularly, we use the following column-major weight matrix format to perform GeMM, Y = XW, where X ∈ ℝ^n×d is the activation, and W ∈ ℝ^d×m is the weight matrix. For weight quantization, we perform column-wise quantization <cit.>, i.e., each column of the weight has its own scaling factor, W = W_int8S_w, where W is the reconstructed weight matrix, W_int8 is the INT8 counterpart, and S_w ∈ ℝ^1×m is the scaling vector.[Note that here we use PyTorch/Numpy friendly calculation, i.e., W_int8S_w=W_int8Diag(S_w), where Diag is used to make the vector the diagonal of the matrix.] For activation quantization, we apply three different quantization schemes, and we explain their utilization in the next section.

Token-wise quantization (TWQ). The first quantization scheme we use for activation quantization is TWQ <cit.>, i.e., X = S_xX_int8, where X is the reconstructed activation, X_int8 is the INT8 counterpart, and S_x ∈ ℝ^n×1 is the scaling vector. This approach requires the scaling vector S_x to be calculated on-the-fly, which makes it more suitable to be fused with bandwidth-bounded operators, like Layer Normalization (LN). In fact, quantization is done at zero memory-overhead cost, using extra register-level operations to compute min and max, to reduce the precision of the LN output which is going to be used in the following GeMM operation. On the other hand, this approach of scaling hurts Tensor-core efficiency if fused with compute-bound operations such as GeMMs, due to increasing the register pressure and adding more compute per Matrix-Multiply-Accumulate (MMA) operation.

Feature-wise quantization (FWQ). The second quantization scheme we use is FWQ <cit.>, i.e., X = X_int8S_x, where S_x ∈ ℝ^1×d is the scaling vector. S_x here needs to be calibrated in the pre-processing phase, i.e., by feeding multiple batches of data through the network to get the scaling factor. As it is pre-determined, it can be simply fused with most other operators. Compared to the TWQ quantization scheme, which involves reading a token-width length of data to quantize and can only be fused with certain operations, FWQ scaling can be fused with either memory-bound or compute-bound operations.

Static quantization (SQ). The final approach we use here is SQ <cit.>, i.e., X = X_int8S_x = S_xX_int8, where S_x ∈ ℝ is just a single real value. Similar to FWQ, it also needs to be calibrated in the pre-processing phase.

§.§ Core methodology

We discuss the three main components of ZeroQuant-HERO in this section.

§.§.§ Embedding Quantization

The first main operator of Transformer models is the lookup table, aka embedding. Normally, there are three types of embedding, i.e., token embedding (X_t), position embedding (X_p), and sentence-type embedding (X_s). When the batch size is large enough, the latter two, i.e., X_p and X_s, are relatively small as compared to X_t. When we have all embeddings, a layer norm is applied to get the final result, i.e., X_emb = LN(X_t, X_p, X_s), where X_emb is the output of the layer norm and LN applies the layer norm operator to the sum of all its inputs.
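To fix notation for the schemes defined above, the following plain-numpy sketch shows symmetric per-column weight quantization together with per-token (TWQ) and per-feature (FWQ) activation scaling. It is a reference illustration only: in practice the FWQ/SQ scales are calibrated offline and the scaling is fused into GPU kernels, neither of which is shown here.

```python
import numpy as np

def sym_int8(x, axis):
    """Symmetric INT8 quantization along `axis`: scale = max|x| / 127.
    Returns (int8 tensor, scale). Reference sketch, not a fused kernel."""
    scale = np.maximum(np.max(np.abs(x), axis=axis, keepdims=True), 1e-8) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8)).astype(np.float32)   # activations, n x d
W = rng.standard_normal((8, 6)).astype(np.float32)   # weights, d x m

W_q, S_w = sym_int8(W, axis=0)   # per-column weight scales, shape (1, m)
X_tw, S_t = sym_int8(X, axis=1)  # TWQ: per-token scales, shape (n, 1)
X_fw, S_f = sym_int8(X, axis=0)  # FWQ: per-feature scales, shape (1, d);
                                 # here computed from the batch for illustration,
                                 # whereas the paper calibrates them offline.

# INT8 GeMM with int32 accumulation, then dequantize with the outer scales.
Y_ref = X @ W
Y_twq = (X_tw.astype(np.int32) @ W_q.astype(np.int32)) * S_t * S_w
print(np.max(np.abs(Y_ref - Y_twq)))  # small quantization error
```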
This embedding LN operator is a memory-bandwidth-bound operator with respect to the input X_t and the output X_emb. In order to reduce the memory-bandwidth overhead, we perform TWQ on both X_t and X_emb, i.e., S_embX_emb,int8 = LN^quant(S_x_tX_t,int8, X_p, X_s), where LN^quant is a quantization-aware operator. By utilizing the above embedding format, we roughly reduce the data volume communicated for the following operation by 2x.

§.§.§ Attention Module Quantization

The attention module is illustrated in fig:attn_module. We give a high-level calculation process of the attention module here; please refer to <cit.> for more details:
X_q/k/v = X_inW_q/k/v,
A = X_q X_k^T / √(d),
P = Softmax(A),
X_attn = PX_v,
X_o = X_attnW_O,
X_out = LN(X_in, X_o).
Before diving into more details, we first categorize all activation quantization schemes:
* TWQ is applied for X_in and X_out to preserve high accuracy for the input and output of a transformer layer with the least performance overhead, since the scaling logic can be fused into the LN operations happening beforehand.
* SQ is applied for X_q, X_k, X_v, and P, in order to improve the efficiency of the GeMM operations that involve these tensors. We have this logic added onto the flash-attention kernel implementation, and the dtype for each GeMM can be configured in order to preserve the model accuracy.
* FWQ is applied for X_attn and X_o to reduce the complexity of scaling the activation for the GeMM operation while preserving the accuracy. Compared to TWQ, we are using one scale per output element, so the performance cost of this operation is similar to adding a bias at the linear layer.
* For A, no quantization is applied. This is due to the sensitivity of the attention score values to their precision, which could hurt the model accuracy on downstream tasks.
Before applying weight quantization, we have
X_q/k/v,int8S_q/k/v = S_inX_in,int8W_q/k/v,
A = S_qS_kX_q,int8 X_k,int8^T / √(d),
S_pP_int8 = Softmax^quant(A),
X_attn,int8S_attn = S_pS_vP_int8X_v,int8,
X_o,int8S_o = X_attn,int8S_attnW_O,
S_outX_out,int8 = LN^quant(S_inX_in,int8, X_o,int8S_o).
Here, ·^quant is the quantization-aware operator, and the output of Softmax^quant consists of asymmetric INT8 numbers since there is no negative value in the output of the softmax. Now, let us dive deeper into weight quantization and the GeMM operator. First of all, we can apply the same kernel fusion as <cit.> to fuse the dequantization operator with the INT8 GeMM. To further reduce the quantization overhead, we could fuse the scaling factors of FWQ and SQ into the INT8 GeMM, as these scaling factors are pre-determined without any on-the-fly reduction operator.[Please refer to <https://github.com/openai/triton/blob/main/python/tutorials/03-matrix-multiplication.py> as an example of post-GeMM operator fusion.] More importantly, the FWQ/SQ quantization can be simplified to a simple round-to-integer operator without any division/multiplication, as the scaling factor can be merged into the weight matrix. Taking X_q,int8 as an example, we can define W̃_q = W_q / S_q, W̃_q,int8S_W̃_q = Quant(W̃_q). Here Quant is the quantization converting operator. Afterwards, the post-GeMM quantization operator is simplified as X_q,int8 = Round(GeMM^quant(X_in,int8, W̃_q,int8, S_in, S_W̃_q)), where Round(·) is the round-to-integer operator. Similarly, we do not need to dequantize the calculation of A followed by the division by √(d). We could simplify it with d̃ = S_qS_k/√(d) and A = GeMM^quant(X_q,int8, X_k,int8^T, d̃).
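A small numerical sketch of this scale-folding idea, reusing the numpy conventions of the previous snippet, is given below; the static scale value and tensor shapes are arbitrary illustrative choices, not calibrated values.

```python
import numpy as np

def sym_int8(x, axis=None):
    s = np.maximum(np.max(np.abs(x), axis=axis, keepdims=True), 1e-8) / 127.0
    return np.clip(np.round(x / s), -127, 127).astype(np.int8), s

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 8)).astype(np.float32)
W_q_fp = rng.standard_normal((8, 8)).astype(np.float32)   # query projection

X_i8, S_in = sym_int8(X, axis=1)   # TWQ input, one scale per token
S_q = 0.05                         # static (SQ) output scale, assumed calibrated

# Fold 1/S_q into the weight before quantizing it column-wise.
W_tilde = W_q_fp / S_q
W_i8, S_w = sym_int8(W_tilde, axis=0)

# INT8 GeMM + per-token/per-column rescale; a plain round then yields X_q,int8.
acc = X_i8.astype(np.int32) @ W_i8.astype(np.int32)
Xq_i8 = np.clip(np.round(acc * S_in * S_w), -127, 127).astype(np.int8)

# Reference: quantize X @ W_q directly with the same static scale S_q.
ref = np.clip(np.round((X @ W_q_fp) / S_q), -127, 127).astype(np.int8)
print(np.max(np.abs(Xq_i8.astype(int) - ref.astype(int))))  # ~0-1 LSB
```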
The scaling factors S_attn and S_o can both be merged into W_o via W̃_o = S_attnW_o / S_o, such that the overall kernel implementation can be significantly simplified. Afterwards, the LN^quant operator takes two INT8 inputs and outputs the final INT8 activation for the following MLP module.[We use the ·^quant operator in a unified way even though the inputs (e.g., the number of input variables, the data type) and/or the outputs are not the same, e.g., the LN^quant used in sec:embedding_module and sec:attn_module are two different kernels.]

§.§.§ MLP Module Quantization

A standard MLP module is illustrated in fig:mlp. The mathematical flow of the module is as follows:
X_1 = X_inW_1,
A = GELU(X_1),
X_2 = AW_2,
X_out = LN(X_in, X_2).
As before, we first categorize all activation quantization schemes. For X_in and X_out, TWQ is applied. For A and X_2, FWQ is applied. For X_1, no quantization is used. Before applying weight quantization, we have
X_1 = S_inX_in,int8W_1,
A_int8S_a = GELU^quant(X_1),
X_2,int8S_x_2 = A_int8S_aW_2,
X_out = LN^quant(S_inX_in, X_2S_x_2).
As before, the scaling factors, i.e., S_a and S_x_2, can be merged into W_2 to simplify the calculation: W̃_2 = S_aW_2 / S_x_2.

§.§ Mixed Precision Inference

Combining all techniques in the previous section, we get the final ZeroQuant-HERO design. However, different models and/or tasks have different tolerances to quantization, and they also have different preferences regarding the trade-off between accuracy and system efficiency. In order to meet requirements for various models/tasks, mixed-precision inference is one of the solutions for quantization. Thanks to the modularized design of ZeroQuant-HERO, we can set various quantization levels for our final model. To demonstrate the necessity of mixed-precision inference, we show the accuracy of three quantization levels (tab:mode) in the next section.

§ RESULT

Experiment Settings. We use the “yoshitomo-matsubara/bert-base-uncased-” family models from Huggingface <cit.> to test the accuracy of ZeroQuant-HERO. Particularly, we use 100 batches and batch size 16 to calibrate (i.e., only run the forward pass) all quantization-related values. The sequence length is 128 for all tasks.

Results. The results of the different quantization levels of ZeroQuant-HERO are shown in tab:result. The overall accuracy degrades as we increase the quantization level. However, besides CoLA, which is a highly sensitive task, for all remaining tasks even ZeroQuant-HERO-M3 incurs only a reasonable accuracy drop as compared to the FP16 counterpart.

Discussion. Note that the main focus of this work is not to achieve the best accuracy but to show the hardware-aware and practical INT8 PTQ framework, ZeroQuant-HERO. As such, we did not tune any hyperparameters, including both explicit and implicit hyperparameters. For instance, (1) for explicit hyperparameters, we did not change the calibration iterations. By reducing the batch number from 100 to 5 for CoLA, ZeroQuant-HERO-M3 can gain about 1% as compared to the result reported in tab:result; (2) for implicit hyperparameters, we did not tune the min/max value truncation for quantization. Normally, a carefully tuned quantization threshold can boost the accuracy <cit.>. Two big missing pieces of the current work are the kernel implementation and the end-to-end system performance measurement. We leave them as future work.

§ CONCLUSION

In this study, we explored the intricacies of quantization for optimizing inference of transformer-based models, with a spotlight on Post-training Quantization (PTQ).
Addressing the challenge of aligning algorithms with hardware, we introduced ZeroQuant-HERO, a novel hardware-enhanced post-training W8A8 quantization framework. Our experiments, based on the Huggingface model family, demonstrated the efficacy of ZeroQuant-HERO, highlighting its potential even with increased quantization levels. Areas like kernel implementation and end-to-end system performance measurement remain unexplored, paving the way for future research.
http://arxiv.org/abs/2310.17723v1
{ "authors": [ "Zhewei Yao", "Reza Yazdani Aminabadi", "Stephen Youn", "Xiaoxia Wu", "Elton Zheng", "Yuxiong He" ], "categories": [ "cs.LG", "cs.CL" ], "primary_category": "cs.LG", "published": "20231026183441", "title": "ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers" }
Department of Physics, Stockholm University, AlbaNova University Center, 106 91 Stockholm, Sweden Department of Physics, Stockholm University, AlbaNova University Center, 106 91 Stockholm, Sweden

In this paper, we present a concrete example that highlights how predictions in non-Hermitian quantum mechanics can become inaccurate due to the absence of environment-induced fluctuations in the model. Specifically, we investigate the non-Hermitian skin effect and sensing in the Hatano-Nelson model, contrasting it with a more precise Lindblad description. Our analysis reveals that these phenomena can undergo breakdown when environmental fluctuations come to the forefront, resulting in a non-equilibrium phase transition from a localized skin phase to a delocalized phase. Beyond this specific case study, we engage in a broader discussion regarding the interpretations and implications of non-Hermitian quantum mechanics. This examination serves to broaden our understanding of these phenomena and their potential consequences.

45.50.Pq, 03.65.Vf, 31.50.Gh

Exploring the impact of fluctuation-induced criticality on the non-Hermitian skin effect and quantum sensors
Jonas Larson
January 14, 2024

§ INTRODUCTION

In recent years, the field of non-Hermitian (NH) quantum mechanics (QM) has experienced a remarkable resurgence <cit.>. This renaissance can be traced back to the intriguing discovery that 𝒫𝒯-symmetric Hamiltonians, not necessarily Hermitian, can yield real spectra <cit.>. A pivotal moment in this revival occurred with the introduction of biorthogonal QM <cit.>, which ignited debates about the fundamental nature of QM. It challenged the long-held notion that observables must be represented solely by Hermitian operators <cit.>. The focus of NH QM has evolved to explore novel phenomena that emerge when we relax the constraints of Hermiticity and unitarity. One of the most extensively studied phenomena is the NH skin effect <cit.>, which renders systems extremely sensitive to non-local perturbations <cit.>. For certain NH local Hamiltonians with open boundary conditions, all left/right eigenvectors |ϕ_n^L,R⟩ localize to one of the edges, offering intriguing possibilities for the detection of weak signals <cit.>. NH QM often serves as an effective description of open quantum systems, typically arising from the interaction with an external environment. However, this approach raises questions about the treatment of fluctuations and the potential violation of well-established quantum theorems <cit.>. This paper adopts a different perspective by employing the Lindblad master equation (LME) as a foundational framework to analyze quantum systems exposed to losses. Unlike NH QM, we do not neglect fluctuations, thus avoiding concerns related to quantum jumps. We also explore the implications and interpretations of NH theories in greater detail. Our study focuses on a specific example, where fluctuations qualitatively alter the physics of the system. We investigate an LME that reduces to the Hatano-Nelson (HN) model <cit.> in the absence of quantum jumps, revealing a breakdown of the NH skin effect in favor of a delocalized phase.
We discuss how such non-equilibrium criticality relates to earlier models in the context of optical bistability <cit.>. In conclusion, we examine the effects of fluctuations on NH QM, offering a perspective that complements existing research <cit.>, especially by identifying a phenomenon of fluctuation-induced criticality which qualitatively alters the physical properties. We aim to provide a more detailed understanding of the role of fluctuations in NH QM. This will help shed light on the applicability of the theory in the quantum regime.

The paper is structured as follows: In the next section, we provide an in-depth discussion of non-unitary time evolution, with a particular focus on its description within the LME. We emphasize the importance of CPTP (Completely Positive, Trace-Preserving) maps and use them to argue why eigenvectors of a Liouvillian should not be considered as physical states. In Sec. <ref>, we introduce the model system, the HN model in Subsec. <ref>, and its LME realization in Subsec. <ref>. Our main findings are presented in Sec. <ref>, beginning with an exploration of the NH skin effect in Subsec. <ref> and then a discussion of how this translates to applications in sensing in Subsec. <ref>. We conclude with a discussion in Sec. <ref>. Additionally, we include two appendices. The first provides general comments on open quantum systems (Appendix <ref>), and in the second, we demonstrate how our results for the HN model also apply to the NH SSH model (Appendix <ref>).

§ NON-UNITARY QUANTUM EVOLUTION

In this section, our primary aim is not to present new findings but rather to provide context by discussing general aspects of quantum state evolution. We specifically focus on the evolution generated by the LME. It is important to note that the Liouvillian, which is responsible for governing time evolution within the LME framework, is not represented by an observable. This distinction leads to significant differences compared to Hamiltonian systems. For instance, the eigenvectors of the Liouvillian do not typically represent physical states. Nonetheless, an NH Hamiltonian somewhat falls between a traditional Hamiltonian and a master equation, but the field's terminology tends to lean more towards that of a Hamiltonian system. Having addressed these formal issues, the subsequent section will explore a concrete example as we apply our knowledge to the HN model.

§.§ The Lindblad master equation

In Appendix <ref>, we provide a more detailed description of open quantum systems. In this section, we will simply state that our system, denoted as 𝒮, is weakly coupled to its surrounding environment. This inevitably implies that the evolution of the system alone cannot be described solely through unitary time evolution. However, we can still assume that the state ρ̂(t), which characterizes the physical properties of the system, adheres to the following physical state conditions:
(i) Tr[ρ̂(t)] = 1 (normalization),
(ii) ρ̂(t) = ρ̂^†(t) (Hermiticity),
(iii) ρ̂(t) ≥ 0 (positivity).
The first condition corresponds to standard normalization, which preserves probabilities. The second condition ensures that all eigenvalues of ρ̂(t) are real, while the third condition guarantees that all eigenvalues are non-negative. These conditions are crucial for maintaining the probability interpretation of quantum mechanics, e.g., avoiding negative probabilities. It is important to note that these conditions must hold for all times, t.
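As a minimal numerical illustration, conditions (i)-(iii) can be checked directly for any candidate density matrix; the snippet below does so for a randomly generated qubit state (a purely illustrative example).

```python
import numpy as np

def is_physical(rho, tol=1e-10):
    """Check (i) unit trace, (ii) Hermiticity, (iii) positive semi-definiteness."""
    normalized = abs(np.trace(rho) - 1.0) < tol
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    positive = np.all(np.linalg.eigvalsh((rho + rho.conj().T) / 2) > -tol)
    return normalized and hermitian and positive

# Random qubit density matrix rho = A A^dagger / Tr[A A^dagger]
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)
print(is_physical(rho))  # True
```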
A mapping of physical states into new physical states, hence obeying the above conditions, is referred to as a completely positive trace-preserving map (CPTP) <cit.>. In classical systems that emulate quantum dynamics (for example after applying the paraxial approximation to light propagation in non-linear media), deviations from the first condition are analogous to the loss or gain of particles or intensity, leading to a departure from the probabilistic nature. Lindblad posed the question <cit.>: What is the most general differential equation describing the time evolution of a quantum state, ensuring that the evolved state ρ̂(t) remains a valid density operator at all times? The most general form of such a dynamical CPTP map (on a differential structure) can be expressed in the Lindblad form <cit.>
∂/∂t ρ̂ = ℒ̂[ρ̂] = i[ρ̂,Ĥ] + 𝒟̂[ρ̂] = i[ρ̂,Ĥ] + ∑_k γ_k(2L̂_kρ̂L̂_k^† - L̂_k^†L̂_kρ̂ - ρ̂L̂_k^†L̂_k).
Here, we introduce the Liouvillian operator ℒ̂, and the dissipator operator 𝒟̂ accounts for the influence of the environment. The γ_k values represent the “decay rates” for channel k, and the L̂_k's are the Lindblad jump operators <cit.>. The above LME can be put in the form
∂/∂t ρ̂ = ℒ̂_c[ρ̂] ≡ i(ρ̂Ĥ_eff^† - Ĥ_effρ̂) + 𝒥̂_c[ρ̂],
with the effective NH “Hamiltonian” defined as
Ĥ_eff = Ĥ - i∑_k γ_kL̂_k^†L̂_k,
and the jump super-operator
𝒥̂_c[ρ̂] = 2c∑_k γ_kL̂_kρ̂L̂_k^†.
Please note that we use “Hamiltonian” in quotation marks because, in general, it does not correspond to a traditional physical Hamiltonian. Specifically, when using “Hamiltonian” we refer to Eq. (<ref>). The subscript 0 ≤ c ≤ 1 parametrizes the master equation, where c=1 reproduces the correct LME (<ref>), and for c=0, the evolution is governed by the NH “Hamiltonian” Ĥ_eff. We will exclusively focus on time-independent jump operators. It is worth mentioning that a microscopic derivation of the LME (<ref>) typically relies on three approximations <cit.>: the Markovian, Born, and secular approximations. The properties of the environment and the system-environment Hamiltonian determine the dissipator 𝒟̂[ρ̂], including the rates γ_k and the jump operators L̂_k. Importantly, especially in cold-atom and optical systems, it is feasible, to a high degree, to engineer both the system and its coupling to the environment to achieve desired Liouvillians <cit.>. In the following section, we provide explicit suggestions on how to utilize relaxation to implement an HN-like model characterized by unbalanced left/right hopping in a 1D tight-binding lattice.

§.§ Some general properties of the Lindblad master equation

An eigenvector ρ̂_j of the LME and its corresponding eigenvalue μ_j are defined by the equation <cit.>
ℒ̂[ρ̂_j] = μ_jρ̂_j.
In principle, we should refer to ρ̂_j as an “eigenmatrix”, but for simplicity, we will continue to use the term “eigenvector”. This choice is justified since the LME (<ref>) can be vectorized as
d/dt|ρ⟩⟩ = ℒ̂_v|ρ⟩⟩
with the vectorized Liouvillian, ℒ̂_v, defined as
ℒ̂_v = -i(Ĥ⊗𝕀 - 𝕀⊗Ĥ^T) + ∑_k γ_k[2L̂_k⊗L̂_k^† T - L̂_k^†L̂_k⊗𝕀 - 𝕀⊗(L̂_k^†L̂_k)^T].
Now, given a finite Hilbert space dimension D(ℋ)=N, ρ̂ is represented as an N^2-component vector, |ρ⟩⟩, rather than an N×N matrix <cit.>. The vector |ρ⟩⟩ resides in a Liouville space ℒ, which is the direct product of two Hilbert spaces, such that the dimension of the Liouville space is D(ℒ)=N^2. For the numerical computations presented in Sec. <ref>, we utilize the vectorized version of the LME.
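A compact sketch of how such a vectorized Liouvillian can be assembled numerically, for a single decay channel and with row-major flattening of ρ̂ (matching the Kronecker-product form above), could look as follows; the two-level Hamiltonian and jump operator are placeholder examples, not the HN model studied later.

```python
import numpy as np

def liouvillian_matrix(H, Ls, gammas):
    """Vectorized Liouvillian L_v acting on vec(rho) (row-major flattening)."""
    N = H.shape[0]
    I = np.eye(N)
    Lv = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for g, L in zip(gammas, Ls):
        LdL = L.conj().T @ L
        Lv += g * (2 * np.kron(L, L.conj())
                   - np.kron(LdL, I) - np.kron(I, LdL.T))
    return Lv

# Placeholder two-level example: H = sigma_x, decay through sigma_-.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
Lv = liouvillian_matrix(sx, [sm], [0.5])

# Steady state: eigenvector with (numerically) vanishing eigenvalue.
w, V = np.linalg.eig(Lv)
rho_ss = V[:, np.argmin(np.abs(w))].reshape(2, 2)
rho_ss /= np.trace(rho_ss)            # fix normalization Tr[rho_ss] = 1
print(np.round(w.real.max(), 10), np.round(np.trace(rho_ss).real, 10))
```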
We select an appropriate (Fock) basis and express the operators Ĥ and L̂_k as matrices, which are employed to construct the Liouvillian matrix ℒ̂_v. Subsequently, we numerically diagonalize it to determine the Liouvillian spectrum μ_j and corresponding eigenvectors |ρ_j⟩⟩↔ρ̂_j (including the steady state). The steady state of Eq. (<ref>) satisfies ℒ̂[ρ̂_ss]=0. It is important to note that the vector components of |ρ⟩⟩ do not represent probability amplitudes. For the scalar product, we have

⟨⟨ρ|ϱ⟩⟩=Tr[ρ̂ϱ̂].

The uniqueness of the steady state ρ̂_ss has been extensively discussed in previous works, starting with studies by Spohn and continued by others; refer, for example, to Refs. <cit.>. For LMEs possessing symmetries, multiple steady states can arise, leading to non-trivial situations reminiscent of spontaneous symmetry breaking, akin to continuous phase transitions in closed systems <cit.>. We will return to this in the next section when analyzing an LME-extension of the HN model.

The LME is a CPTP map, i.e., dN/dt=0 where N=Tr[ρ̂(t)], and ⟨ψ|ρ̂(t)|ψ⟩≥0 for any state |ψ⟩. However, it is important to note that as soon as the previously introduced parameter c≠1, the CPTP property is generally lost. Specifically, in the NH limit with c=0, it is only under very special circumstances that the evolution remains trace-preserving, even when the spectrum is purely real. A significant consequence of the CPTP property of the LME can be directly inferred <cit.>. First, we observe that an eigenvector evolves as ρ̂_j(t)=e^μ_jtρ̂_j, and thus, its trace N_j(t)=Tr[ρ̂_j(t)]=e^μ_jtTr[ρ̂_j]. However, since dN(t)/dt=0, we must have Tr[ρ̂_j]=0 whenever μ_j≠0. In other words, every eigenvector, apart from the steady states, is traceless. According to (<ref>), this leads to the crucial result: Every eigenvector ρ̂_j of a Lindblad Liouvillian ℒ̂, apart from its steady state(s), is unphysical.

For classical systems, it is well-established that only the steady state of a master equation can exclusively consist of non-negative entries. In contrast, all other eigenvectors possess at least one entry that is negative, rendering them unsuitable for representing physical states, as these entries correspond to probabilities <cit.>. The result stated above parallels this observation, but now applies to quantum systems. Consequently, this motivates us to express the following: As with a master equation, the eigenvectors and eigenvalues of ℒ̂_c should not, in the general case, be interpreted as representing physical states and observable quantities, respectively.

This assertion is not confined solely to the LME case (c=1), as we contend (as elaborated below) that the evolution generated by Ĥ_eff should not be regarded as Hamiltonian time evolution. When one unravels the Lindblad master equation in quantum trajectories <cit.>, the NH (renormalized) time evolution arises through post-selection. Thus, in this case, while it may seem as if Ĥ_eff generates the time evolution, one should not forget that this holds only because of the post-selection constraint. In both classical and quantum scenarios, the eigenvectors still constitute a complete set, except at exceptional points, meaning that any other state can be expressed as a linear combination of them, ρ̂=∑_jp_jρ̂_j, using coefficients p_j. Thus, the time-evolved state can be expressed as

ρ̂(t)=ρ̂_ss+∑_jp_je^μ_jtρ̂_j,

where it is implied that the sum excludes eigenvectors with μ_j=0. To ensure CPTP behavior, the eigenvalues must satisfy Re(μ_j)≤0 <cit.>.
Consequently, all states ρ̂_j with a non-zero real part Re(μ_j) are exponentially suppressed as time progresses. The Liouvillian gap, defined as <cit.>

Δ_ℒ=min_{μ_j≠0}|Re(μ_j)|,

can be regarded as providing an initial estimate for the relaxation time scale towards the steady state (although the scenario can be more intricate <cit.>). In the context of quantum phase transitions, in the thermodynamic limit, the ground state exhibits non-analytic behavior at the critical point, and additionally, the spectrum necessarily becomes gapless at this juncture. Analogously, the steady state ρ̂_ss may display similar non-analytic behavior, accompanied by the vanishing of the Liouvillian gap <cit.>. It is important to note that the steady state does not decay, although under certain conditions <cit.>, Re(μ_j) may equal 0 while Im(μ_j)≠ 0, potentially resulting in non-stationary states. However, this requires that the eigenvalues μ_j must appear in complex conjugate pairs if they possess a non-vanishing imaginary part.

Returning to Eq. (<ref>), if we put the last term to zero (i.e., c=0) the evolution is governed by the NH “Hamiltonian” Ĥ_eff of Eq. (<ref>). In this case, let us assume that we know the right eigenvectors, Ĥ_eff|φ_l^R⟩=ν_l|φ_l^R⟩. The left eigenvectors then obey Ĥ_eff^†|φ_l^L⟩=ν_l^*|φ_l^L⟩. The eigenvalues and eigenvectors of ℒ̂_c=0 become

μ_j^0=i(ν_l^*-ν_k), ρ̂_j^0=|φ_k^R⟩⟨φ_l^R|.

Here, the superscript 0 denotes the case c=0, and j replaces the double indices (l,k). As mentioned earlier, if the Hilbert space dimension is finite, D(ℋ)=N, the Liouville space dimension is N^2, as seen in Eq. (<ref>) since 1≤ k, l≤ N implies 1≤ j≤ N^2. Thus, we have far more eigenvectors/values of the Liouvillian ℒ̂_c=0 than for the NH “Hamiltonian” Ĥ_eff. We can parametrize the eigenvectors of ℒ̂_c with the number c (i.e. ρ̂_j^c), such that ρ̂_j^c=0 reproduces the eigenvectors in (<ref>), while ρ̂_j^c=1 provides those of Eq. (<ref>). For finite-dimensional Hilbert spaces we expect the vectors ρ̂_j^c and eigenvalues μ_j^c to be analytic in c, except at possible exceptional points <cit.>. Letting c = 0, if ν_l is real, it follows that the vectors ρ̂_j^0=|φ_l^R⟩⟨φ_l^R| are steady states and also physical. However, if Im(ν_l)≠0, these eigenvectors ρ̂_j^0=|φ_l^R⟩⟨φ_l^R| are no longer steady states since their norms are not preserved.

Let us provide some context for the discussion above. As mentioned in the introduction, NH QM challenges well-established physical concepts. Furthermore, an ongoing debate surrounds the clear interpretation of the theory emerging from NH QM, particularly with regard to which states should be employed to describe time evolution <cit.>. One approach to circumvent potential issues, such as violating the no-signaling theorem, is the introduction of biorthogonal QM <cit.>. This approach hinges on the biorthogonality property, which allows for the construction of mutually orthogonal left |φ_j^L⟩ and right |φ_j^R⟩ eigenvectors, such that ⟨φ_l^L|φ_j^R⟩=δ_lj. For example, this property leads to the modified resolution of identity, which becomes 𝕀=∑_j|ϕ_j^R⟩⟨ϕ_j^L|. Additionally, the spectral resolution of an operator can be expressed as Ô=∑_jo_j|ϕ_j^R⟩⟨ϕ_j^L|, where o_j and |ϕ_j^R⟩ (⟨ϕ_j^L|) represent the eigenvalue and right (left) eigenvectors of the operator, respectively. In the biorthogonal framework, assuming an initial state |ψ(0)⟩, the state at a later time is described by two vectors: |ψ^L,R(t)⟩, referred to as the `left' and the `right' evolved state.
Expectations of observables Ô should also be evaluated according to this `state,' given by

𝒪_|ψ^L,R⟩(t)=⟨ψ^L(t)|Ô|ψ^R(t)⟩=Tr[Ôρ̂(t)],

with ρ̂(t)=|ψ^R(t)⟩⟨ψ^L(t)|. This can be extended further; the expectation of an operator for the j'th eigenvector becomes 𝒪_j=⟨φ_j^L|Ô|φ_j^R⟩. When applied to the position operator n̂_n=|n⟩⟨ n| (where |n⟩ represents the particle localized to site n) of the HN model, one finds that the “biorthogonal” eigenvectors are not localized at the edges but instead are delocalized within the bulk <cit.> (see Eq. (<ref>) below for the eigenvectors). Of course, if the generator of time evolution is a Hermitian Hamiltonian, this reproduces standard quantum mechanics since ⟨ψ^L| then equals the bra vector ⟨ψ^R|.

However, while the biorthogonal approach seems to resolve some issues, it comes with caveats. First, we note that ⟨φ_l^L|φ_j^R⟩=δ_lj does not set the norm of the left/right eigenvectors independently of each other. In fact, this results in a family of scalar products parametrized by a metric η <cit.>. Thus, an ambiguity, similar to a gauge freedom, arises. It has been argued that for a given Hamiltonian, there is a preferable metric to be used, which, however, implies that the metric, and thereby the scalar product, changes when, for example, you add one particle or site (modifying the Hilbert space dimension) to your system <cit.>. Secondly, the fact that a physical state must be ascribed both a bra- and a ket-vector is counter-intuitive, and we have not even addressed mixed states. Now, it has been argued that the biorthogonal scalar product is not the one to be used in order to describe the time evolution of actual physical systems <cit.>. This aligns with our description. If we take the LME as our starting point and think that we can, at least in theory, connect it to NH QM by `turning off' the jump terms, we should not end up with the biorthogonal left/right formalism, but rather with a right/right (or left/left) density operator <cit.>. In the post-selection approach, it is yet again the right/right state (complemented with renormalization) that describes the system.

It turns out that the spectral properties (<ref>) of the NH Hamiltonian Ĥ_eff can capture the full Liouvillian spectrum. To be more precise, under specific conditions, the spectrum of ℒ̂_c=0 can be directly mapped to the spectrum of ℒ̂_c=1 <cit.>. This mapping becomes feasible when the system Hamiltonian supports particle conservation, as indicated by [N̂,Ĥ]=0, and every jump operator satisfies [L̂_k,N̂]=L̂_k. This situation is relevant in the context of spontaneous decay <cit.>. It is worth noting that our Lindblad representation of the HN model, as presented in the following section, does not satisfy this second condition, and therefore, we cannot utilize such a property. Another interesting scenario arises when the Liouvillian has a quadratic dependence on the creation/annihilation operators â_k^†/â_k <cit.>. Similar to quadratic Hamiltonians, the quadratic form of the Liouvillian can be employed for diagonalization through a generalized Bogoliubov–de Gennes approach. This method has been recently applied in the investigation of the skin effect in Liouvillians <cit.>. However, it is important to note that the physical realization of the HN model we have in mind, as described in Subsec.
<ref>, does not fall within this class of solvable models.

§ MODEL SYSTEM: LIOUVILLIAN FOR THE HATANO-NELSON MODEL

§.§ The Hatano-Nelson model

A fundamental model often used in the study of NH QM is the one proposed by Hatano and Nelson. This model describes a single particle within a one-dimensional tight-binding lattice of N sites, where the hopping between sites is unbalanced in the left and right directions <cit.>. For open boundary conditions (BC), the model is represented by the (NH) Hamiltonian

Ĥ_HN=∑_n=1^N-1[(1-δ)â_n^†â_n+1+(1+δ)â_n+1^†â_n].

Here, â_n (â_n^†) represent the annihilation (creation) operators for a particle at site n, satisfying the single particle constraint N̂=∑_n=1^Nâ_n^†â_n≡∑_n=1^Nn̂_n=1. The parameter δ varies between 0 and 1 and indicates the degree of asymmetry in the hopping to the left and right. For a single particle, the model can also be expressed using bracket notation, for example, â_n+1^†â_n↔|n+1⟩⟨ n|, and so forth, where |n⟩ represents the number state of the particle localized at the n-th site. The model is typically presented in dimensionless units, scaled with respect to the symmetric hopping amplitude. It can be convenient to decompose the Hamiltonian into `real' and `imaginary' parts as follows,

Ĥ_HN=Ĥ_R+iδĤ_I,

where

Ĥ_R=∑_n=1^N-1(â_n^†â_n+1+â_n+1^†â_n)

and

Ĥ_I=i∑_n=1^N-1(â_n^†â_n+1-â_n+1^†â_n).

Both Ĥ_R and Ĥ_I are hermitian. Notably, in the case of open boundary conditions, the commutator [Ĥ_R,Ĥ_I] vanishes, except for the first and last diagonal elements.

For future reference, it is informative to introduce the ladder operators, denoted as Ê=∑_nâ_n^†â_n+1 and its hermitian conjugate Ê^†. These operators satisfy the following relationships: Ê|n⟩=|n-1⟩ and Ê^†|n⟩=|n+1⟩ (it should be noted that for a finite lattice, Ê|1⟩=0 and Ê^†|N⟩=0). When combined with the operator Ê_0, which follows the relation Ê_0|n⟩=n|n⟩, these three operators collectively constitute what is known as the Euclidean algebra <cit.>

[Ê,Ê_0]=Ê, [Ê^†,Ê_0]=-Ê^†, [Ê,Ê^†]=0.

Alternatively, we can introduce the “position” and “momentum” operators Ê_x=Ê+Ê^† and Ê_p=i(Ê-Ê^†), and the “Hamiltonian” reads Ĥ_HN=Ê_x+iδÊ_p.

For periodic BC, the real and imaginary parts commute and the eigenvectors are delocalized bulk states. For open BC the spectrum reads <cit.>

ν_j=2√(1-δ^2)cos(jπ/(N+1)),

while for periodic BC one has

ν_j=2[cos(2π j/N)-iδsin(2π j/N)],

where j=1,2,…,N. Consequently, the spectrum is purely real for open boundary conditions. However, the right and left eigenvectors are not orthogonal, which leads to non-unitary time evolution. All right eigenvectors, while not normalized, are exponentially localized towards the right edge and can be represented in terms of the number states as:

|φ_j^R⟩=∑_n=1^N((1+δ)/(1-δ))^n sin(njπ/(N+1))|n⟩.

The left eigenvectors can be obtained by substituting δ with -δ. By using this substitution, we find that the biorthogonal QM provides the eigenvector site occupations P_j(n)=⟨φ_j^L|n̂_n|φ_j^R⟩∝sin^2(njπ/(N+1)). The bulk nature of these states, when combining left and right eigenvectors, was previously observed by Hatano and Nelson <cit.>.

§.§ Lindblad implementation of the Hatano-Nelson model

Our objective is to construct jump operators L̂_k, which enable the NH “Hamiltonian” in Eq. (<ref>) to match the HN Hamiltonian (<ref>). Once these operators are identified, we propose a physical system where this can be implemented.
By separating the real and imaginary components of both equations, we should obtain the following relationships,

Ĥ_R=Ĥ

and

δĤ_I=-∑_kγ_kL̂_k^†L̂_k.

The first equation implies that our Hamiltonian should resemble a simple N-site tight-binding chain. The second identity is more complex, as the spectrum of Ĥ_I is not strictly non-negative, while L̂_k^†L̂_k is positive semi-definite (recall γ_k≥0). Nonetheless, we can resolve this issue by shifting the entire spectrum of the NH “Hamiltonian” by an imaginary constant of 2iδ <cit.>. It is worth noting that the decomposition into jump operators is not unique; instead, there are numerous possible combinations. To illustrate this, the right-hand side of equation (<ref>) can be compactly expressed as -L̂^†ΓL̂, where L̂=(L̂_1, L̂_2,…)^t is a column vector containing all jump operators (for a finite dimension D(ℋ)=N, the number of independent jump operators can be limited to N^2-1), and Γ is a diagonal matrix with γ_k along its diagonal. It is evident that this term remains invariant under a unitary transformation Û, i.e., L̂'=ÛL̂ and Γ'=ÛΓÛ^-1. The specific choice should be determined by the physical system in question. Two cases, in particular, come to mind (explicit matrix representations of both are sketched below):

* Local decay channels. At each site n, we can associate a jump operator L̂_n. When this operator acts on the state |n⟩, there is a non-negligible probability for the particle to transition to |n+1⟩. A straightforward approach might be to set L̂_n=â_nâ_n+1^†. However, this does not suffice, as L̂_n^†L̂_n results in n̂_n, which cannot be used to construct the desired effective NH “Hamiltonian”. Instead, we should employ the following expression:

L̂_n=iâ_nâ_n+1^†+n̂_n,

ensuring that the application of the jump operator to |n⟩ leaves the state in a superposition of |n⟩ and |n+1⟩.

* Collective decay channels. If the jump operator does not “keep track” of the particle's position, the jump occurs independently of the particle's position. In this case, we sum the local jump operators into a single one, i.e.

L̂_c=i∑_n=1^N-1â_nâ_n+1^†+∑_n=1^Nn̂_n≡Ê+𝕀,

where Ê was defined above Eq. (<ref>).

The two scenarios described above result in different physical realizations. Unraveling the LME in terms of stochastic 'quantum trajectories' <cit.> provides a physical picture of the system's evolution. In summary, when we possess complete knowledge about the environment, the system should evolve deterministically with the NH “Hamiltonian” (<ref>). However, this evolution is occasionally interrupted by quantum jumps, whose effects are described by the application of jump operators to the state. This process involves the instantaneous renormalization of the state under non-unitary time evolution.

In practice, `keeping track' of the environment involves monitoring any photons spontaneously emitted by the system. If a photon is detected, we can infer that the system has undergone a stochastic jump. This provides a distinct physical differentiation between the two scenarios. In the latter situation, the emitted photon does not yield information about the particle's position. In contrast, in the former scenario, a recorded photon not only signifies that a jump has occurred but also reveals the particle's location, effectively collapsing the wave function within the lattice. As a result, it becomes evident that we can anticipate different behaviors under the evolution driven by the different Lindblad operators. For the remainder of our discussion, we will focus on the collective decay channel.
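In the single-particle sector, where â_nâ_n+1^†↔|n+1⟩⟨ n| and ∑_nn̂_n acts as the identity, the two jump-operator choices above can be written down directly as matrices. The following NumPy sketch (our own illustration, implementing the explicit expressions for L̂_n and L̂_c as given in the text; function names and conventions are not from the original analysis) shows this construction:

import numpy as np

def local_jump_ops(N):
    """Local channels L_n = i a_n a_{n+1}^dag + n_n  ->  i|n+1><n| + |n><n|."""
    ops = []
    for n in range(N - 1):
        L = np.zeros((N, N), dtype=complex)
        L[n + 1, n] = 1j      # i |n+1><n|: jump from site n to n+1
        L[n, n] = 1.0         # |n><n|: the particle may also stay put
        ops.append(L)
    return ops

def collective_jump_op(N):
    """Collective channel L_c = i sum_n a_n a_{n+1}^dag + sum_n n_n."""
    return 1j * np.diag(np.ones(N - 1), k=-1) + np.eye(N, dtype=complex)

Either set of operators, together with the tight-binding Hamiltonian Ĥ_R, can be fed into the vectorized-Liouvillian routine sketched earlier to reproduce the two dynamical scenarios discussed above.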
However, for a finite lattice comprising N sites, we should note that the jump operator L̂_c of Eq. (<ref>) is insufficient. We need to supplement it with an additional dephasing operator L̂_1=â_1^†â_1 for the first site. With this consideration in mind we have

L̂_c^†L̂_c+L̂_1^†L̂_1=Ĥ_I+2𝕀,

and by further identifying γ_c=γ_1≡γ=-δ we have our LME corresponding to the HN model,

∂/∂ t ρ̂= i[ρ̂,Ĥ]+γ(2L̂_cρ̂L̂_c^†-L̂_c^†L̂_cρ̂-ρ̂L̂_c^†L̂_c)+γ(2L̂_1ρ̂L̂_1^†-L̂_1^†L̂_1ρ̂-ρ̂L̂_1^†L̂_1),

where the Hamiltonian Ĥ identifies the real part (<ref>) of the HN Hamiltonian.

We now offer a conceptual overview of how this model could be implemented within a cold atom setup <cit.>. A similar concept was recently proposed using motional sidebands in a trapped ion setup <cit.>. Our discussion centers on an atom that possesses two internal hyperfine levels: |g⟩ (ground) and |e⟩ (excited). This atom is confined within a one-dimensional lattice. Under the usual approximations, which include tight-binding and single-band assumptions, we implement a resonant classical drive between the two internal atomic states. This leads to the following second-quantized lattice Hamiltonian

Ĥ= -t_g∑_n(â_n^†â_n+1+h.c.)-t_e∑_n(b̂_n^†b̂_n+1+h.c.)+g∑_n(b̂_n^†â_n+h.c.).

Here, t_g and t_e represent the tunneling rates of the two atomic species, and â_n (â_n^†) and b̂_n (b̂_n^†) correspond to the single-site annihilation (creation) operators. The parameter g signifies the effective Rabi coupling. Notably, the driving mechanism couples atomic internal states within a single site. This implies a smooth laser profile and a sufficiently deep lattice. For simplicity, we can assume |t_e|≪|t_g| to suppress lattice dynamics of the excited states. We normalize energies in terms of the tunneling rate t_g, setting t_g=1 from this point forward. If the |e⟩-level rapidly relaxes to the ground state |g⟩, adiabatic elimination is applicable <cit.>. Consequently, we commence with the above Hamiltonian, complemented by a bath of oscillators inducing couplings between the two atomic levels. In this derivation, we employ the standard approximations, which include Born, Markov, and secular approximations <cit.>. The physical setup is depicted in Fig. <ref>. In the resulting LME for the atomic ground state |g⟩, the effective “decay rate” γ is directly proportional to the Rabi frequency g. It also significantly depends on the Franck-Condon factors, which are proportional to the overlaps between Wannier functions of the two species. The alignment of the two lattices and their lattice depths should ensure that a Wannier function localized to the n'th site of the “red” lattice primarily overlaps with Wannier functions at the n'th and n+1'th lattice sites of the “black” lattice. In this scenario, and under the assumption of precise phase alignment between the two decay channels (see Eq. (<ref>)), we achieve the desired LME.

§ CASE STUDIES

In this section we consider two phenomena within the HN model: the skin effect <cit.> and NH sensors <cit.>. The aim is to compare and contrast outcomes obtained using either the HN model of Eq. (<ref>) or the full LME (<ref>).

§.§ Skin effect

While the phenomenon was initially discovered by Hatano and Nelson in the 1990s <cit.>, the term “non-hermitian skin effect” was coined by Yao and Wang more than two decades later when they conducted a more in-depth and comprehensive study <cit.>.
When considering a NH lattice “Hamiltonian” with open boundary conditions, the skin effect is characterized by the eigenvectors becoming localized at the edges. This has a profound consequence: the system becomes exponentially sensitive to non-local perturbations <cit.>. This sensitivity can be demonstrated by extending standard perturbation theory to NH models <cit.>. Hence, NH edge localization and exponential sensitivity are two sides of the same coin. We will revisit the latter concept in Subsec. <ref>.

To characterize edge localization, we consider a lattice with N=2n+1 (where n is a positive integer) sites and introduce the matrix Ŝ=diag(-n:n) along with the scaled position expectation ξ defined as

ξ=1/N Tr[ρ̂_ssŜ].

If ξ=±1, the state ρ̂ is maximally localized at one of the edges, whereas ξ=0 indicates that the state is centered within the lattice. The uncertainty Δξ=√(⟨Ŝ^2⟩-⟨Ŝ⟩^2) determines the degree of localization of the state. Therefore, when Δξ is of the order of N, the state becomes delocalized over the entire lattice. The purity of the state is quantified by

P=Tr[ρ̂_ss^2].

Notably, in the context of NH QM, the (normalized) eigenvectors are pure, and their normalization ensures that P_NH=1.

In the thermodynamic limit of an infinite lattice, where [L̂_c,L̂_c^†]=0, the steady state of the LME simplifies to the maximally mixed state, ρ̂_ss∝𝕀 <cit.>. However, for a finite lattice, non-trivial steady states can emerge, particularly those localized at the edges. When comparing this with the HN model, the question arises regarding which state serves as the counterpart of the steady state in the HN model. One potential approach is to consider the right eigenvector |φ_j^R⟩ with the largest imaginary part of its eigenvalue, i.e., max[Im(ν_j)]. This vector would represent the steady state of Ĥ_HN, provided that we renormalize it under time-evolution. However, for open boundary conditions, the spectrum is real. Instead, we consider the eigenvector with the smallest eigenvalue, which corresponds to the state |φ_N^R⟩ as defined in Eq. (<ref>). In this way, we argue that this state, in some sense, mimics the ground state of the system. The fidelity between this state and the steady state of the LME is defined as

F=⟨φ_N^R|ρ̂_ss|φ_N^R⟩.

The spectrum and eigenvectors of the LME are determined through exact diagonalization of the vectorized Liouvillian (<ref>). The numerical results for the three defined quantities are presented in Fig. <ref>. In the upper plot (a), we display the position (<ref>) for both the LME and the HN model. We vary the rate γ and consider two system sizes. In the lower plot (b), we provide the purity (<ref>) of the steady state and the fidelity (<ref>). There is a significant distinction between the two models. The HN model exhibits edge localization, or the skin effect, for all γ values, and for γ=1, only the rightmost site is populated. In contrast, the LME supports two different phases: a delocalized phase for γ<γ_c=1/2 and a localized phase for γ>γ_c. In the localized phase, the state primarily occupies the right edge, resembling a skin state, while in the delocalized phase, the state spreads across the entire lattice. The purity, as seen in (b), illustrates that the steady state in the delocalized phase is approximately maximally mixed.
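The diagnostics used here are straightforward to evaluate once the steady state is at hand. The helper below is a schematic sketch of our own (assuming NumPy, a steady state obtained for instance with the routine sketched in the previous section, and a centered site index as our convention for Ŝ):

import numpy as np

def skin_diagnostics(rho_ss, phi_R):
    """Scaled position xi, its uncertainty, the purity P = Tr[rho^2], and the
    fidelity F = <phi_R| rho_ss |phi_R> with a (normalized) HN eigenvector."""
    N = rho_ss.shape[0]
    S = np.diag(np.arange(N) - (N - 1) / 2)   # centered site-index operator
    S_mean = np.real(np.trace(rho_ss @ S))
    xi = S_mean / N
    width = np.sqrt(max(np.real(np.trace(rho_ss @ S @ S)) - S_mean**2, 0.0))
    purity = np.real(np.trace(rho_ss @ rho_ss))
    fidelity = np.real(phi_R.conj() @ rho_ss @ phi_R)
    return xi, width, purity, fidelity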
Although the states ρ̂_ss and |φ_N^R⟩ exhibit similar localization properties near γ=1, the fidelity reveals that they are, in fact, quite distinct. Even though only the steady state represents a physical state, we can still analyze the properties of the remaining eigenvectors of the Liouvillian. Although not shown here, for γ>γ_c they tend to localize at the edge, while for γ<γ_c they become delocalized.

To provide insight into the critical properties of the delocalized-to-localized phase transition, we plot the real part of the Liouvillian spectrum μ_j in Fig. <ref> (a). It is evident that the Liouvillian gap Δ_ℒ, as defined in Eq. (<ref>), closes precisely at the critical point γ_c. In this figure, we consider a lattice with N=41 sites, resulting in a Liouvillian matrix of dimensions 1681×1681. In the thermodynamic limit, the real part of the spectrum becomes gapless and continuous within the delocalized phase. For a critical model, the closing of the gap with system size typically follows the universal scaling

Δ_ℒ∼ N^-1/ν,

where ν represents the correlation length critical exponent <cit.>. Our numerical findings affirm that the exponent ν is 1/2, as demonstrated in Fig. <ref> (b). This value corresponds to the exponent of a mean-field critical model, suggesting that quantum fluctuations are suppressed in comparison to fluctuations arising from the environment.

It might appear counterintuitive that the “fluctuation-induced breakdown” of the skin effect occurs for weak couplings (γ) rather than when the system is strongly coupled to its environment. This can be understood by considering the steady state, which describes the system after an infinitely long time. In the delocalized phase, the system is gapless, and the relaxation time diverges. Consequently, there is more room for fluctuations to manifest compared to the localized phase. This phenomenon resonates with what is known in adiabatic quantum computing and quantum control in atomic physics <cit.>.

For reasons that will become clear, let us introduce a generalized Fock space. The Liouvillian Fock states, denoted as |l⟩⟩ and residing in the Liouvillian space, can be identified through vectorization as

|n⟩⟨ m|→|N(n-1)+m⟩⟩,

such that 1≤ l≤ N^2. The vectorized LME is provided in (<ref>), and its formal solution is represented as

|ρ(t)⟩⟩=exp(ℒ̂_vt)|ρ(0)⟩⟩.

We can now extend the concept of Fock state lattices <cit.> to Liouvillian Fock state lattices. The idea here is to envision the Liouvillian matrix ℒ̂_v as describing hopping in a lattice, with its sites representing the Fock states |l⟩⟩. In other words, the components of the vector |ρ⟩⟩ correspond to the populations at different lattice sites. Specifically, the diagonal elements ε_l, defined as ε_l≡⟨⟨ l|ℒ̂_v|l⟩⟩, represent on-site “energies”, while the off-diagonal elements ⟨⟨ l|ℒ̂_v|k⟩⟩ characterize the tunneling amplitudes between sites l and k. Importantly, ε_l is real and ε_l≤0, implying that the diagonal terms induce on-site dissipation.

In the context of the problem at hand, it turns out that the Liouvillian Fock state lattice takes the form of a square lattice, as illustrated in Fig. <ref>. The thick dots represent the lattice sites, and the arrows indicate the allowed tunnelings between these sites. The color shading of the sites reflects the magnitude of on-site dissipation, decreasing from black to gray to white. Having identified the Fock state lattice, we make the following observations.
There is no asymmetry in the horizontal/vertical tunnelings, as one might expect from an HN model. However, there is diagonal tunneling that is non-zero only in one direction. In addition to the imaginary on-site terms, these diagonal directional tunneling terms imply that the Liouvillian matrix becomes NH. Furthermore, these diagonal processes result from quantum jumps, driven by fluctuations, and consequently, they do not appear in the lattice emerging from the HN model. More precisely, the corresponding HN lattice is simply a two-dimensional version of the HN model with imbalanced left/right tunnelings. Hence, quantum jumps not only induce diagonal tunneling terms but also alter the nearest neighbor tunneling amplitudes, making them balanced.As previously mentioned, the on-site “energies” vary in the lattice. For instance, at site l=N^2 (corresponding to the rightmost lattice site in the original 1D real space lattice), we have ε_N^2=0, indicating no local dissipation. In the Fock state lattice, this corresponds to the site in the lower left corner (white dot). Along the edges originating from the l=N^2-site, there is moderate dissipation (gray dots), while in the bulk, the dissipation is most pronounced (black dots). Let us introduce ε_b and ε_e for the bulk and edge on-site “energies” respectively (the black and gray sites in the figure), and t_0 and t_d for the horizontal/vertical (nearest neighbor) and diagonal (next nearest neighbor) tunneling amplitudes respectively. These lattice parameters are related to γ as shown in Tab. <ref>. If there were no tunnelings (t_0=t_d=0), the system (ground/steady state) would localize at the site represented by the white dot. The tunneling terms tend to delocalize the steady state. As γ increases, the dissipation-induced localization becomes stronger, but at the same time, tunneling-induced delocalization also strengthens. Before the critical point, γ<γ_c=1/2, the nearest neighbor tunneling rates (2|t_0|) are greater than ε_b, causing the system to delocalize. Beyond the critical point, γ>γ_c, where 2|t_0|<ε_b, the edge skin mode wins. This reasoning demonstrates how the transition can be understood from the Liouvillian Fock state lattice.It is worth noting that similar delocalization-localization transitions have been discussed in the past with a model described as∂ρ̂/∂ t=i[ρ̂,Ŝ_x]+γ/S(2Ŝ^-ρ̂Ŝ^+-Ŝ^+Ŝ^-ρ̂-ρ̂Ŝ^+Ŝ^-).This model was introduced in the late 1970s to study quantum optical bistability <cit.>. In this context, the Ŝ-operators represent collective spin operators, and S is the total (conserved) spin. The Hamiltonian can be seen as describing a classical drive of the spin, while the Lindblad dissipation represents spontaneous decay of the spin. One attractive aspect of this model is that the steady state can be determined analytically <cit.>, and it exhibits a mean-field critical point at γ_c=1/2, where the system transitions from being magnetized (localized) to paramagnetic (delocalized) <cit.>. The transition is continuous, without any apparent spontaneous symmetry breaking, which has generated some debate <cit.>.To draw a connection to the model studied in this paper, it is important to note that the Hamiltonian component Ŝ_x tends to delocalize the state in the spin Fock basis (the |S,m⟩-eigenstates of Ŝ_z), while the dissipative part drives the state toward the “edge” |S,-m⟩. We have numerically solved the LME using Ĥ=Ê_x and L̂=Ê, with Ê_x and Ê being operators from the Euclidean algebra (<ref>). 
This resulted in a similar phase transition as observed for the LME (<ref>). Consequently, we draw a comparison between two models: one supporting an SU(2) algebra and the other an Euclidean algebra, both being otherwise equivalent. Of course, the spin operators come with a square-root normalization factor when acting on the Fock states, but this factor does not alter the Fock state lattice geometry; it induces a strain in the lattice <cit.>. It is important to mention that this connection is relevant only for a finite lattice, as in the infinite case, all operators would mutually commute, i.e., [Ê,Ê^†]=[Ê,Ê_x]=[Ê^†,Ê_x]=0.

The critical behavior of the bistability model has been extensively studied in Ref. <cit.>. It was argued that the critical behavior, devoid of symmetry breaking, arises from the `softening' of a first-order transition into a continuous one. In the current model, we appear to observe similar universal behavior. Specifically, the transition is continuous, the steady state remains unique throughout, and hence, there is no apparent symmetry breaking occurring. It is worth noting that quenched disorder can alter the nature of a transition from first to second order in classical critical models <cit.>. Tri-critical points provide another example where the order of a transition changes, and in the Potts model, the transition can shift from first to second order as a system parameter varies <cit.>. Continuous phase transitions without symmetry breaking can also occur in fermionic models when the Fermi sea undergoes volume changes <cit.>. It remains unclear whether the mechanism underlying the observed criticality in this open system (and in the bistability model of Eq. (<ref>)) differs in nature from those listed in the references mentioned above. After all, we are dealing with an open, non-Hamiltonian system. In Ref. <cit.>, it was noted that the full Hamiltonian model, including the degrees of freedom of the environment, exhibited a first-order phase transition, and it was only in the limiting case of infinite separation in time-scales that the initially first-order, discontinuous phase transition became continuous. Something similar might occur in our system, such as starting from the Hamiltonian (<ref>) and coupling it to a bath of oscillators.

§.§ Non-hermitian sensors

In recent years, there have been several proposals on how systems described by NH “Hamiltonians” can be leveraged to enhance sensor performance. Various concepts for these implementations have been explored, including harnessing the non-analyticity associated with exceptional points <cit.>, non-unitary evolution <cit.>, and the phenomenon of exponential sensitivity <cit.>. Motivated in part by a recent experimental demonstration <cit.>, our focus will be on the latter aspect and its application to the HN model. We aim to address two key questions in this context: firstly, what role do fluctuations play in the sensor setup, and secondly, how does disorder impact the performance of the NH sensor?

§.§.§ Non-hermitian sensor with fluctuations

Let us begin by summarizing the fundamental concept of the NH sensor <cit.>. As previously mentioned, the skin effect is directly linked to exponential sensitivity. To illustrate this, let us consider the N-site open BC HN model. The spectrum for this model was given in Eq. (<ref>), and we observed how it is real and symmetric around zero. This symmetry implies that for an odd number of sites, there is a zero eigenvalue, denoted as ν_z≡ν_j=0 for j=(N+1)/2.
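This symmetric spectrum, and the zero mode it contains for odd N, is easily verified numerically. The short sketch below (an illustration of our own; the parameter values N=21 and δ=0.3 are chosen arbitrarily) builds the open-boundary HN matrix and compares its eigenvalues with the analytical expression quoted above:

import numpy as np

def hn_hamiltonian(N, delta):
    """Open-boundary Hatano-Nelson matrix: hopping (1-delta) to the right
    neighbour (a_n^dag a_{n+1}) and (1+delta) to the left (a_{n+1}^dag a_n)."""
    H = np.zeros((N, N))
    for n in range(N - 1):
        H[n, n + 1] = 1.0 - delta
        H[n + 1, n] = 1.0 + delta
    return H

N, delta = 21, 0.3
nu = np.linalg.eigvals(hn_hamiltonian(N, delta))
nu_exact = 2 * np.sqrt(1 - delta**2) * np.cos(np.arange(1, N + 1) * np.pi / (N + 1))
print(np.allclose(np.sort(nu.real), np.sort(nu_exact)))  # spectrum matches the formula
print(np.min(np.abs(nu)))                                # ~0: the zero mode for odd N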
Now, let us further assume a very weak (real) coupling denoted as |ϵ|≪1, which connects the first and last sites. This coupling represents a non-local perturbation of the form

V̂=ϵ(â_1^†â_N+h.c.).

Our goal here is to determine the value of |ϵ|. When treating V̂ as a perturbation, we find (for NH models, perturbation theory involves both left and right unperturbed states since they form the orthogonal states used for the resolution of identity) that the lowest-order correction to the zero eigenvalue scales as <cit.>

ν_z→δν, δν∼ϵ e^α N.

Here, α depends on specific system details. Consequently, the eigenvalue remains real but shifts away from zero. Notably, this shift can be significant as long as N is sufficiently large. Beyond a critical perturbation ϵ_c(N) (which depends on the system size and other system parameters), the scaling breaks down, and the eigenvalue becomes complex.

In line with the experimental work of Ref. <cit.>, if the system is initially prepared in the zero eigenvalue state |φ_z^R⟩ and evolves for a brief time t under the perturbed Hamiltonian Ĥ_HN+V̂, resulting in the state |ψ⟩, then the decay of the auto-correlation function

A(N)=|⟨φ_z^L|ψ⟩|

serves as a measure of the perturbation. Consequently, for a given perturbation strength ϵ, due to the exponential sensitivity, the logarithm of A(N) should exhibit a linear relationship with the system size N as long as ϵ<ϵ_c(N). The results, obtained from numerical time-propagation of the initial state, are presented in Fig. <ref> as solid black curves. In the figure, we display the logarithm of the auto-correlation function for four different HN parameters, denoted as δ. It is worth recalling that in the Lindblad realization of the HN model, δ corresponds to the rate γ. As expected, the anticipated linear dependence on N is demonstrated in all four examples within the figure. It is particularly noteworthy that for large δ values, the exponential dependence as per Eq. (<ref>) persists only for relatively small system sizes. Conversely, this regime can extend over significantly larger system sizes for small δ values. The slope of the curves is determined by the parameter α in Eq. (<ref>), which is found to scale as α∼δ. This relationship explains why rather large system sizes N are required to reach the critical ϵ_c for small δ values. In practical implementations, state tomography provides the time-evolved state |ψ(t)⟩. With prior knowledge of |φ_z^L⟩, it becomes possible to estimate the decay of A(N) and, consequently, δν. This approach has been realized with classical light pulses in waveguides, where fluctuations likely play a less significant role (see discussions in Sec. <ref>) <cit.>.

Now, turning our attention to the LME, we consider the same type of perturbation (<ref>). The exponential sensitivity in this context is a consequence of the skin effect. As we have discussed, the skin effect can be lost in the LME, leading to the steady state becoming delocalized when γ<γ_c=1/2. Consequently, we expect that exponential sensitivity is only present in the localized phase. Indeed, within this phase, the eigenvectors of the Liouvillian become localized to the corner of the Liouvillian Fock state lattice <cit.>. In Fig. <ref>, we provide two examples of the Liouvillian spectrum: one in the delocalized phase (a) and the other in the localized phase (b).
As anticipated, the exponential sensitivity is no longer present when the system transitions into the delocalized phase. However, in the localized phase, the Liouvillian also exhibits exponential sensitivity, a phenomenon that has been observed in the past <cit.>. Nevertheless, the challenge remains to identify an observable quantity that is exponentially sensitive and capable of extracting information about the perturbation. For the HN model, a natural choice is to consider the state associated with the zero eigenvalue, |φ_z^R⟩. As we have seen, the initial decay of its auto-correlation function directly reflects the magnitude of the perturbation. In the case of the full LME, the primary option would be the steady state, denoted as ρ̂_ss^(ϵ=0) (where ϵ=0 signifies the unperturbed state). We can analyze how this steady state evolves under the perturbed Liouvillian. Alternatively, we could also contemplate using the same initial state |φ_z^R⟩ as in the HN model and explore the influence of fluctuations on the auto-correlation function (<ref>). Through numerical simulations, we find no signs of exponential sensitivity in either of these scenarios.

In the latter case, the decay of the auto-correlation function is predominantly driven by the relaxation towards the system's steady state, and it occurs on a timescale proportional to the inverse Liouvillian gap (<ref>). The perturbation-induced decay happens on an entirely different timescale and gets obscured by the relaxation of the steady state. Extending the steady state relaxation time by moving closer to the critical point does not result in a favorable situation, as it rapidly suppresses exponential sensitivity when the system becomes more delocalized. In our numerical experiments, we have not identified a regime where the system effectively functions as a sensor when initialized with the |φ_z^R⟩ state. This same argument applies to initializing the system in the unperturbed steady state, meaning that the relaxation of ρ̂_ss^(ϵ=0) to the final perturbed steady state ρ̂_ss^(ϵ) dominates the evolution. Importantly, this behavior does not exhibit a strong N-dependence. More precisely, we find that the overlap A(N)=Tr[ρ̂_ss^(0)ρ̂_ss^(ϵ)] follows a linear N-dependence rather than an exponential one.

In summary, even though the spectrum of the Liouvillian exhibits exponential sensitivity to non-local perturbations in the localized phase, it is not evident how this can be effectively harnessed for sensing purposes. Generalizing the concept of the HN sensor does not appear to yield favorable results. It remains uncertain whether there might be another experimentally measurable quantity that could salvage the sensor's performance when accounting for fluctuations.

§.§.§ Non-hermitian sensor with disorder

Another significant limitation of the NH sensor that we need to address, which does not stem from environment-induced fluctuations, pertains to disorder. Consider the presence of local quenched disorder, which can arise in an imperfect sensor, causing the actual Hamiltonian to take the form

Ĥ_dHN=Ĥ_HN+∑_n=1^Nκ_nn̂_n,

where κ_n∈[-W,W] represents a random onsite offset. Here, W denotes the disorder strength. We assume W≪1, i.e., the disorder is much weaker than the tunneling rate, ensuring that the system is not localized on any relevant length scales. However, it is important to note that W can be larger than the perturbation strength that the sensor is meant to measure.
Therefore, a potential breakdown of the sensor should not be attributed to hindrance in propagation due to localization. In Fig. <ref>, we present the sensor's performance in the presence of quenched disorder, indicated by the red dashed line. The disorder strength is approximately 0.05% of the tunneling strength, and to reduce the scatter, we averaged over 1000 disorder realizations. Our findings reveal that below a certain system size, denoted as N_l, the sensor primarily detects the disorder, with the perturbation signal getting lost in the noise generated by the disorder. However, beyond N_l, the perturbation signal begins to dominate the disorder noise, thanks to its exponential increase, and the disorder's impact on the signal diminishes significantly.It is important to note that above a certain upper system size, N_u, the perturbation signal no longer follows the exponential form described in Eq. (<ref>). Consequently, when given a perturbation ϵ and a disorder strength W, there exists a window of system sizes N_l<N<N_u within which the NH sensor operates effectively. These windows, corresponding to the same values of δ as seen in Fig. <ref>, are displayed in Fig. <ref>.In summary, when δ is small, it implies the need for larger sensor sizes N. In practice, achieving extremely large chains may not be feasible, making it desirable to use a larger δ. Our results are computed using the same methodology as for the non-disordered sensor, involving the propagation of the initial state and the analysis of the auto-correlation function's decay. We have also verified numerically that similar results can be obtained by directly extracting δν from the full spectrum, rather than indirectly calculating it from A(N). § DISCUSSION AND CONCLUDING REMARKSWhile NH Hamiltonians offer utility in modeling a quantum system's interaction with its environment, they can also lead to complex, non-physical outcomes. A significant challenge in this endeavor arises from the intricate entanglement between the quantum system and its environment, resulting in a mixed state representation rather than the pure state representation ρ̂=|ψ⟩⟨ψ|. However, it is worth noting that there are scenarios in which the quantum system's state remains nearly pure. In such cases, relevant observables 𝒪=Tr[ρ̂Ô] can be well described by a pure state. Many experimental activities involve classical emulations of quantum systems, where classical systems obey equations of motion similar, or equivalent, to those found in quantum systems described by NH Hamiltonians. While classical systems are not represented by states in a Hilbert space, the fluctuation theorem extends to both classical and quantum systems. In a strict sense, any open system is subject to fluctuations from its environment, with Brownian motion serving as a classic example. However, this influence may be negligible for macroscopic objects. Classical states, described as coherent states in the quantum realm, are known to be robust against fluctuations, such as the state of the electromagnetic field originating from a laser <cit.>.The role of fluctuations becomes a more intricate matter when dealing with systems deep in the quantum regime. To circumvent this issue, researchers have employed the concept of post-selection <cit.>, where experiments are conducted under full observation, and only the data from experimental runs that do not undergo “quantum jumps”, such as spontaneous photon emission, are considered <cit.>. 
However, this approach often leads to a significantly reduced probability of successful experimental runs over time, as they become exponentially suppressed.In this study, we explored the time evolution generated by the Liouvillian without invoking measurement-induced projections or assuming a semi-classical regime. Specifically, we focused on two well-studied phenomena within the framework of NH QM: the skin effect and NH sensors.While previous arguments suggested that both the skin effect and NH sensors should persist when fluctuations are taken into account <cit.>, our study unveiled subtleties in this context, leading to new insights. Importantly, the Liouvillian is not uniquely determined by a NH “Hamiltonian”. This multiplicity is akin to the purification of mixed states <cit.>, where infinitely many different pure states can construct a given mixed state. Consequently, there exist infinitely many Liouvillians corresponding to equivalent NH “Hamiltonians”, and these Liouvillians can exhibit qualitative differences.In our study, we chose to investigate a Liouvillian represented by collective quantum jumps, as defined in Eq. (<ref>). We also considered an alternative model with local jump operators, as defined in Eq. (<ref>). Upon numerical analysis of the latter model, we found that it lacks critical behavior – the delocalized phase does not emerge in this case. This finding implies that the collectiveness of quantum jumps is essential for the appearance of a critical point. The criticality observed in our model is an example of a driven-dissipative non-equilibrium phase transition <cit.>. This specific type of transition is qualitatively different from phase transitions described by the Ginzburg-Landau paradigm <cit.>. The transition is continuous and lacks any apparent symmetry breaking, yet the critical exponent for the correlation length aligns with the characteristics of a mean-field transition. This observation resonates with related models, such as the one represented in Eq. (<ref>), which has been explored in the context of optical bistability.The existence of a delocalized phase signifies the breakdown of the NH skin effect. In this phase, the system exhibits an extended steady state that approximately populates the lattice sites uniformly. The coherences between the sites practically vanish, implying that the steady state approximates a maximally mixed state or an infinite-temperature state. In the case of periodic BC, the maximally mixed state is the exact steady state, a concept that was briefly mentioned in a previous study <cit.>, which investigated a quadratic Liouvillian.It is essential to recognize that not only the steady state becomes delocalized for γ<γ_c, but the other Liouvillian eigenvectors ρ̂_j also undergo delocalization in this phase. This delocalization affects the entire spectrum of the Liouvillian.Indeed, the absence of a skin effect in the delocalized phase results in the system's lack of exponential sensitivity. Consequently, it cannot be employed for sensing purposes in this regime. However, when the losses are substantial, i.e., γ>γ_c, the Liouvillian does exhibit exponential sensitivity once it enters the localized phase.Nonetheless, despite the presence of exponential sensitivity in this regime, our exploration did not yield a suitable measure that could effectively extract the quantity to be detected. 
Attempts to directly generalize experiments like the one reported in <cit.>, which used classical light, were ineffective in the context of Liouvillian systems. This inefficacy is primarily due to the dominance of other mechanisms, such as relaxation towards a steady state, during the early stages of evolution. We also considered observing the long-term evolution and the fidelities of the resulting steady states, but these did not exhibit exponential sensitivity either. While the specific observables explored in this study did not yield the desired results, it remains a possibility that more refined observables could be identified for use in Liouvillian-based sensors. However, further investigation would be required to identify and assess such observables.

In our study, we have also demonstrated that disorder introduces limitations to the applicability of NH sensors, even without considering fluctuations. This finding underscores the importance of the sensor's size relative to the disorder strength. In particular, we observed that for the NH sensor to effectively detect a perturbation signal, the signal strength must exceed a critical value determined by the disorder strength, which in practice means that the sensor must be sufficiently large to overcome the detector noise generated by the disorder. These findings emphasize the importance of carefully considering disorder effects and ensuring that the sensor size and sensitivity are suitable for practical applications. They also suggest that engineering sensors with greater robustness against disorder may be necessary for reliable measurements in realistic experimental conditions.

While our analysis has been centered on the HN model, the applicability of our results to other models remains an open question. In Appendix <ref>, however, we extend our findings to the NH SSH chain and demonstrate that the qualitative conclusions apply. Nevertheless, one can imagine the potential for further research in other directions that address slightly different questions, including topology, localization, and the realm of many-body NH physics. We propose that the introduction of the Liouvillian Fock state lattice, as illustrated in Fig. <ref>, offers a promising tool for gaining fresh insights. Particularly fascinating is the observation that, for the 1D HN model, the Liouvillian Fock state lattice is two-dimensional. This observation leads to the intriguing speculation that, for a D-dimensional model coupled to an environment, the relevant physics might manifest in D+1 dimensions. Furthermore, the interplay of symmetries in NH quantum mechanics compared to the full LME <cit.> remains an open question, partially due to the complexity of relating biorthogonal QM to real physical systems. These questions are left for future studies.

§ OPEN QUANTUM SYSTEMS

In this appendix, we outline some general concepts related to the dynamics of open quantum systems. The typical scenario is illustrated in Fig. <ref>. In (a), we depict a small system 𝒮 interacting with a larger environment ℰ. Information is exchanged between these subsystems, with the rate of exchange determined by the parameter γ. This exchange includes processes such as particle or energy losses and decoherence. The combined system's evolution is described by the Schrödinger equation.
For instance, an initial pure separable state evolves as

|Ψ(t)⟩=Û(t)|ψ_s(0)⟩⊗|ψ_E(0)⟩.

Here, the time-evolution operator Û(t) is generated from the full Hamiltonian Ĥ=Ĥ_s+Ĥ_E+Ĥ_sE, where the first two terms represent the system and environment sub-Hamiltonians, and the last term accounts for their interaction. The state of the system at time t is obtained by taking the partial trace of the full state, as illustrated in Fig. <ref> (b). As time progresses, the initially separable state of the system-environment becomes entangled. Given that the initial state is pure, this entanglement between the system and the environment is reflected in the system's reduced state, ρ̂_s(t), being mixed. For instance, the von Neumann entropy S_vN=-Tr[ρ̂(t)lnρ̂(t)]>0 can be used as a measure of the amount of entanglement <cit.>. From this point forward, we will omit the subscript “s” for the reduced density operator of the system.

In the Markovian approximation, as assumed for the LME (<ref>), this information flow out of the system is forever lost to the environment <cit.>. Furthermore, in deriving equation (<ref>), we also assume the validity of the Born approximation, which implies that, due to the substantial difference in system sizes, the state of the environment is unaffected by the presence of the small system <cit.>.

As previously mentioned, the loss of information can happen through either dissipation or decoherence. In the former, we often think of particle losses, while in the latter, it typically results from uncontrolled energy shifts. Regardless of the specific mechanism, both processes tend to cause the state of the system to become mixed in most cases. This concept is at the core of fluctuation-dissipation or quantum regression theorems <cit.>. However, if one has complete access to the environment, in principle, it is possible to extract all the information about the system. In this scenario, the system's state can be ascribed a pure state. This would be the case of post-selection <cit.>, briefly touched upon in the final Sec. <ref>.

§ LIOUVILLIAN FOR NON-HERMITIAN SSH MODEL

It is straightforward to generalize the HN model to the Su–Schrieffer–Heeger (SSH) model <cit.>. The Hamiltonian of the hermitian SSH model is

Ĥ_SSH=(1-χ)∑_n=1^Nâ_n^†b̂_n+(1+χ)∑_n=1^Nâ_n+1^†b̂_n+h.c.

The parameter χ, which takes values in the interval [-1,0], dictates the relative strengths of consecutive tunneling amplitudes in the SSH model. For χ=0, the model reverts to the conventional tight-binding model. However, for χ≠0, the lattice exhibits a bipartite structure, with each unit cell containing two distinct types of sites: the a-sites and the b-sites. In cases with an odd number of sites, the SSH model supports a zero-energy topological edge state, also known as a symmetry-protected state, which exhibits exponential localization near the chain's edge. The specific value of χ determines the size of the energy gap that separates these edge states from the remaining bulk states, as well as the degree of localization of the edge state. These edge states tend to be more isolated from the rest, potentially enhancing their robustness for various applications. Consequently, it is worthwhile to investigate how the findings of the present paper extend to the SSH model. Transitioning to the SSH model from the tight-binding Hamiltonian in Eq. (<ref>) is straightforward. The results of numerical simulations are illustrated in Fig.
<ref>, displaying the spectra, scaled positions (<ref>), and widths Δξ=√(⟨Ŝ^2⟩-⟨Ŝ⟩^2).The plot clearly demonstrates that χ influences the transitions, shifting them toward smaller values of γ, and the transition itself becomes smoother. Nonetheless, based on the available numerical data, it appears that the transition remains distinct and is not merely a crossover. A noteworthy observation is that, even when the steady state is not precisely centered in the middle of the lattice, its width remains close to the maximum. Importantly, it should be noted that the model exhibits asymmetry concerning the sign of χ. This asymmetry arises from the drift induced in the lattice by the jump operators, and reversing the sign of χ results in a distinct model. The author acknowledges financial support from VR-Vetenskapsråset (The Swedish Research Council), and is thankful for fruitful discussions with Elisabet Edvardsson and Emil Bergholtz.999uedaM. A. Miri, and A. Alu, Science, 363, eaar7709 (2019); Y. Ashida, Z. Gong, and M. Ueda, Adv. Phys. 69, 249 (2020); E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Rev. Mod. Phys. 93, 015005 (2021).bender C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80, 5243 (1998); C. M. Bender,Rep. Prog. Phys. 70, 947 (2007).bioqmC. M. Bender, D. C. Brody, and H. F. Jones, Phys. Rev. Lett. 89, 270401 (2002); D. C. Brody, J. Phys. A Math. Theo. 47, 035305 (2013); F. K. Kunst, Kunst, F. K., E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Phys. Rev. Lett. 121, 026808 (2018). largeqm C. M. Bender, D. C Brody, and H. F Jones, Am. J. Phys. 11, 1095 (2003).skinref S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018).skineffect N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Phys. Rev. Lett. 124, 086801 (2020); X. Zhang, T. Zhang, M. H. Lu, and Y. F. Chen, Adv. Phys. X 7, 2109431 (2022).skineffect2 K. Kawabata, T. Numasawa, and S. Ryu, Phys. Rev. X 13, 021007 (2022). ee E. Edvardsson and E. Ardonne, Phys. Rev. B 106, 115107 (2022).expsens R. Koch, and J. C. Budich, Euro. Phys. J. D 74, 1 (2020); C.-X. Guo, C.-H. Liu, X.-M. Zhao, Y. Liu, and S. Chen, Phys. Rev. Lett. 127, 116801 (2021).sensor J. C. Budich and E. J. Bergholtz, Phys. Rev. Lett. 125, 180403 (2020).sensor4 M. Parto, C. Leefmans, J. Williams, and A. Marandi, arXiv:2305.03282.mw R. Kubo, Rep. Prog. Phys. 29, 255 (1966); L. Mandel, and E. Wolf, Optical coherence and quantum optics, (Cambridge university press, Cambridge, 1995).carmichael H. J. Carmichael, Statistical Methods in Quantum Optics 1, (Springer, 2002).nosignaling Y.-C. Lee, M.-H. Hsieh, S. T. Flammia, and R.-K. Lee, Phys. Rev. Lett. 112, 130404 (2014); J.-S. TANG, et al., Nature Phot. 10, 642 (2016); A. Kumari and U. Sen, arXiv:2202.02744.nocloning X. Zhan, K. Wang, L. Xiao, Z. Bian, Y. Zhang, B. C. Sanders, C. Zhang, and P. Xue, Phys. Rev. A 101, 010302(R) (2020).LR Y. Ashida and M. Ueda, Phys. Rev. Lett. 120, 185301 (2018); N. Matsumoto, K. Kawabata, Y. Ashida, S. Furukawa, and M. Ueda, Phys. Rev. Lett. 125, 260601 (2020); B. Barch, N. Anand, J. Marshall, E. Rieffel, and P. Zanardi, arXiv:2305.12054.entincreaseS.-L. Chen, G.-Y. Chen, and Y.-N. Chen, Phys. Rev. A 90, 054301 (2014); A. K. Pati, arXiv:1404.6166; J.-W. Wen, C. Zheng, X.-Y. Kong, S.-J. Wei, T. Xin, and G.-L. Long, Phys. Rev. A 99, 062122 (2019).unred Y. T. Tu, Y. C. Tzeng, and P. Y. Chang, SciPost Phys. 12, 194 (2022); M. Fossati, F. Ares, and P. Calabrese, Phys. Rev. B 107, 205153 (2023); C. T. Hsieh, and P. Y. Chang, 6, 062 (2023)hn N. Hatano and N. D. Nelson, Phys. Rev. Lett. 
The author acknowledges financial support from Vetenskapsrådet (the Swedish Research Council), and is thankful for fruitful discussions with Elisabet Edvardsson and Emil Bergholtz.

ueda M. A. Miri and A. Alù, Science 363, eaar7709 (2019); Y. Ashida, Z. Gong, and M. Ueda, Adv. Phys. 69, 249 (2020); E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Rev. Mod. Phys. 93, 015005 (2021).
bender C. M. Bender and S. Boettcher, Phys. Rev. Lett. 80, 5243 (1998); C. M. Bender, Rep. Prog. Phys. 70, 947 (2007).
bioqm C. M. Bender, D. C. Brody, and H. F. Jones, Phys. Rev. Lett. 89, 270401 (2002); D. C. Brody, J. Phys. A: Math. Theor. 47, 035305 (2013); F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Phys. Rev. Lett. 121, 026808 (2018).
largeqm C. M. Bender, D. C. Brody, and H. F. Jones, Am. J. Phys. 11, 1095 (2003).
skinref S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018).
skineffect N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Phys. Rev. Lett. 124, 086801 (2020); X. Zhang, T. Zhang, M. H. Lu, and Y. F. Chen, Adv. Phys. X 7, 2109431 (2022).
skineffect2 K. Kawabata, T. Numasawa, and S. Ryu, Phys. Rev. X 13, 021007 (2022).
ee E. Edvardsson and E. Ardonne, Phys. Rev. B 106, 115107 (2022).
expsens R. Koch and J. C. Budich, Euro. Phys. J. D 74, 1 (2020); C.-X. Guo, C.-H. Liu, X.-M. Zhao, Y. Liu, and S. Chen, Phys. Rev. Lett. 127, 116801 (2021).
sensor J. C. Budich and E. J. Bergholtz, Phys. Rev. Lett. 125, 180403 (2020).
sensor4 M. Parto, C. Leefmans, J. Williams, and A. Marandi, arXiv:2305.03282.
mw R. Kubo, Rep. Prog. Phys. 29, 255 (1966); L. Mandel and E. Wolf, Optical coherence and quantum optics, (Cambridge University Press, Cambridge, 1995).
carmichael H. J. Carmichael, Statistical Methods in Quantum Optics 1, (Springer, 2002).
nosignaling Y.-C. Lee, M.-H. Hsieh, S. T. Flammia, and R.-K. Lee, Phys. Rev. Lett. 112, 130404 (2014); J.-S. Tang et al., Nature Phot. 10, 642 (2016); A. Kumari and U. Sen, arXiv:2202.02744.
nocloning X. Zhan, K. Wang, L. Xiao, Z. Bian, Y. Zhang, B. C. Sanders, C. Zhang, and P. Xue, Phys. Rev. A 101, 010302(R) (2020).
LR Y. Ashida and M. Ueda, Phys. Rev. Lett. 120, 185301 (2018); N. Matsumoto, K. Kawabata, Y. Ashida, S. Furukawa, and M. Ueda, Phys. Rev. Lett. 125, 260601 (2020); B. Barch, N. Anand, J. Marshall, E. Rieffel, and P. Zanardi, arXiv:2305.12054.
entincrease S.-L. Chen, G.-Y. Chen, and Y.-N. Chen, Phys. Rev. A 90, 054301 (2014); A. K. Pati, arXiv:1404.6166; J.-W. Wen, C. Zheng, X.-Y. Kong, S.-J. Wei, T. Xin, and G.-L. Long, Phys. Rev. A 99, 062122 (2019).
unred Y. T. Tu, Y. C. Tzeng, and P. Y. Chang, SciPost Phys. 12, 194 (2022); M. Fossati, F. Ares, and P. Calabrese, Phys. Rev. B 107, 205153 (2023); C. T. Hsieh and P. Y. Chang, 6, 062 (2023).
hn N. Hatano and D. R. Nelson, Phys. Rev. Lett. 77, 570 (1996); ibid., Phys. Rev. B 56, 8651 (1997).
optbis R. Bonifacio and L. A. Lugiato, Phys. Rev. A 18, 1129 (1978); R. Bonifacio and L. A. Lugiato, Phys. Rev. Lett. 40, 1023 (1978); G. P. Agrawal and H. J. Carmichael, Phys. Rev. A 19, 2074 (1979); P. D. Drummond and D. F. Walls, J. Phys. A 13, 725 (1980); S. R. K. Rodriguez, W. Casteels, F. Storme, N. Carlon Zambon, I. Sagnes, L. Le Gratiet, E. Galopin, A. Lemaître, A. Amo, C. Ciuti, and J. Bloch, Phys. Rev. Lett. 118, 247402 (2017).
optbis2 R. R. Puri and S. V. Lawande, Phys. Lett. A 72, 200 (1979); S. V. Lawande, R. R. Puri, and S. S. Hassan, J. Phys. B 14, 4171 (1981).
comp1 K. G. Zloshchastiev and A. Sergi, J. Mod. Opt. 61, 1298 (2014).
physstate F. Minganti, A. Miranowicz, R. W. Chhajlany, and F. Nori, Phys. Rev. A 100, 062131 (2019).
hatanonelsonlindblad A. McDonald, R. Hanai, and A. A. Clerk, Phys. Rev. B 105, 064302 (2022).
nhlindblad1 S. Longhi, Phys. Rev. B 102, 201103(R) (2020).
nhlindblad2 F. Song, S. Yao, and Z. Wang, Phys. Rev. Lett. 123, 170401 (2019); C.-H. Liu, K. Zhang, Z. Yang, and S. Chen, Phys. Rev. Research 2, 043167 (2020); X. Li, M. A. Begaowe, S. Zhang, and B. Flebus, arXiv:2307.15792.
nhlindblad2b N. Okuma and M. Sato, Phys. Rev. B 103, 085428 (2021); F. Yang, Q.-D. Jiang, and E. J. Bergholtz, Phys. Rev. Research 4, 023160 (2022); Z. Zhou and Z. Yu, Phys. Rev. A 106, 032216 (2022).
nhlindblad4 F. Koch and J. C. Budich, Phys. Rev. Research 4, 013113 (2022).
sensor2 A. McDonald and A. A. Clerk, Nature Comm. 11, 5382 (2020); L. Bao, B. Qi, D. Dong, and F. Nori, Phys. Rev. A 103, 042418 (2021); L. Bao, B. Qi, F. Nori, and D. Dong, arXiv:2303.16575.
jumpperturb X. Niu, J. Li, S. L. Wu, and X. X. Yi, arXiv:2202.12591.
nc M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, (Cambridge University Press, 2000).
preskill J. Preskill, Lecture notes for physics 229: Quantum information and computation, California Institute of Technology 16, 1 (1998); V. Vedral, Introduction to Quantum Information Science, (Oxford University Press, 2006).
ingemar I. Bengtsson, Göran Lindblad in memoriam, arXiv:2307.10621.
lindblad G. Lindblad, Commun. Math. Phys. 48, 119 (1976).
bp H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems, (Oxford University Press, 2002).
lengineer S. Clark, A. Peng, M. Gu, and S. Parkins, Phys. Rev. Lett. 91, 177901 (2003); J. T. Barreiro, P. Schindler, O. Gühne, T. Monz, M. Chwalla, C. F. Roos, M. Hennrich, and R. Blatt, Nature Phys. 6, 943 (2010); J. Cho, S. Bose, and M. Kim, Phys. Rev. Lett. 106, 020504 (2011); S. Diehl, E. Rico, M. A. Baranov, and P. Zoller, Nature Phys. 7, 971 (2011); M. Müller, S. Diehl, G. Pupillo, P. Zoller, et al., Adv. At. Mol. Opt. Phys. 61, 1 (2012); P. Schindler, M. Müller, D. Nigg, J. T. Barreiro, E. A. Martinez, M. Hennrich, T. Monz, S. Diehl, P. Zoller, and R. Blatt, Nature Phys. 9, 361 (2013).
unik H. Spohn, Lett. Math. Phys. 2, 33 (1977); A. Frigerio, Commun. Math. Phys. 63, 269 (1978); D. Nigro, J. Stat. Mech. 4, 043202 (2019); H. Yoshida, arXiv:2309.00335.
albert V. V. Albert and L. Jiang, Phys. Rev. A 89, 022118 (2014).
patrik P. Hedvall and J. Larson, arXiv:1712.01560.
vec J. A. Gyamfi, Eur. J. Phys. 41, 063002 (2020).
sofia J. Larson and S. Qvarfort, Open Sys. Quant. Inf. 30, 2350008 (2023).
vk N. G. van Kampen, Stochastic processes in physics and chemistry, (Elsevier, 1992).
unraveling P. Alsing and H. J. Carmichael, Quant. Opt. 3, 13 (1991); J. Dalibard, Y. Castin, and K. Mølmer, Phys. Rev. Lett. 68, 580 (1992); A. J. Daley, Adv. Phys. 63, 77 (2014).
huelga A. Rivas and S. F. Huelga, Open Quantum Systems, (Springer, 2012).
nonneg E. M. Kessler, G. Giedke, A. Imamoglu, S. F. Yelin, M. D. Lukin, and J. I. Cirac, Phys. Rev. A 86, 012116 (2012); F. Minganti, A. Biella, N. Bartolo, and C. Ciuti, Phys. Rev. A 98, 042118 (2018).
lgap E. M. Kessler, G. Giedke, A. Imamoglu, S. F. Yelin, M. D. Lukin, and J. I. Cirac, Phys. Rev. A 86, 012116 (2012); F. Minganti, A. Biella, N. Bartolo, and C. Ciuti, Phys. Rev. A 98, 042118 (2018).
lgap2 T. Mori and T. Shirai, Phys. Rev. Lett. 125, 230604 (2020); T. Haga, M. Nakagawa, R. Hamazaki, and M. Ueda, Phys. Rev. Lett. 127, 070402 (2021).
lcrit W. Casteels, R. Fazio, and C. Ciuti, Phys. Rev. A 95, 012128 (2017); F. Vicentini, F. Minganti, R. Rota, G. Orso, and C. Ciuti, Phys. Rev. A 97, 013853 (2018); R. Rota, F. Minganti, C. Ciuti, and V. Savona, Phys. Rev. Lett. 122, 110405 (2019); M. Nakagawa, N. Kawakami, and M. Ueda, Phys. Rev. Lett. 126, 110404 (2021); G. T. Landi, D. Poletti, and G. Schaller, Rev. Mod. Phys. 94, 045006 (2022).
nons B. Buca, J. Tindall, and D. Jaksch, Nature Commun. 10, 1730 (2019).
ep W. D. Heiss, J. Phys. A: Math. Theor. 45, 444016 (2012).
rrdebate1 S. S. Roy, S. Bandyopadhyay, R. C. de Almeida, and P. Hauke, arXiv:2309.00049.
nogo C. Y. Ju, A. Miranowicz, G. Y. Chen, and F. Nori, Phys. Rev. A 100, 062118 (2019); S. S. Bhosale, B. Rath, and P. K. Panigrahi, Quant. Rep. 3, 417 (2021); D. C. Brody, J. Phys. A: Math. Theor. 49, 10LT03 (2016).
el2 F. K. Kunst, E. Edvardsson, J. C. Budich, and E. J. Bergholtz, Phys. Rev. Lett. 121, 026808 (2018).
elisabet F. G. Scholtz, H. B. Geyer, and F. J. W. Hahne, Ann. Phys. 213, 74 (1992); A. Mostafazadeh, J. Math. Phys. 44, 974 (2003); E. Edvardsson, J. L. K. König, and M. Stålhammar, arXiv:2212.06004.
com1 Motivating NH QM as emerging from a more complete description can be done in various ways, and starting from the LME is only one of them. This is, however, the one approach we discuss in this paper.
lspec J. M. Torres, Phys. Rev. A 89, 052133 (2014).
prosen T. Prosen, New J. Phys. 10, 043026 (2008).
hn2 N. Hatano and D. R. Nelson, Phys. Rev. B 58, 8384 (1998).
euclid A. B. Klimov and S. M. Chumakov, A Group-theoretical Approach to Quantum Optics: Models of Atom-field Interactions, (John Wiley and Sons, 2009).
hnspec F. Roccati, Phys. Rev. A 104, 022215 (2021).
coldatom M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen, and U. Sen, Adv. Phys. 56, 243 (2007); I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008).
trapsetup Z. Wang, Y. Lu, Y. Peng, R. Qi, Y. Wang, and J. Jie, Phys. Rev. B 108, 054313 (2023).
adel C. Gardiner and P. Zoller, The quantum world of ultra-cold atoms and light book II: the physics of quantum-optical devices, (World Scientific Publishing, 2015); J. Larson and T. K. Mavrogordatos, The Jaynes-Cummings model and its descendants, (IOP Publishing, 2022).
sensor3 W. Chen, S. Kaya Özdemir, G. Zhao, J. Wiersig, and L. Yang, Nature 548, 192 (2017); H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia-Gracia, R. El-Ganainy, D. N. Christodoulides, and M. Khajavikhan, Nature 548, 187 (2017); H.-K. Lau and A. A. Clerk, Nature Comm. 9, 4320 (2018); M. Zhang, W. Sweeney, C. W. Hsu, L. Yang, A. D. Stone, and L. Jiang, Phys. Rev. Lett. 123, 180501 (2019); M. Parto, Y. G. Liu, B. Bahari, M. Khajavikhan, and D. N. Christodoulides, Nanophot. 10, 403 (2020); S. Soleymani, Q. Zhong, M. Mokim, S. Rotter, R. El-Ganainy, and S. K. Özdemir, Nature Comm. 13, 599 (2022).
cardy J. Cardy, Scaling and renormalization in statistical physics, (Cambridge University Press, 1996).
stirap P. Ivanov, N. Vitanov, and K. Bergmann, Phys. Rev. A 70, 063409 (2004); M. S. Sarandy and D. A. Lidar, Phys. Rev. Lett. 95, 250503 (2005); T. Mathisen and J. Larson, Entropy 20, 20 (2018).
pil H. Cai and D.-W. Wang, National Sci. Rev. 8, nwaa196 (2021); P. M. Saugmann and J. Larson, arXiv:2203.13813.
optbiscrit1 D. F. Walls, P. D. Drummond, S. S. Hassan, and H. J. Carmichael, Prog. Theor. Phys. 64, 307 (1978); P. D. Drummond and H. J. Carmichael, Opt. Commun. 27, 160 (1978); P. D. Drummond, Phys. Rev. A 22, 1179 (1980); S. Schneider and G. J. Milburn, Phys. Rev. A 65, 042107 (2002); A. Dombi, A. Vukics, and P. Domokos, J. Phys. B 46, 224010 (2013); J. J. Mendoza-Arenas, S. R. Clark, S. Felicetti, G. Romero, E. Solano, D. G. Angelakis, and D. Jaksch, Phys. Rev. A 93, 023821 (2016); R. M. Wilson, K. W. Mahmud, A. Hu, A. V. Gorshkov, M. Hafezi, and M. Foss-Feig, Phys. Rev. A 94, 033801 (2016); W. Casteels, R. Fazio, and C. Ciuti, Phys. Rev. A 95, 012128 (2017); M. Foss-Feig, P. Niroula, J. T. Young, M. Hafezi, A. V. Gorshkov, R. M. Wilson, and M. F. Maghrebi, Phys. Rev. A 95, 043826 (2017).
optbiscrit2 J. Hannukainen and J. Larson, Phys. Rev. A 98, 042113 (2018).
qd M. Aizenman and J. Wehr, Phys. Rev. Lett. 62, 2503 (1989).
potts B. Nienhuis, A. N. Berker, E. K. Riedel, and M. Schick, Phys. Rev. Lett. 43, 737 (1979); E. Domany, M. Schick, and R. H. Swendsen, Phys. Rev. Lett. 52, 1535 (1984).
fermisea T. Senthil, M. Vojta, and S. Sachdev, Phys. Rev. B 69, 035111 (2004); Q. Si and F. Steglich, Science 329, 1161 (2010); N. Maksimovic et al., Science 375, 76 (2022).
com2 The degree of localization in the Liouvillian Fock state lattice can be calculated from the inverse participation ratio, e.g. IPR_j=1/∑_l|⟨⟨ l|φ_j^R⟩⟩|^4, where |l⟩⟩ is a Liouvillian Fock state and |φ_j^R⟩⟩ is the normalized j'th right eigenvector of the Liouvillian. One finds (not shown in this paper) that all eigenvectors are highly localized for γ>γ_c, and much less localized for γ<γ_c (even though the limiting value IPR_j=N^2 is never reached, even in the delocalized phase).
com3 Photon losses of the laser resonator can be mimicked by a Lindblad jump operator L̂=â, where â is the photon annihilation operator of the light mode. Since a coherent state is an eigenstate of this operator, its action on the state does not alter it; â|α⟩=α|α⟩. As soon as the state is not a coherent one, the presence of fluctuations will influence it.
postselect M. Naghiloo, M. Abbasi, Y. N. Joglekar, and K. W. Murch, Nature Phys. 15, 1232 (2019); G. L. Zhang, D. Liu, and H. M. Yung, Scientific Rep. 11, 13795 (2021); A. Quinn, J. Metzner, J. E. Muldoon, I. D. Moore, S. Brudney, S. Das, D. T. C. Allcock, and Y. N. Joglekar, arXiv:2304.12413; S. Erdamar, M. Abbasi, B. Ha, W. Chen, J. Muldoon, Y. Joglekar, and K. W. Murch, arXiv:2309.12393.
unravel P. Alsing and H. J. Carmichael, J. Euro. Opt. Soc. B 3, 13 (1991); J. Dalibard, Y. Castin, and K. Mølmer, Phys. Rev. Lett. 68, 580 (1992).
lsym V. V. Albert and L. Jiang, Phys. Rev. A 89, 022118 (2014).
ssh S. Lieu, Phys. Rev. B 97, 045106 (2018); S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018).
{ "authors": [ "Clement Ehrhardt", "Jonas Larson" ], "categories": [ "quant-ph", "cond-mat.quant-gas" ], "primary_category": "quant-ph", "published": "20231027164806", "title": "Exploring the impact of fluctuation-induced criticality on non-hermitian skin effect and quantum sensors" }